The economy of responsibility and credit in ethical AI; also, shameless self-promotion

by Sebastian Benthall

Serious discussions about ethics and AI can be difficult because, at best, most people are trained in either ethics or AI, but not both. This leads to a lot of confusion, as much of the debate winds up being about who should take responsibility and credit for making the hard decisions.

Here are some of the flavors of outcomes of AI ethics discussions. Without even getting into the specifics of the content, each position serves a different constituency, despite all coming under the heading of “AI Ethics”.

  • Technical practitioners getting together to decide a set of professional standards by which to self-regulate their use of AI.
  • Ethicists getting together to decide a set of professional standards by which to regulate the practices of technical people building AI.
  • Computer scientists getting together to come up with a set of technical standards to be used in the implementation of autonomous AI so that the latter performs ethically.
  • Ethicists getting together to come up with ethical positions with which to critique the implementations of AI.

Let’s pretend for a moment that the categories used here of “computer scientists” and “ethicists” are valid ones. I’m channeling the zeitgeist here. The core motivation of “ethics in AI” is the concern that the AI that gets made will be bad or unethical for some reason. This is rumored to be because there are people who know how to create AI–the technical practitioners–who are not thinking through the ethical consequences of their work. There are supposed to be some people who are authorities on what outcomes are good and bad; I’m calling these ‘ethicists’, though I include sociologists of science and lawyers claiming an ethical authority in that term.

What are the dimensions along which these positions vary?

What is the object of the prescription? Are technical professionals having their behavior prescribed? Or is it the specification of the machine that’s being prescribed?

Who is creating the prescription? Is it “technical people” like programmers and computer scientists, or is it people ‘trained in ethics’ like lawyers and sociologists?

When is the judgment being made? Is the judgment being made before the AI system is being created as part of its production process, or is it happening after the fact when it goes live?

These dimensions are not independent of each other, and in fact it’s their interdependence that makes the problem of AI ethics politically challenging. In general, people would like to pass responsibility on to others and take credit for themselves. Technicians love to pass responsibility to their machines–“the algorithm did it!” Ethicists love to pass responsibility to technicians. In one view of the ideal world, ethicists would come up with a set of prescriptions, technologists would follow them, and nobody would have any ethical problems with the implementations of AI.

This would entail, more or less, that ethical requirements have been internalized into either technical design processes, engineering principles, or even mathematical specifications. This would probably be great for society as a whole. But the more ethical principles get translated into something that’s useful for engineers, the less ethicists can take credit for good technical outcomes. Some technical person has gotten into the loop and solved the problem. They get the credit, except that they are largely anonymous, and so it is the product, the AI system, that gets credit for being reliable and trustworthy. The more AI products are reliable, trustworthy, and good, the less credible are the concerns of the ethicists, whose whole raison d’être is to prevent the uninformed technologists from doing bad things.

The temptation for ethicists, then, is to sit safely where they can critique after the fact. Ethicists can write for the public condemning evil technologists without ever getting their hands dirty with the problems of implementation. There’s an audience for this and it’s a stable strategy for ethicists, but it’s not very good for society. It winds up putting public pressure on technologists to solve the problem themselves through professional self-regulation or technical specification. If they succeed, then the ethicists don’t have anything to critique, and so it is in the interest of ethicists to cast doubt on these self-regulation efforts without ever contributing to their success. Ethicists have the tricky job of pointing out that technologists are not listening to ethicists, and are therefore suspect, without ever engaging with technologists in such a way that would allow them to arrive at a bona fide ethical technical solution. This is, one must admit, not a very ethical thing to do.

There are exceptions to this bleak and cynical picture!

In fact, yours truly is an exception to this bleak and cynical picture, along with my brilliant co-authors Seda Gürses and Helen Nissenbaum! If you would like to see an honest attempt at translating ethics into computer science so that AI can be more ethical, look no further than:

Sebastian Benthall, Seda Gürses and Helen Nissenbaum (2017), “Contextual Integrity through the Lens of Computer Science”, Foundations and Trends® in Privacy and Security: Vol. 2: No. 1, pp 1-69. http://dx.doi.org/10.1561/3300000016

Contextual Integrity is an ethical framework. I’d go so far as to say that it’s a meta-ethical framework, as it provides a theory of where ethical norms come from and why they are important. It’s a theory developed by the esteemed ethicist and friend-of-computer-science Helen Nissenbaum.

In this paper, which you should definitely read, two researchers team up with Helen Nissenbaum to review all the computer science papers we could find that reference Contextual Integrity. One of those researchers is Seda Gürses, a computer scientist with a deep background in privacy and security engineering. You essentially can’t find two researchers more credible than Helen and Seda, paired up, on the topic of how to engineer privacy (which is a subset of ethics).

I am also a co-author of this paper. You can certainly find more credible researchers on this topic than myself, but I have the enormous good fortune to have worked with such profoundly wise and respectable collaborators.

Probably the best part about this paper, in my view, is that we’ve managed to write a paper about ethics and computer science (and indeed, AI is a subset of what we are talking about in the paper) that honestly tries to grapple with the technical challenges of designing ethical systems, while also contending with all the sociological complications of what ethics is. There’s a whole section where we refuse to let computer scientists off the hook from dealing with how norms (and therefore ethics) are the result of a situated and historical process of social adaptation. But then there’s a whole other section where we talk about how developing AI that copes responsibly with this situated and historical process of social adaptation is an open research problem in privacy engineering! There’s truly something for everybody!
