Thoughts on Fiduciary AI

by Sebastian Benthall

“Designing Fiduciary Artificial Intelligence”, by David Shekman and me, is now on arXiv. We’re very excited to have had it accepted to Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) ’23, a conference I’ve heard great things about. I hope the work speaks for itself. But I wanted to think “out loud” for a moment about how that paper fits into my broader research arc.

I’ve been working in the technology and AI ethics space for several years, and this project sits at the intersection of several trends running through that space:

  • AI alignment with human values and interests as a way of improving the safety of powerful systems, coming largely out of AI research institutes like UC Berkeley’s CHAI and, increasingly, industry labs like OpenAI and DeepMind.
  • Information fiduciary and data loyalty proposals, coming out of “privacy” scholarship. The idea originates with Jack Balkin, is best articulated by Richards and Hartzog, and has been taken up by Lina Khan, Julie Cohen, James Grimmelmann, and others. Its strongest legal manifestation so far is probably the E.U.’s Data Governance Act, which comes into effect this year.
  • Contextual Integrity (CI), the theory of technology ethics as contextually appropriate information flow, originating with Helen Nissenbaum. In CI, norms of information flow are legitimized by a social context’s purpose and the ends of those participating in it (a toy sketch follows just after this list).
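To make the CI framing concrete, here is a minimal sketch of how a contextual norm might be checked against an information flow. The five-parameter description of a flow (sender, recipient, subject, information type, transmission principle) is standard in the CI literature; the encoding, the healthcare example, and every name in the code are my own hypothetical illustration, not anything from our paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow, described by CI's standard parameters."""
    context: str     # the social context, e.g. "healthcare"
    sender: str      # role of the party sending the information
    recipient: str   # role of the party receiving it
    subject: str     # whom the information is about
    info_type: str   # what kind of information, e.g. "diagnosis"
    principle: str   # transmission principle, e.g. "with patient consent"

# Toy encoding: a context's norms are simply its permitted flow patterns.
NORMS = {
    Flow("healthcare", "physician", "specialist", "patient",
         "diagnosis", "with patient consent"),
}

def appropriate(flow: Flow) -> bool:
    """A flow is contextually appropriate if it matches a norm of its context."""
    return flow in NORMS

# A flow that diverts patient data to an advertiser violates the norms.
leak = Flow("healthcare", "physician", "advertiser", "patient",
            "diagnosis", "for marketing")
print(appropriate(leak))  # False
```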

The key intuition is that these three ideas all converge on the same problem: designing a system to act in the best interests of a designated group of beneficiaries within its operational context. Once this common point is recognized, it’s easy to connect the dots between many lines of literature and to identify where the open problems are.
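As a rough illustration of that shared structure, here is a toy sketch, entirely my own and not the paper’s method, of the difference it makes: a loyal system scores candidate actions by the beneficiary’s utility rather than the operator’s, even where the two diverge. All names and numbers below are hypothetical.

```python
# Two stand-in utility functions whose interests diverge on one action.
ACTIONS = ["recommend_helpful", "recommend_addictive"]

def beneficiary_utility(action: str) -> float:
    """Stand-in for the designated beneficiary's interests."""
    return {"recommend_helpful": 1.0, "recommend_addictive": -0.5}[action]

def operator_utility(action: str) -> float:
    """Stand-in for the operator's interests (e.g. engagement revenue)."""
    return {"recommend_helpful": 0.2, "recommend_addictive": 1.0}[action]

def loyal_choice(actions: list[str]) -> str:
    """A fiduciary-style objective: act in the beneficiary's best interest."""
    return max(actions, key=beneficiary_utility)

def self_interested_choice(actions: list[str]) -> str:
    """The objective a non-fiduciary operator might optimize instead."""
    return max(actions, key=operator_utility)

print(loyal_choice(ACTIONS))            # recommend_helpful
print(self_interested_choice(ACTIONS))  # recommend_addictive
```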

The recurring “hard part” in all of this is framing the AI alignment problem clearly in terms of the duties of legally responsible actors, while still acknowledging that complying with those duties will increasingly be a matter of technical design. The computer science literature has a disciplinary tendency to formalize ethical concepts and translate them into technical requirements, but it is largely disconnected from the liability implications for a company that deploys AI; for obvious reasons, industry actors rarely make this connection clear, opting instead to publicize their ‘ethics’. Legal scholars, on the other hand, are quick to point out “ethics washing”, but tend to want to define regulations as broadly as possible in order to cover a wide range of technical specifics. The more extreme critical legal scholars in this space are skeptical of any technical effort to guarantee compliance, which leaves technical actors with little breathing room or guidance. So these fields often talk past each other.

Fiduciary duties outside the technical context are not controversial. They are in many ways the bedrock of our legal and economic system; no lawyer, corporate director, or shareholder could deny that with a straight face. There is no hidden political agenda in fiduciary duties per se. So as a way to get everybody on the same page about duties and beneficiaries, I think they work.

What is inherently political is whether and how fiduciary duties should be expanded to cover new categories of data technology and AI. We were deliberately agnostic on this point in our recent paper, because the work of the paper is to connect the legal and technical dots for fiduciary AI more broadly. However, at a time when many actors are calling for more AI and data protection regulation, fiduciary duties are one important option that directly addresses the spirit of many people’s concerns.

My hope is that future work will elaborate on how AI can comply with fiduciary duties in practice, and in doing so show what the consequences of fiduciary AI policies would be. As far as I know, there is no cost-benefit analysis (CBA) yet of passing data loyalty regulations. If the costs to industry actors were sufficiently light, and the benefits to the public sufficiently high, such regulations might settle what is otherwise an alarming policy issue.