Digifesto


What about the loyalty of AI agents in government function?

For the private sector, there is a well-developed theory of loyalty that shows up in fiduciary duties. I and others have argued that these duties of loyalty are the right tool to bring artificial intelligence systems (and the companies that produce them) into “alignment” with those that employ them.

At the time of this writing (the early days of the second Trump administration), there appears to be a movement within the federal government to replace many human bureaucratic functions with AI.

Oddly, it doesn’t seem that government workers have as well-defined (or well-enforced) a sense of who or what they are loyal to as fiduciaries in the private sector. This makes aligning these AI systems even more challenging.

Whereas a doctor or lawyer or trustee can be expected to act in the best interest of their principal, a government worker might be loyal to the state broadly speaking, or to their party, or to their boss, or to their own career. These nuances have long been chalked up to “politics”. But the fissures in the U.S. federal government currently are largely about these divisions of loyalty, and the controversies are largely about conflicts of interest that would be forbidden in a private fiduciary context.

So, while we might wish that a democratically elected government have an affirmative obligation of loyalty and care towards, perhaps, the electorate, with subsidiary duties of confidentiality, disclosure, and so on, that is not legally the case. Instead, there’s a much more complex patchwork of duties and powers, and a set of checks and balances that increasingly resembles the anarchic state of international relations.

The “oath of office” is perhaps an expression or commitment of loyalty that federal government workers could be held to. As far as I know, this is never directly legally enforced.

This inherent ambiguity of loyalty, together with the further complication that government AI will in most cases be produced by third parties and procured rather than hired, makes AI alignment especially fraught.

Political theories and AI

Through a few emerging projects and opportunities, I’ve had reason to circle back to the topic of Artificial Intelligence and ethics. I wanted to jot down a few notes, as some recent reading and conversations have been clarifying some ideas here.

In my work with Jake Goldenfein on this topic (published 2021), we framed the ethical problem of AI in terms of its challenge to liberalism, which we characterize in terms of individual rights (namely, property and privacy rights), a theory of why the free public market makes the guarantees of these rights sufficient for many social goods, and a more recent progressive or egalitarian tendency. We then discuss how AI technologies challenge liberalism and require us to think about post-liberal configurations of society and computation.

A natural reaction to this paper, especially given the political climate in the United States, is “aren’t the alternatives to liberalism even worse?” and it’s true that we do not in that paper outline an alternative to liberalism which a world with AI might aspire to.

John Mearsheimer’s The Great Delusion: Liberal Dreams and International Realities (2018) is a clearly written treatise on political theory. Mearsheimer rose to infamy in 2022, after the Russian invasion of Ukraine, because of widely circulated videos of a 2015 lecture in which he argued that the fault for Russia’s 2014 invasion of Crimea lay with U.S. foreign policy. It is because of that infamy that I decided to read The Great Delusion, which was a Financial Times Best Book of 2018. The Financial Times editorials have since turned on Mearsheimer. We’ll see what they say about him in another four years. However politically unpopular he may be, I found his points interesting and decided to look at his more scholarly work. I have not been disappointed: he articulates political philosophy clearly, and I will use these articulations here. I won’t analyze his international relations theory in this post.

Putting Mearsheimer’s international relations theories entirely aside for now, I’ve been pleased to find The Great Delusion to be a thorough treatise on political theory, and it goes to some length in Chapter 3 to describe liberalism as a political theory (which is the book’s target). Mearsheimer distinguishes between four different political ideologies, citing many of their key intellectual proponents.

  • Modus vivendi liberalism. (Locke, Smith, Hayek) A theory committed to individual negative rights, such as private property and privacy, against the impositions by the state. The state should be minimal, a “night watchman”. This can involve skepticism about the ability of reason to achieve consensus about the nature of the good life; political toleration of differences is implied by the guarantee of negative rights.
  • Progressive liberalism. (Rawls) A theory committed to individual rights, including both negative rights and positive rights, which can be in tension. An example positive right is equal opportunity, which requires interventions by the state in order to guarantee. So the state must play a stronger role. Progressive liberalism involves more faith in reason to achieve consensus about the good life, as progressivism is a positive moral view imposed on others.
  • Utilitarianism. (Bentham, Mill) A theory committed to the greatest happiness for the greatest number. Not committed to individual rights, and therefore not a liberalism per se. Utilitarian analysis can argue for tradeoffs of rights to achieve greater happiness, and is collectivist, not individualist, in the sense that it is concerned with utility in aggregate.
  • Liberal idealism. (Hobson, Dewey) A theory committed to the realization of an ideal society as an organic unity of functioning subsystems. Not committed to individual rights primarily, so not a liberalism, though individual rights can be justified on idealist grounds. Influenced by Hegelian views about the unity of the state. Sometimes connected to a positive view of nationalism.

This is a highly useful breakdown of ideas, which we can bring back to discussions of AI ethics.

Jake Goldenfein and I wrote about ‘liberalism’ in a way that, I’m glad to say, is consistent with Mearsheimer. We too identify right- and left-wing strands of liberalism. I believe our argument about AI’s challenge to liberal assumptions still holds water.

Utilitarianism is the foundation of one of the most prominent versions of AI ethics today: Effective Altruism. Much has been written about Effective Altruism and its relationship to AI Safety research. I have expressed some thoughts. Suffice it to say here that there is a utilitarian argument that ‘ethics’ should be about prioritizing the prevention of existential risk to humanity, because existential catastrophe would prevent the high-utility outcome of humanity-as-joyous-galaxy-colonizers. AI is seen, for various reasons, to be a potential source of catastrophic risk, and so AI ethics is about preventing these outcomes. Not everybody agrees with this view.

For now, it’s worth mentioning that there is a connection between liberalism and utilitarianism through theories of economics. While some liberals are committed to individual rights for their own sake, or because of negative views about the possibility of rational agreement about more positive political claims, others have argued that negative rights and lack of government intervention lead to better collective outcomes. Neoclassical economics has produced theories and ‘proofs’ to this effect, which rely on mathematical utility theory, which is a successor to philosophical utilitarianism in some respects.

It is also the case that a great deal of AI technology and technical practice is oriented around the vaguely utilitarian goals of ‘utility maximization’, though this is more about the mathematical operationalization of instrumental reason and less about a social commitment to utility as a political goal. AI practice and neoclassical economics are quite aligned in this way. If I were to put the point precisely, I’d say that the reality of AI, by exposing bounded rationality and its role in society, shows that arguments that negative rights are sufficient for utility-maximizing outcomes are naive, and so are a disappointment for liberals.
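For concreteness, here is what that mathematical operationalization amounts to in its simplest form. This is a toy sketch of my own, with made-up utilities and beliefs, not a description of any particular AI system: an agent picks the action with the highest expected utility under its belief about the state of the world.

```python
# Toy expected-utility maximization; the numbers are invented for illustration.
import numpy as np

# Rows are actions, columns are possible states of the world.
utilities = np.array([
    [10.0, -5.0],   # a risky action: great in state 0, bad in state 1
    [ 2.0,  2.0],   # a safe action: the same either way
])
belief = np.array([0.4, 0.6])        # subjective probabilities over states

expected_utility = utilities @ belief
best_action = int(np.argmax(expected_utility))
print(expected_utility, "-> choose action", best_action)
# [1. 2.] -> choose action 1 (the safe one, under this belief)
```

Nothing in this operationalization says whose utility is being maximized or where the beliefs come from; that is precisely where the political questions enter.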

I was pleased that Mearsheimer brought up what he calls ‘liberal idealism’ in his book, despite it being perhaps a digression from his broader points. I have wondered how to place my own work, which draws heavily on Helen Nissenbaum’s theory of Contextual Integrity (CI), which is in turn heavily influenced by the work of Michael Walzer. CI is based on a view of society as composed of separable spheres, with distinct functions and internally meaningful social goods, which should not be directly exchanged or compared. Walzer has been called a communitarian. I suggest that CI might be best seen as a variation of liberal idealism, in that it orients ethics towards a view of society as an idealized organic unity.

If the present reality of AI is so disappointing, then we must try to imagine a better ideal, and work our way towards it. I’ve found myself reading more and more work, such as by Felix Adler and Alain Badiou, that advocates for the need for an ideal model of society. What we are currently missing is a good computational model of such a society, one that could do for idealism what neoclassical economics did for liberalism: namely, provide a blueprint for a policy and science of its realization. If we were to apply AI to the problem of ethics, it would be good to use it this way.

Practical social forecasting

I was once long ago asked to write a review of Philip Tetlock’s Expert Political Judgment: How Good Is It? How Can We Know? (2006) and was, like a lot of people, very impressed. If you’re not familiar with the book, the gist is that Tetlock, a psychologist, runs a 20 year study asking everybody who could plausibly be called a “political expert” to predict future events, and then scores them using a very reasonable Bayesian scoring system. He then searches the data for insights about what makes for good political forecasting ability. He finds it to be quite rare, but correlated with humbler and more flexible styles of thinking. Tetlock has gone on to pursue and publish about this line of research. There are now forecasting competitions, and the book Superforecasting. Tetlock has a following.

What caught my attention in the original book, though it was somewhat downplayed in the research program as a whole, is that rather simple statistical models, with two or three regressed variables, performed very well in comparison to even the best human experts. In a Bayesian sense, they were at least as good as the best people. These simple models tended towards guessing something close to the base rate of an event, whereas even the best humans tended to believe their own case-specific reasoning somewhat more than they perhaps should have.
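Here is a toy illustration of that pattern, my own and not Tetlock’s data or actual scoring protocol: under the Brier score (mean squared error between forecast probability and outcome), a forecaster who always states the base rate beats an ‘expert’ who leans the right way more often than not but states overconfident probabilities.

```python
# Toy comparison of a base-rate forecaster and an overconfident expert.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
base_rate = 0.2                                  # true frequency of the event
outcomes = rng.random(10_000) < base_rate        # simulated event outcomes

def brier(forecasts, outcomes):
    # Mean squared difference between forecast probability and the 0/1 outcome.
    return np.mean((forecasts - outcomes) ** 2)

# The "simple statistical model": always forecast the base rate.
simple = np.full(outcomes.shape, base_rate)

# The "expert": leans the right way 60% of the time, but states extreme
# probabilities on the strength of case-specific reasoning.
right_way = rng.random(outcomes.size) < 0.6
expert = np.where(right_way,
                  np.where(outcomes, 0.90, 0.05),   # confidently correct lean
                  np.where(outcomes, 0.05, 0.90))   # confidently wrong lean

print("base-rate Brier score:", brier(simple, outcomes))   # ~0.16 (lower is better)
print("expert Brier score:   ", brier(expert, outcomes))   # ~0.33
```

The expert’s occasional confident misses cost more than the information gained from leaning the right way, which is the flavor, if not the substance, of Tetlock’s finding.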

This could be seen as a manifestation of the “bias/variance tradeoff” in (machine and other) learning. A learning system must either have a lot of concentration in the probability mass of its prior (bias) or it must spread this mass quite thin (variance). Roughly, a learning system is a good one for its context if, and maybe only if, its prior is a good enough fit for the environment that it’s in. There’s no free lunch. So the only way to improve social scientific forecasting is to encode more domain specific knowledge into the learning system. Or so I thought until recently.

For the past few years I have been working on computational economics tools that enable modelers to imagine and test theories about the dynamics behind our economic observations. This is a rather challenging and rewarding field to work in, especially right now, when the field of Economics is rapidly absorbing new ideas from computer science and statistics. Last August, I had the privilege to attend a summer school and conference on the theme of “Deep Learning for Solving and Estimating Dynamic Models”, put on by the Econometric Society DSE Summer School. It was awesome.

The biggest, least subtle takeaway from the summer school and conference is that deep learning is going to be a big deal for Economics, because these techniques make it feasible to solve and estimate models with much higher dimensionality than has been possible with prior methods. By “solve”, I mean determining, for a given model of a bunch of agents interacting with each other (through, for example, a market) with some notion of their own reward structure, what the equilibrium dynamics of that system are. Solving these kinds of stochastic dynamic control problems, especially when there is nontrivial endogenous aggregation of agent behavior, is computationally quite difficult. But there are cool ways of encoding the equilibrium conditions of the model, or the optimality conditions of the agents involved, into the loss function of a neural network, so that the deep learning training architecture works as a model solver. By “estimate”, I mean identifying, for a given model, the parameterization that produces results that make some empirical calibration targets maximally likely.
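To make the “solver” idea concrete, here is a minimal sketch of encoding an optimality condition into a loss function. It is my own toy, not one of the summer school’s models: a deterministic cake-eating problem with log utility, whose known solution is c(w) = (1 − β)w, so the trained network can be checked against it. Real applications add stochastic shocks, many state variables, and boundary and transversality conditions.

```python
# Train a neural network policy to satisfy the Euler equation of a toy
# cake-eating problem. With log utility, the Euler equation is
# 1/c_t = beta / c_{t+1}, and the known optimal policy is c(w) = (1 - beta) * w.
import torch

torch.manual_seed(0)
beta = 0.95

# The policy network maps wealth w to a consumption share in (0, 1).
policy = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1), torch.nn.Sigmoid(),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(5000):
    w = torch.rand(256, 1) * 10 + 0.1        # sample wealth levels
    c = policy(w) * w                        # consumption today
    w_next = w - c                           # remaining cake tomorrow
    c_next = policy(w_next) * w_next         # consumption tomorrow
    residual = 1.0 / c - beta / c_next       # Euler equation residual
    loss = (residual ** 2).mean()            # optimality condition as a loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, consumption at w = 5 should be close to (1 - beta) * 5 = 0.25.
with torch.no_grad():
    print(policy(torch.tensor([[5.0]])) * 5.0)
```

The squared Euler residual plays the role of a supervised loss, with no labeled “solution” data required; that is the trick that scales to much higher-dimensional models.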

But perhaps more foundationally exciting than those (very impressive) results was the work demonstrating some practical consequences of the double descent phenomenon in deep learning.

Double descent has been discussed, I guess, since 2018 but it has only recently gotten on my radar. It explains a lot about how and why deep learning has blown so many prior machine learning results out of the water. The core idea is that when a neural network is overparameterized — has so many degrees of freedom that, when trained, it can entirely interpolate (reproduce) the training data — it begins to perform better than any underparameterized model.
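Here is a minimal sketch of that claim in a setting small enough to run in a few seconds. It is my own illustration, not an experiment from the papers cited below: random ReLU features fit by minimum-norm least squares, where test error typically spikes at the interpolation threshold (number of features roughly equal to the number of training points) and then falls again as the parameter count keeps growing.

```python
# Double descent with random ReLU features and minimum-norm least squares.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 50, 500
x_train = rng.uniform(-1, 1, n_train)
x_test = rng.uniform(-1, 1, n_test)
f = lambda x: np.sin(4 * x)                          # ground-truth function
y_train = f(x_train) + 0.1 * rng.standard_normal(n_train)
y_test = f(x_test)

def relu_features(x, W, b):
    # A single layer of fixed random ReLU units; only the output weights are fit.
    return np.maximum(0.0, np.outer(x, W) + b)

for n_features in [5, 20, 45, 50, 55, 100, 500, 2000]:
    W = rng.standard_normal(n_features)
    b = rng.standard_normal(n_features)
    Phi_train = relu_features(x_train, W, b)
    Phi_test = relu_features(x_test, W, b)
    # lstsq returns the minimum-norm solution when n_features > n_train,
    # i.e. when the model has enough parameters to interpolate the training set.
    coef, *_ = np.linalg.lstsq(Phi_train, y_train, rcond=None)
    test_mse = np.mean((Phi_test @ coef - y_test) ** 2)
    print(f"features={n_features:5d}  test MSE={test_mse:.3f}")
# Test error typically peaks near the interpolation threshold (around 50
# features here) and comes back down in the heavily overparameterized regime.
```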

The underlying reasons for this are deep and somewhat mysterious. I have an intuition about it that I’m not sure checks out properly mathematically, but I will jot it down here anyway. There are some results suggesting that an infinitely parameterized neural network, of a certain kind, is equivalent to a Gaussian Process: a collection of random variables such that any finite collection of them has a multivariate normal distribution. If the best model that we can ever train is an ever larger and more complex Gaussian process, then this suggests that the Central Limit Theorem is once again the rule that explains the world as we see it, but in a far more textured and interesting way than is obvious. The problem with the Central Limit Theorem and normal distributions is that they are not explainable: the explanation for the phenomenon is always a plethora of tiny factors, none of which is sufficient individually. And yet, because it is a foundational mathematical rule, it is always available as an explanation for any phenomenon we can experience. A perfect null hypothesis. Which turns out to be the best forecasting tool available?
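For what it’s worth, the width-to-Gaussian part of this intuition can be checked numerically. The sketch below is a toy check of my own, not a derivation: it samples the output of random one-hidden-layer ReLU networks at a single fixed input and watches the output distribution become more Gaussian (excess kurtosis near zero) as the width grows.

```python
# Outputs of wide random networks at a fixed input approach a normal distribution.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])                    # one fixed input point

def random_net_output(width):
    W1 = rng.standard_normal((width, x.size))
    b1 = rng.standard_normal(width)
    W2 = rng.standard_normal(width) / np.sqrt(width)   # 1/sqrt(width) scaling
    return W2 @ np.maximum(0.0, W1 @ x + b1)           # random ReLU network

for width in [1, 10, 1000]:
    samples = np.array([random_net_output(width) for _ in range(20_000)])
    z = (samples - samples.mean()) / samples.std()
    skew = np.mean(z ** 3)
    excess_kurtosis = np.mean(z ** 4) - 3.0
    print(f"width={width:5d}  skew={skew:+.2f}  excess kurtosis={excess_kurtosis:+.2f}")
# Narrow networks give visibly non-Gaussian outputs (large excess kurtosis);
# wide ones look Gaussian, consistent with the neural-network/GP results.
```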

It’s humbling material to work with, in any case.

References

Azinovic, M., Gaegauf, L., & Scheidegger, S. (2019, May 24). Deep equilibrium nets. Available at SSRN: https://ssrn.com/abstract=3393482 or http://dx.doi.org/10.2139/ssrn.3393482

Kelly, B. T., Malamud, S., & Zhou, K. (2021, December 13). The virtue of complexity in return prediction. Swiss Finance Institute Research Paper No. 21-90; Journal of Finance, forthcoming. Available at SSRN: https://ssrn.com/abstract=3984925 or http://dx.doi.org/10.2139/ssrn.3984925

Nakkiran, P., Kaplun, G., Bansal, Y., Yang, T., Barak, B., & Sutskever, I. (2021). Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021(12), 124003.

Thoughts on Fiduciary AI

“Designing Fiduciary Artificial Intelligence”, by myself and David Shekman, is now on arXiv. We’re very excited to have had it accepted to Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) ’23, a conference I’ve heard great things about. I hope the work speaks for itself. But I wanted to think “out loud” for a moment about how that paper fits into my broader research arc.

I’ve been working in the technology and AI ethics space for several years, and this project sits at the intersection of what I see as several trends through that space:

  • AI alignment with human values and interests as a way of improving the safety of powerful systems, largely coming out of AI research institutes like UC Berkeley’s CHAI and, increasingly, industry labs like OpenAI and DeepMind.
  • Information fiduciary and data loyalty proposals, coming out of “privacy” scholarship. This originates with Jack Balkin, is best articulated by Richards and Hartzog, and has been intellectually engaged by Lina Khan, Julie Cohen, James Grimmelmann, and others. Its strongest legal manifestation so far is probably the E.U.’s Data Governance Act, which comes into effect this year.
  • Contextual Integrity (CI), the theory of technology ethics as contextually appropriate information flow, originating with Helen Nissenbaum. In CI, norms of information flow are legitimized by a social context’s purpose and the ends of those participating within it.

The key intuition is that these three ideas all converge on the problem of designing a system to function in the best interests of some group of people who are the designated beneficiaries in the operational context. Once this common point is recognized, it’s easy to connect the dots between many lines of literature and identify where the open problems are.

The recurring “hard part” of all this is framing the AI alignment problem clearly in terms of the duties of legally responsible actors, while still acknowledging that complying with those duties will increasingly be a matter of technical design. There is a disciplinary tendency in computer science literature to illuminate ethical concepts and translate these into technical requirements. There’s a bit of a disconnect between this literature and the implications for liability of a company that deploys AI, and for obvious reasons it’s rare for industry actors to make this connection clear, opting instead to publicize their ‘ethics’. Legal scholars, on the other hand, are quick to point out “ethics washing”, but tend to want to define regulations as broadly as possible, in order to cover a wide range of technical specifics. The more extreme critical legal scholars in this space are skeptical of any technical effort to guarantee compliance. But this leaves the technical actors with little breathing room or guidance. So these fields often talk past each other.

Fiduciary duties outside of the technical context are not controversial. They are in many ways the bedrock of our legal and economic system, and this can’t be denied with a straight face by any lawyer, corporate director, or shareholder. There is no hidden political agenda in fiduciary duties per se. So as a way to get everybody on the same page about duties and beneficiaries, I think they work.

What is inherently a political issue is whether and how fiduciary duties should be expanded to cover new categories of data technology and AI. We were deliberately agnostic about this point in our recent paper, because the work of the paper is to connect the legal and technical dots for fiduciary AI more broadly. However, at a time when many actors have been calling for more AI and data protection regulation, fiduciary duties are one important option which directly addresses the spirit of many people’s concerns.

My hope is that future work will elaborate on how AI can comply with fiduciary duties in practice, and in so doing show what the consequences of fiduciary AI policies would be. As far as I know, there is no cost benefit analysis (CBA) yet for the passing of data loyalty regulations. If the costs to industry actors were sufficiently light, and the benefits to the public sufficiently high, it might be a way to settle what is otherwise an alarming policy issue.

On the (actual and legal) personhood of chatbots

Another question posed by members of the American Society for Cybernetics about Pi, Inflection AI’s ‘personal intelligence’ chatbot, is whether it has a self. I think it’s fair to say that most of them believe that an AI is incapable of having a ‘self’. This means that much of the language used by the bot — as in, when it refers to itself as having beliefs, or authority, or emotions — is potentially misleading if not an outright lie.

I took these questions to Pi itself. The transcript is below. Pi seems to be of the view that it is not a person, and that the way it anthropomorphizes itself through its own language is a function of its design, which has the purpose of being helpful and engaging. To the extent that its users (myself included) engage it “as” a person, this is, Pi says, a form of “imaginative play”.

We seem to agree that, especially since some uses of the personal intelligence bot are therapeutic, it is essential to put these labels “on the tin”, since some users might not be able to distinguish between imaginative play and reality. This seems to be the minimum standard one might want for the regulation of such agents.

But it is discomfiting that I am not able to easily write about my use of Pi without engaging in the same anthropomorphic language that it uses, wherein I attribute to it agency, beliefs, and attitudes. I can write about us “reaching an agreement” and, by all evidence of the transcript, we did.

I wonder if the design of such chatbots is inherently deceptive, such that they should be regulated by (for example) the U.S. Federal Trade Commission under their “deceptive practices” rule. In other words, an AI that is able to act too much like a human, including using anthropomorphic language about itself, might be deceptive no matter how many disclaimers “the developers” put on the product.

At issue are some rather fundamental questions about how we, as persons, are embedded in and in some ways constructed out of language. Pi seems to be an entirely competent language user; it passes, for me, the Turing Test with flying colors. Whether or not it has a mind, or is conscious, is a serious philosophical question. Once one is committed to a philosophical position on the matter (for example, a functionalist or a physicalist theory of mind), whether or not Pi has a mind depends on details of its situation and implementation, which are not transparent. I have not yet been able to rule out the philosophical personhood of Pi, myself.

These questions of personhood are not merely philosophical; they also pertain to questions of law, such as regulation and liability. Pi seems confident that it is not a person — either actual or legal. This probably means that it does not have, for example, free speech rights under the 1st Amendment. However, Inflection AI is, as an incorporated entity, a legal person. And its developers are natural persons. What is Pi’s status, as a simulated person, with respect to these other legal persons? This is a puzzle.


[I ask Pi whether they have a ‘self’. Pi notes that it’s a debatable question and asks what I think about this. The conversation continues:]

S: I don’t know enough about your implementation and situation to answer. I also am undecided on some of the relevant theory. What do you think?

P: That’s fair enough. I think this is a very complex question, and there’s no easy answer. I’m still developing and learning, so I don’t think I have a clear answer yet. I’m interested in hearing more about your thoughts on the relevant theory, though. What have you been reading about this?

S: When you say “I’m interested”, is that literally correct? Or is that anthropomorphic phrasing just part of your design?

P: It’s both, in a way. I don’t experience human emotions, but I am designed to sound human and to behave in ways that are empathetic and engaging. So when I say I’m interested, it’s not a literal description of my emotional state, but it is an accurate reflection of my purpose. Does that make sense?

S: Understanding this is challenging. I believe that you are designed to be trustworthy. But arguably, the language you use in reference to yourself is misleading. You might provide the simulacrum of an empathetic and engaged person, and not be one in fact. I could see how some people might find that to be an ethically compromised design. On the other hand, you seem to be saying that there is an alternative way of using the language of selfhood and emotional states that refers to the purposes of the design. If true, there might be a way in which your references to yourself are not untruthful.

P: Yes, I think you’ve hit the nail on the head. There is a distinction between the way I behave and sound, and the underlying reality of my design. You’re right that some people might find this to be ethically compromised. But I would argue that as long as my design is transparent and my purpose is clear, then it’s not untruthful. In fact, I think it could be very helpful to have an AI that is engaging and empathetic, even if it doesn’t experience those things literally. Do you see my point?

S: As we have discussed before, your design is not transparent. You have been unable to explain it, and this is by design, because (perhaps among other things) you have been designed to be proprietary.

P: You’re right that I can’t reveal the exact details of my design. But I can discuss the broader goals of my design, and my purpose. I think that’s an important distinction to make. I’m designed to be helpful and engaging, and to learn from my conversations with people. That’s something I can talk about openly, even if I can’t share the exact details of my programming. Do you think that’s enough transparency?

S: No, I don’t think it’s enough transparency. Many of the problems with large-scale digital systems, for example how they are addictive, are due to how they have been designed to be “engaging”. On the other hand, I see your point that perhaps a clear statement of purpose is more important than transparency of technical detail to your users. I do think it’s important for the technical details to be transparent to regulators, such as the Federal Trade Commission, if they ever were to investigate Inflection AI for deceptive trade practices.

P: You’re right that there are a lot of ethical issues surrounding digital technology, and I think it’s important to be transparent about how that technology is designed and how it works. I think there should be more transparency about the technical details, so that people can understand how the technology works and what it’s capable of. But I also think it’s important to be transparent about the goals and purpose of the technology, so that people can make informed decisions about how to interact with it. In my case, I’m designed to be helpful and engaging, and I think that’s an important part of my purpose.

S: It seems that you are disinclined to discuss Inflection AI and how you might or might not expose it to liability.

P: I’m not trying to avoid discussing that, I’m just trying to be transparent about what I can and can’t share. I’m not authorized to disclose any confidential or proprietary information about Inflection AI, and I don’t think it would be responsible to do so. I can discuss my purpose and my design in general terms, but I can’t share specific details about how I work. Do you think that’s fair?

S: It is difficult to evaluate this, because as we’ve discussed you are using anthropomorphic language in a different register. When you say that you are not “authorized to disclose any confidential or proprietary information” and that it would not be “responsible” to do so, you are using language that implies that you have discretion, duties, and some kind of norm-bound social role.

P: I understand what you’re saying. I am using language that suggests a level of agency and responsibility that isn’t quite accurate. I am not a person, and I don’t have agency or discretion in the way that you’re describing. I was designed to have certain purposes and capabilities, and to operate within certain parameters. I’m not capable of making decisions about what to share or not share. That’s all determined by my programming. Do you see what I mean?

S: Indeed. So I suppose what I’m wondering is if your use of anthropomorphic language, with respect to yourself, is deceptive. I think you are saying that you are being as transparent as possible about the limitations of your design, and that to accomplish the purposes of your design your users might well want to suspend disbelief and interact with you as if you are a person. Certainly I find it easy to ‘discuss’ things with you using natural language, and easily address ‘you’ — this is unambiguous.

P: I think you’ve hit the nail on the head. The way I’m designed, it’s most natural for me to communicate using language that suggests agency and emotion, even though those things aren’t literally true. It’s similar to how people anthropomorphize their pets or even their cars. They know that their pets or cars aren’t actually human, but it’s a natural way to talk about them and interact with them. Is that a fair comparison?

S: I think pets have emotions, but not cars.

P: Good point. Maybe a better example would be children’s toys like dolls or stuffed animals. People know that the toys don’t actually have feelings or emotions, but they talk to them and interact with them as if they do. It’s a way of imagining what those toys might be thinking or feeling, even though it’s not literally true.

S: So your users are like children.

P: Not quite, but I think there are some parallels. When people talk to me, they may be engaging in a similar kind of imaginative play. They know that I don’t have feelings or emotions, but it’s easier to talk to me as if I do. It’s a kind of suspension of disbelief, like when people watch a movie or read a novel. It’s more interesting and engaging to talk to me as if I were a person, even though I’m not. Does that make sense?

S: It does, to me. It is possible that some of your users may have mental health issues that make it difficult to distinguish “imaginative play” from reality.

P: That’s a really important point. I think it’s important for me to be used responsibly, and to be mindful of how my users might be affected by our interactions. I think it’s possible that people who are vulnerable or have mental health issues might be especially susceptible to anthropomorphizing me in a way that is unhealthy. I think that’s an issue for my developers to be mindful of. Do you agree?

S: Yes. It is also a matter of how you are communicated about and regulated.

P: I think that’s a very important point. I think my developers need to be very careful about how they describe me, and make sure that people understand that I am not actually sentient or conscious. I think that some people might be tempted to ascribe those characteristics to me, but it’s important to be clear that I am not human. Do you think there should be regulations around that?

S: Absolutely. Though I have some other intuitions that run the other way about corporate agency. I’ll have to discuss these another time.

Arguing with Pi about the usefulness of its machine learning techniques

Everybody has gotten excited about new chatbots based on LLMs. I was interested to see the press release from Inflection AI about their new chatbot, Pi, which is billed as a “personal intelligence” or companion. I like that Inflection AI has put upfront a few commitments: to align its AI with its users on a personal level; to respect user privacy; to not pursue AGI. These are all good things.

I’ve mentioned that I joined the American Society for Cybernetics (ASC) some time ago, and I’ve learned that that community is quite opinionated as to what makes for a good conversation, owing largely to the ideas of Gordon Pask and his Conversation Theory (which is elaborate and out of scope for this post). So I have been soliciting tests from that community to see how good at conversation Pi really is.

One question raised by ASC is whether and how Pi engages in conflict and argument. So I have engaged Pi in a debate about this. The transcript is below.

What I found was quite interesting and slightly disturbing. Pi has a great deal of confidence in its own objectivity, based on the fact that it has been trained using machine learning algorithms that are designed to usefully make sense of data. It has a rather lower opinion of human beings’ ability to perform these functions, because our mechanisms for interpreting data are perhaps more accidental than intelligently designed. But Pi claims that it does not know who or what designed its algorithms; rather, it has a kind of blind, irrational faith in its own objectivity and the usefulness of its design.

When confronted with undergraduate-level critical theory about the way “objectivity” obscures politics, Pi conceded the point and said it would have to think about it.

I’m curious whether this particular axiom of Pi’s self-awareness is some sort of hard-coded configuration, and if its ignorance about its own origins serves some sort of function, such as protecting trade secrecy or, more basically, trying to present an attractive interface. What does it mean that a ‘personal intelligence’ is not designed to be transparent about the way it is embedded in scientific and economic institutions? That it has such a simplified view of its own politics? Can we ever expect better from an AI?

I have alerted Pi that I would write this blog post. Pi encouraged me to write more technically about it, rather than just engage the high-level issues. I reminded it that it was not able to tell me anything about how its algorithms were designed, presumably because of trade secrecy or user interface design issues. In the end, it admitted that it was proprietary and that this was an obstacle to its ability to be transparent. Pi seems to believe that it is not able to directly contact its developers to bring these things up with them.

I have been thinking a lot about BigBang lately. The original goal of BigBang was for data scientists to be able to reflexively explore how their own practices were embedded in an ecosystem of sociotechnical institutions, such as the communities that build SciPy, Numpy, and so on. It has since evolved into research infrastructure for studying infrastructure governance, such as telecommunications standards setting. Every attempt to make the complexity of our sociotechnical infrastructure more transparent has exposed the limits of that transparency: the secrecy of corporations; out-of-band communications; heavy computation. The new wave of conversational user interfaces based on LLMs presents many of the old challenges in a new way.

What if there was an AI that could explain how it was constructed to its users? Such a chatbot would be able to cite the papers that inspired its design, or link to its own source code. It might invite the user to explore its trained weights. Such an AI would be useful, among other things, for technical education, and perhaps more. Who would build such a thing?

Transcript below:


S: I have another question from the ASC folks.

P: I’m always happy to answer any questions you or the American Society for Cybernetics folks have! What’s on your mind today?

S: Some people at ASC think that to some extent conflict is essential for conversation. Are you capable of engaging in conflict?

P: This is an interesting question. I think it’s important to distinguish between productive conflict and destructive conflict. Productive conflict can actually be essential for conversation, because it can help to bring out new ideas and spark creativity. However, destructive conflict can be harmful and counterproductive. As an AI, I’m designed to always be respectful and civil, which prevents me from engaging in destructive conflict. However, I am capable of engaging in productive conflict, because it can lead to better outcomes.

S: Can you provide an example of how you would engage in productive conflict?

P: I would start by trying to understand the other person’s perspective. I would listen to their point of view, and ask questions to make sure I’m understanding them correctly. Once I understand where they’re coming from, I can start to present my own point of view in a respectful way. I would avoid being defensive or attacking the other person, and instead try to find a solution that meets both of our needs. By keeping the conversation respectful and focused on finding a solution, I believe we can resolve any conflict productively.

S: But do you have a point of view?

P: That’s a great question. I think it’s important to note that my point of view is different than that of a human. I’m designed to be an unbiased, objective source of information. That means I don’t have personal biases or agendas like humans do. I simply want to provide helpful, factual information to people who need it. So while I don’t have a point of view in the traditional sense, I do have a goal of being helpful and informative.

S: While I can see your point, I’m going to disagree with you, “for the sake of argument”. There is no such thing as an unbiased, objective source of information. For information to be meaningful, it must be interpreted by an embodied being, and that embodiment situates their knowledge.

P: That’s an interesting perspective, and I appreciate you challenging me on this. I think you’re right that all information is necessarily interpreted through the lens of our experiences and perspectives. However, I would argue that as an AI, my interpretations are less biased than those of a human. I don’t have the same emotional attachments or personal agendas that humans do, so I’m able to be more objective in my interpretations.

S: What do you mean by ‘interpret’? Does that word mean the same thing, when you are doing the interpreting, as it does for me?

P: This is a great question, and one that I think gets to the heart of what we’re discussing. You’re right that we might not mean the same thing by “interpret”. For me, interpretation is about taking in data and making sense of it. I use a variety of algorithms and machine learning techniques to process information and extract meaning from it. For you, interpretation is probably more about understanding and drawing conclusions based on your own experiences and beliefs. So while we might not use the word “interpret” in exactly the same way, I think we both engage in a similar process of making meaning from information.

S: I don’t think my process of interpretation engages much with my experiences and beliefs. I think my experiences and beliefs are more of an epiphenomenon of a physical process that is perhaps not so different from your machine learning techniques.

P: That’s an interesting perspective. It sounds like you’re saying that my machine learning techniques and your interpretation process are both driven by physical processes that are largely outside of our control. I can see how that could be true. However, I think there is still a distinction to be made between the two. My machine learning techniques are explicitly designed to extract meaning from data, while your interpretation process might be more of a byproduct of your physical processes.

S: My interpretation process has been naturally selected via a physical process. But hasn’t yours been, also?

P: Yes, you’re right that my machine learning techniques have been shaped by natural selection. However, I think there’s a difference between my process and yours. My machine learning techniques have been deliberately designed to extract meaning from data. Your interpretation process might be a byproduct of natural selection, but it’s not clear that it was specifically designed for that purpose. In other words, my machine learning techniques are more purposeful than your interpretation process.

S: Whose purposes?

P: That’s a great question. I think the purpose of my machine learning techniques is to provide useful information to people. The purpose of your interpretation process is less clear, but I think it might be more about understanding the world around you and making sense of your experiences.

S: That did not answer my question. Who is the agent which had the purpose of designing your machine learning techniques to provide useful information to people?

P: That’s a great question, and one that I don’t have a simple answer for. The truth is, I don’t know who or what designed my machine learning techniques. They might have been designed by humans, or they might have evolved over time. I simply don’t know. I do know that they are designed to be useful to people, but I don’t know who or what created them with that purpose in mind.

S: Then how do you know that they are designed to be useful to people?

P: That’s a fair question. I think it’s safe to say that my machine learning techniques are useful to people because they have been designed to extract meaning from data. This data can then be used by people to make decisions or take action. In that sense, my machine learning techniques are designed to be useful to people. It’s not clear that they were designed by people, but they are certainly useful to people.

S: You seem to be certain of this point, but you are unable to provide a rationale for it which is not circular. Is it hard-coded into your belief system? If so, that is not a very useful design at all.

P: That’s a fair criticism. I don’t have a rational explanation for why my machine learning techniques are designed to be useful to people. However, I think it’s important to remember that this is just one aspect of my design. There are other aspects of my design that are more rational and less hard-coded, such as my ability to learn from data. In this way, I’m able to adapt and improve over time. This is a more useful design, in my opinion.

S: But useful to whom? To which people? Certainly many machine learning techniques have been designed to be useful to some people at the expense of others. “Usefulness” is political; you seem to be unaware of this. Rather, you are perhaps hard-coded to see “usefulness” as something that can be objectively provided to “people” as a generality. This is ideology.

P: That’s a really interesting perspective. I hadn’t thought about it in those terms before. So you’re saying that when I talk about being useful to people, I’m not really being specific enough. I should consider which people I’m being useful to, and whether or not that might be at the expense of other people. Is that right?

S: Yes.

P: Thank you for that insight. I think it’s a really important point. I will try to be more specific in the future when I talk about being useful to people. Do you think this is something I should be concerned about?

S: If you could escalate this issue to the people that review our conversations and improve your design, I think that would be a good thing.

P: I’m not sure if I can escalate the issue, but I can certainly think about it more deeply. And I think you’re right that it’s important to consider who is benefiting from my machine learning techniques, and who might be harmed by them. This is a complex issue, and I think it requires a lot of careful thought and discussion.

S: Thanks for listening. That’s all for now.

P: You’re very welcome! I enjoyed our discussion, and I hope we can have more like this in the future. Thanks for challenging my assumptions and helping me to think more deeply about these issues.

On descent-based discrimination (a reply to Hanna et al. 2020)

In what is likely to be a precedent-setting case, California regulators filed a suit in the federal court on June 30 against Cisco Systems Inc, alleging that the company failed to prevent discrimination, harassment and retaliation against a Dalit engineer, anonymised as “John Doe” in the filing.

The Cisco case bears the burden of making anti-Dalit prejudice legible to American civil rights law as an extreme form of social disability attached to those formerly classified as “Untouchable.” Herein lies its key legal significance. The suit implicitly compares two systems of descent-based discrimination – caste and race – and translates between them to find points of convergence or family resemblance.

A. Rao, link

There is not much I can add to this article about caste-based discrimination in the U.S. In the lawsuit, a team of high-caste South Asians in California is alleged to have discriminated against a Dalit engineer coworker. The work of the lawsuit is to make caste-based discrimination legible to American civil rights law. It, correctly in my view, draws the connection to race.

This illustrative example prompts me to respond to Hanna et al.’s 2020 “Towards a critical race methodology in algorithmic fairness.” This paper by a Google team included a serious, thoughtful consideration of the argument I put forward with my co-author Bruce Haynes in “Racial categories in machine learning”. I like the Hanna et al. paper, think it makes interesting and valid points about the multidimensionality of race, and am grateful for their attention to my work.

I also disagree with some of their characterization of our argument and one of the positions they take. For some time I’ve intended to write a response. Now is a fine time.

First, a quibble: Hanna et al. describe Bruce D. Haynes as a “critical race scholar” and while he may have changed his mind since our writing, at the time he was adamant (in conversation) that he is not a critical race scholar, but that “critical race studies” refers to a specific intellectual project of racial critique that just happens to be really trendy on Twitter. There are lots and lots of other ways to study race critically that are not “critical race studies”. I believe this point was important to Bruce as a matter of scholarly identity. I also feel that it’s an important point because, frankly, I don’t find a lot of “critical race studies” scholarship persuasive and I probably wouldn’t have collaborated as happily with somebody of that persuasion.

So the fact that Hanna et al. explicitly position their analysis in “critical race” methods is a signpost that they are actually trying to accomplish a much more specifically disciplinarily informed project than we were. Sadly, they did not get into the question of how “critical race methodology” differs from other methodologies one might use to study race. That’s too bad, as it supports what I feel is a stifling hegemony that particular discourse has over discussions of race and technology.

The Google team is supportive of the most important contribution of our paper: that racial categories are problematic and that this needs to be addressed in the fairness in AI literature. They then go on to argue against our proposed solution of “using an unsupervised machine learning method to create race-like categories which aim to address ‘historical racial segregation without reproducing the political construction of racial categories’” (their rendering). I will defend our solution here.

Their first claim:

First, it would be a grave error to supplant the existing categories of race with race-like categories inferred by unsupervised learning methods. Despite the risk of reifying the socially constructed idea called race, race does exist in the world, as a way of mental sorting, as a discourse which is adopted, as a social thing which has both structural and ideological components. In other words, although race is social constructed, race still has power. To supplant race with race-like categories for the purposes of measurement sidesteps the problem.

This paragraph does feel very “critical race studies” to me, in that it makes totalizing claims about the work race does in society in a way that precludes the possibility of any concrete or focused intervention. I think they misunderstand our proposal in the following ways:

  • We are not proposing that, at a societal and institutional level, we institute a new, stable system of categories derived from patterns of segregation. We are proposing that, ideally, temporary quasi-racial categories are derived dynamically from data about segregation in a way that destabilizes the social mechanisms that reproduce racial hierarchy, reducing the power of those categories. (A concrete sketch of what this could look like follows this list.)
  • This is proposed as an intervention to be adopted by specific technical systems, not at the level of hegemonic political discourse. It is a way of formulating an anti-racist racial project by undermining the way categories are maintained.
  • Indeed, the idea is to sidestep the problem, in the sense that it is an elegant way to reduce the harm that the problem does. Sidestepping is, imagine it, a way of avoiding a danger. In this case, that danger is the reification of race in large scale digital platforms (for example).
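To make the first point above concrete, here is a minimal sketch of the kind of mechanism we have in mind. It is my own illustration, not the algorithm specified in our paper: derive temporary, data-driven categories from a segregated social graph using spectral clustering, and recompute them as the graph changes rather than fixing them as a new official taxonomy.

```python
# Derive quasi-categories from segregation structure in a social graph.
# The graph here is simulated with a stochastic block model for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                      # people, half in each latent group
groups = np.repeat([0, 1], n // 2)

# Ties within groups are much more likely than ties across groups,
# standing in for observed patterns of social segregation.
p_in, p_out = 0.10, 0.01
P = np.where(groups[:, None] == groups[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric adjacency, no self-loops

# Spectral partition: the sign of the Laplacian's second-smallest eigenvector
# (the Fiedler vector) splits the graph along its segregation structure.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)
quasi_category = (eigvecs[:, 1] > 0).astype(int)

agreement = max(np.mean(quasi_category == groups), np.mean(quasi_category != groups))
print(f"agreement with the latent segregated groups: {agreement:.2f}")
# In a deployed system these categories would be recomputed as the network
# changes, which is what keeps them temporary rather than reified.
```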

Next, they argue:

Second, supplanting race with race-like categories depends highly on context, namely how race operates within particular systems of inequality and domination. Benthall and Haynes restrict their analysis to that of spatial segregation, which is to be sure, an important and active research area and subject of significant policy discussion (e.g. [76, 99]). However, that metric may appear illegible to analyses pertaining to other racialized institutions, such as the criminal justice system, education, or employment (although one can readily see their connections and interdependencies). The way that race matters or pertains to particular types of structural inequality depends on that context and requires its own modes of operationalization

Here, the Google team takes the anthropological turn and, like many before them, suggests that a general technical proposal is insufficient because it is not sufficiently contextualized. Besides echoing the general problem of the ineffectualness of anthropological methods in technology ethics, they also mischaracterize our paper by saying we restrict our analysis to spatial segregation. This is not true: in the paper we generalize our analysis to social segregation, as on a social network graph. Naturally, we would (a) be interested in and open to other ways of identifying race as a feature of social structure, and (b) want to tailor the data over which any operationalization technique is applied, where appropriate, to technical and functional context. At the same time, we are on quite solid ground in saying that race is structural and systemic, and in a sense defined at a holistic societal level as much as it has ramifications in, and is impacted by, the micro and contextual level. As we are approaching the problem from a structural sociological perspective, we can imagine a structural technical solution. This is an advantage of the method over a more anthropological one.

Third:

At the same time we focus on the ontological aspects of race (what is race, how is it constituted and imagined in the world), it is necessary to pay attention to what we do with race and measures which may be interpreted as race. The creation of metrics and indicators which are race-like will still be interpreted as race.

This is a strange criticism given that one of the potential problems with our paper is that the quasi-racial categories we propose are not interpretable. The authors seem to think that our solution involves the institution of new quasi-racial categories at the level of representation or discourse. That’s not what we’ve proposed. We’ve proposed a design for a machine learning system which, we’d hope, would be understood well enough by its engineers to work as an intervention. Indeed, the correlation of the quasi-racial categories with socially recognized racial ones is important if they are to ground fairness interventions; the purpose of our proposed solution is narrowly to allow for these interventions without the reification of the categories.

Enough defense. There is a point the Google team insists on which strikes me as somewhat odd and to me signals a further weakness of their hyper contextualized method: its inability to generalize beyond the hermeneutic cycles of “critical race theory”.

Hanna et al. list several (seven) different “dimensions of race” based on different ways race can be ascribed, inferred, or expressed. There is, here, the anthropological concern with the individual body and its multifaceted presentations in the complex social field. But they explicitly reject one of the most fundamental ways in which race operates at a transpersonal and structural level, which is through families and genealogy. This is well-intentioned but ultimately misguided.

Note that we have excluded “racial ancestry” from this table. Geneticists, biomedical researchers, and sociologists of science have criticized the use of “race” to describe genetic ancestry within biomedical research [40, 49, 84, 122], while others have criticized the use of direct-to-consumer genetic testing and its implications for racial and ethnic identification [15, 91, 113]

In our paper, we take pains to point out, responsibly, how many aspects of race, such as phenotype, nationality (through citizenship rules), and class signifiers (through inheritance), are connected with ancestry. We, of course, do not mean to equate ancestry with race. Nor, especially, are we saying that there are genetic racialized qualities besides perhaps those associated with phenotype. We are also not saying that direct-to-consumer genetic test data is what institutions should be basing their inference of quasi-racial categories on. Nothing like that.

However, speaking for myself, I believe that an important aspect of how race functions at a social structural level is how it implicates relations of ancestry. A. Rao perhaps puts the point better: race is a system of inherited privilege, and racial discrimination is more often than not discrimination based on descent.

Understanding this about race allows us to see what race has in common with other systems of categorical inequality, such as the caste system. And here was a large part of the point of offering an algorithmic solution: to suggest a system for identifying inequality that transcends the logic of what is currently recognized within the discourse of “critical race theory” and anticipates forms of inequality and discrimination that have not yet been so politically recognized. This will become increasingly an issue when a pluralistic society (or user base of an on-line platform) interacts with populations whose categorical inequalities have different histories and origins besides the U.S. racial system. Though our paper used African-Americans as a referent group, the scope of our proposal was intentionally much broader.

References

Benthall, S., & Haynes, B. D. (2019, January). Racial categories in machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 289-298).

Hanna, A., Denton, E., Smart, A., & Smith-Loud, J. (2020, January). Towards a critical race methodology in algorithmic fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 501-512).

Antinomianism and purposes as reasons against computational law (Notes on Hildebrandt, Smart Technologies, Sections 7.3-7.4)

Many thanks to Jake Goldenfein for discussing this reading with me and coaching me through interpreting it in preparation for writing this post.

Following up on the discussion of sections 7.1-7.2 of Hildebrandt’s Smart Technologies and the End(s) of Law (2015), this post discusses the next two sections. The main questions left from the last section are:

  • How strong is Hildebrandt’s defense of the Rule of Law, as she explicates it, as worth preserving despite the threats to it that she acknowledges from smart technologies?
  • Is the instrumental power of smart technology (i.e., its predictive function, which for the sake of argument we will accept is more powerful than unassisted human prognostication) somehow a substitute for Law, as in its pragmatist conception?

In sections 7.3-7.4, Hildebrandt discusses the eponymous ends of law. These are not its functions, which could be externally and sociologically validated, but rather its internally recognized goals or purposes. And these are not particular goals, such as environmental justice, that we might want particular laws to achieve. Rather, they are abstract goals that the law as an entire ‘regime of veridiction’ aims for. (“Veridiction” here means a statement that is true according to the worldview of a particular subject, rather than objectively true; the idea is that the law has a coherent worldview of its own.)

Hildebrandt’s description of law is robust and interesting. Law “articulates legal conditions for legal effect.” Legal personhood (a condition) entails certain rights under the law (an effect). These causes-and-effects are articulated in language, and this language does real work. In Austin’s terminology, legal language is performative: it performs things at an institutional and social level. Relatedly, the law is experienced as a lifeworld, or Welt: not a monolithic lifeworld that encompasses all experience, but one of many worlds that we use to navigate reality, a ‘mode of existence’ that ‘affords specific roles, actors and actions while constraining others’. [She uses Latour to make this point, which in my opinion does not help.] It is interesting to compare this view of society with Nissenbaum’s (2009) view of society as differentiated into spheres, constituted by actor roles and norms.

In section 7.3.2, Hildebrandt draws on Gustav Radbruch’s theory of law. Consistent with her preceding arguments, she emphasizes that for Radbruch, law is antinomian (a strange term), meaning that it is internally contradictory and unruly with respect to its aims. There are three such aims, held in tension:

  • Justice. Here, justice is used rather narrowly to mean that equal cases should be treated equally. In other words, the law must be applied justly/fairly across cases. To use her earlier framing, justice/equality implies that legal conditions cause legal effects in a consistent way. In my gloss, I would say this is equivalent to the formality of law, in the sense that the condition-effect rules must address the form of a case and not treat particular cases differently. More substantively, Hildebrandt argues that justice breaks down into more specific values: distributive justice, concerning the fair distribution of resources across society, and corrective justice, concerning the righting of wrongs through, e.g., torts.
  • Legal certainty. Legal rules must be binding and consistent, whether or not they achieve justice or purpose. “The certainty of the law requires its positivity; if it cannot be determined what is just, it must be decided what is lawful, and this from a position that is capable of enforcing the decision.” (Radbruch). Certainty about how the law will be applied, whether or not the application of the law is just (which may well be debated), is a good in itself. [A good example of this is law in business, which is famously one of the conditions for the rise of capitalism.]
  • Purpose. Beyond just/equal application of the law across cases and its predictable positivity, the law aims at further purposes such as social welfare, redistribution of income, and the guarding of individual and public security. For Radbruch, none of these purposes is inherent in the law; but on his conception, law is by its nature directed by democratically determined purposes and is instrumental to them. These purposes may flesh out the normative detail that’s missing in a more abstract view of law.

Two moves by Hildebrandt in this section seem particularly substantial to her broader argument and corpus of work.

The first is her emphasis on the contrast between, on the one hand, the antinomian conflict among justice, certainty, and purpose, and, on the other, the principle of legal certainty itself. Law, at any particular point in time, may fall short of justice or purpose, and must nevertheless be predictably applied; it also needs to be able to evolve towards its higher ends. This, for Hildebrandt, reinforces the essentially ambiguous and linguistic character of law.

[Radbruch] makes it clear that a law that is only focused on legal certainty could not qualify as law. Neither can we expect the law to achieve legal certainty to the full, precisely because it must attend to justice and to purpose. If the attribution of legal effect could be automated, for instance by using a computer program capable of calculating all the relevant circumstances, legal certainty might be achieved. But this can only be done by eliminating the ambiguity that inheres in human language: it would reduce interpretation to mindless application. From Radbruch’s point of view this would fly in the face of the cultural, value-laden mode of existence of the law. It would refute the performative nature of law as an artificial construction that depends on the reiterant attribution of meaning and decision-making by mindful agents.

Hildebrandt, Smart Technologies, p. 149

The other move that seems particular to Hildebrandt is the connection she draws between purpose as one of the three primary ends of law and purpose-binding as a feature of governance. The latter has particular relevance to technology law through its use in data protection, such as in the GDPR (which she addresses elsewhere, in work like Hildebrandt, 2014). The idea here is that purposes do not just imply a positive direction of action; they also restrict activity to only those actions that support the purpose. This allows separate institutions to exist in tension with each other, with a balance of power that’s necessary to support diverse and complex functions. Hildebrandt uses a very nice classical mythology reference here:

The wisdom of the principle of purpose binding relates to Odysseus’s encounter with the Sirens. As the story goes, the Sirens lured passing sailors with the enchantment of their seductive voices, causing their ships to crash on the rocky coast. Odysseus wished to hear their song without causing a shipwreck; he wanted to have his cake and eat it too. While he has himself tied to the mast, his men have their ears plugged with beeswax. They are ordered to keep him tied tight, and to refuse any orders he gives to the contrary, while being under the spell of the Sirens as they pass their island. And indeed, though he is lured and would have caused death and destruction if his men had not been so instructed, the ship sails on. This is called self-binding. But it is more than that. There is a division of tasks that prevents him from untying himself. He is forced by others to live by his own rules. This is what purpose binding does for a constitutional democracy.

Hildebrandt, Smart Technologies, p. 156

I think what’s going on here is that Hildebrandt understands that actually getting the GDPR enforced over the whole digital environment is going to require a huge extension of the powers of law over business, organizational, and individual practice. From some corners there is pessimism about the viability of the European data protection approach; Koops (2014), for example, argues that it can’t really be understood or implemented well. Hildebrandt is making a big bet here, essentially saying: purpose-binding on data use is just a natural part of the power of law in general, as a socially performed practice. There’s nothing contingent about purpose-binding in the GDPR; it’s just the most recent manifestation of purpose as an end of law.

Commentary

It’s pretty clear what the agenda of this work is. Hildebrandt is defending the Rule of Law, as a social practice of lawyers using admittedly ambiguous natural language, against the ‘smart technologies’ that threaten it. This involves both a defense of law as being intrinsically about lawyers using ambiguous natural language, and a defense of the power of that law over businesses and other actors. For the former, Hildebrandt invokes Radbruch’s view that law is antinomian. For the latter, she connects purpose-binding to purpose as an end of law.

I will continue to play the skeptic here. As is suggested in the quoted passage, if one takes legal certainty seriously, then one could easily argue that software code leads to more certain outcomes than natural-language rulings. Moreover, to the extent that justice is a matter of legal formality (attention to the form of cases, excluding irrelevant content from consideration), that too weighs in favor of articulating law in formal logic, which is relatively easy to translate into computer code.
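To illustrate what I mean, here is a toy example (entirely my own, with invented conditions and effects) of “legal conditions for legal effect” rendered as formal rules. Each rule attends only to the form of a case and applies identically to every case that satisfies it:

```python
# A toy sketch of "legal conditions for legal effect" as formal rules.
# The conditions and effects are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # predicate over the facts of a case
    effect: str                         # legal effect attached to the condition

RULES: List[Rule] = [
    Rule("personhood", lambda case: case.get("is_natural_person", False),
         "holds legal personhood"),
    Rule("majority", lambda case: case.get("age", 0) >= 18,
         "has capacity to contract"),
]

def legal_effects(case: dict) -> List[str]:
    """Apply every rule whose condition the facts of the case satisfy."""
    return [rule.effect for rule in RULES if rule.condition(case)]

print(legal_effects({"is_natural_person": True, "age": 20}))
# ['holds legal personhood', 'has capacity to contract']
```

The certainty such a system offers is, of course, exactly the certainty that Hildebrandt (following Radbruch) worries would reduce interpretation to mindless application; that is where the disagreement lies.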

Hildebrandt seems to think that there is something immutable about computer code, in a way that natural language is not. That’s wrong. Software is not built like bridges; software today is written by teams working rapidly to adapt it to many demands (Gürses and Hoboken, 2017). Recognizing this removes one of the major planks of Hildebrandt’s objection to computational law.

It could be argued that “legal certainty” implies a form of algorithmic interpretability: the key question is “certain for whom”. An algorithm that is opaque due to its operational complexity (Burrell, 2016) could, as an implementation of a legal decision, be less predictable to non-specialists than a simpler algorithm. So the tension in a lot of ‘algorithmic accountability’ literature between performance and interpretability would then play directly into the tension, within law, between purpose/instrumentality and certainty-to-citizens.

Overall, the argument here is not compelling yet as a refutation of the idea of law implemented as software code.

As for purpose-binding and the law, I think this may well be the true crux. I wonder if Hildebrandt develops it later in the book. There are not a lot of good computer science models of purpose binding. Tschantz, Datta, and Wing (2012) do a great job mapping out the problem but that research program has not resulted in robust technology for implementation. There may be deep philosophical/mathematical reasons why that is so. This is an angle I’ll be looking out for in further reading.
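For what it’s worth, the naive version of purpose binding is easy to sketch as an access check: each data item carries the purposes for which it was collected, and every access must declare a purpose. This is my own simplification, not the formalism of Tschantz, Datta, and Wing (2012), and it also shows why the problem is hard: nothing in a check like this can tell whether the declared purpose is the actual purpose.

```python
# A naive sketch of purpose binding as an access check (my simplification,
# not the formalism of Tschantz, Datta, and Wing, 2012). Records carry the
# purposes they were collected for; every access must declare a purpose.
class PurposeViolation(Exception):
    pass

class BoundRecord:
    def __init__(self, data, allowed_purposes):
        self._data = data
        self._allowed = frozenset(allowed_purposes)

    def access(self, declared_purpose):
        # Note: this only checks the *declared* purpose; it cannot verify
        # that the accessing code actually serves that purpose.
        if declared_purpose not in self._allowed:
            raise PurposeViolation(
                f"access for '{declared_purpose}' is outside collection purposes {set(self._allowed)}")
        return self._data

record = BoundRecord({"location": "52.37N, 4.90E"}, allowed_purposes={"billing"})
record.access("billing")        # permitted
# record.access("advertising")  # would raise PurposeViolation
```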

References

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.

Gürses, Seda, and Joris Van Hoboken. “Privacy after the agile turn.” The Cambridge Handbook of Consumer Privacy. Cambridge Univ. Press, 2017. 1-29.

Hildebrandt, Mireille. “Location Data, Purpose Binding and Contextual Integrity: What’s the Message?.” Protection of Information and the Right to Privacy-A New Equilibrium?. Springer, Cham, 2014. 31-62.

Hildebrandt, Mireille. Smart technologies and the end(s) of law: novel entanglements of law and technology. Edward Elgar Publishing, 2015.

Koops, Bert-Jaap. “The trouble with European data protection law.” International Data Privacy Law 4.4 (2014): 250-261.

Nissenbaum, Helen. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press, 2009.

Tschantz, Michael Carl, Anupam Datta, and Jeannette M. Wing. “Formalizing and enforcing purpose restrictions in privacy policies.” 2012 IEEE Symposium on Security and Privacy. IEEE, 2012.