Complex systems and ethics according to contextual integrity
by Sebastian Benthall
Complex systems theory is a way of thinking about systems with many interacting parts and functions. It draws on physics and the science of modeling dynamic systems. It’s a trans-disciplinary, quantitative science of everything. It is often, and increasingly, applied to social systems, frequently through the methods of agent-based modeling (ABM). ABM has a long history in computational sociology. More recently, it has made inroads into economics and finance (Axtell and Farmer, 2022). That is important intellectual territory to win over because, of course, economics and finance are vitally important to both private and public interests. Progress there is gradual but steady. ABM and complex systems methods have no dogma besides mathematical and computational essentials, so their eventual triumph is more or less assured. As I’ve argued, ABM and complex systems theory are thus an exciting frontier for legal theory (Benthall and Strandburg, 2021). For these reasons, one line of my research involves developing computational frameworks (i.e., software libraries and mathematical scaffolding) for computational social scientific modeling.
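(For readers who haven’t seen the method, here is a toy sketch of what an agent-based model looks like in code. It is purely illustrative and hypothetical, drawn neither from any particular library nor from my own frameworks.)

```python
import random

# A toy agent-based model: agents repeatedly imitate a randomly chosen
# other agent, and a population-level pattern emerges from local copying.
# Purely illustrative; not any particular published model.

N_AGENTS = 100
N_STEPS = 5000

# Each agent holds one of two behaviors, 0 or 1, initialized at random.
agents = [random.choice([0, 1]) for _ in range(N_AGENTS)]

for _ in range(N_STEPS):
    i = random.randrange(N_AGENTS)   # a random agent...
    j = random.randrange(N_AGENTS)   # ...observes a random other agent
    agents[i] = agents[j]            # ...and imitates its behavior

share = sum(agents) / N_AGENTS
print(f"Share of behavior 1 after {N_STEPS} steps: {share:.2f}")
```

Even this toy exhibits the signature of the approach: a system-level regularity (drift toward behavioral consensus) emerges from purely local interactions that no individual agent intends.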
Contextual Integrity (CI) is an ethical theory developed by Helen Nissenbaum. It is especially applicable to questions of the ethics of information technology and computation. Central to the theory is the idea of “appropriate information flow”: flows of (personal) information that conform with “information norms”. According to CI, information norms are legitimized by a balance of societal values, contextual purposes, and individual ends. The work of the CI ethicist is to wrestle with the alignments and contradictions among these values, purposes, and ends to identify the most legitimate norms for a given context. Once the legitimate norms are identified, it is in principle possible to design and deploy technology in accordance with them.
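To give a flavor of the theory’s structure: CI describes an information flow with five parameters (sender, subject, recipient, information type, and transmission principle), and an information norm picks out which flows are appropriate in a context. Below is a minimal sketch of that check in code. The five parameter names follow the theory; the data structures and the example norm are hypothetical illustrations of mine, not an established CI library.

```python
from dataclasses import dataclass

# CI's five parameters of an information flow. The parameter names
# follow the theory; encoding them as a dataclass is an illustrative choice.
@dataclass(frozen=True)
class Flow:
    sender: str
    subject: str
    recipient: str
    attribute: str               # the type of information
    transmission_principle: str  # e.g. "confidentially", "with consent"

# A hypothetical contextual information norm, rendered as a predicate.
def medical_confidentiality(flow: Flow) -> bool:
    """In a healthcare context, medical records may flow only to a
    physician, and only under confidentiality."""
    if flow.attribute != "medical_record":
        return True  # this norm is silent about other attributes
    return (flow.recipient == "physician"
            and flow.transmission_principle == "confidentially")

flow = Flow(sender="patient", subject="patient", recipient="advertiser",
            attribute="medical_record", transmission_principle="by sale")
print(medical_confidentiality(flow))  # False: an inappropriate flow
```

The hard ethical work in CI is not this check itself, but the prior question of which norms are legitimate, and that is what the rest of this post is about.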
CI is a philosophy grounded in social theory. It has never been robustly quantified and many people think this is impossible to do. I’m not among these people. In fact, much of my work is about trying to quantify or model CI. It should come as no surprise, then, that I now see CI in terms of complexity theory. It has struck me recently that what this amounts to, more or less, is a computational social theory of ethics! This idea is exciting to me, and one day I’ll want to write it down in detail. For now, I have some nice diagrams and notes from a recent presentation I wanted to share.
CI is a theory of ethics that is ultimately concerned with the way that values, purposes, and ends legitimize socially understood practices. The ethicist’s job, for CI, is to design legitimate institutions. A problem for the ethicist is that some institutions can be legitimate but utopian, in that they are not stable behavioral patterns for the sociotechnical system. Complex systems theory, as a descriptive science, is well adapted to modeling systems and identifying the regular behaviors within them under varying conditions. Borrowing a notion from physics, a system can exhibit many regular behavioral states, which we might call phases. For example, it is well known that water has different phases depending on the temperature: ice, liquid water, steam, and so on.

Norms have both descriptive and (ahem) normative dimensions. (This confusing jargon is part of why it’s so hard to make progress in this area.) In other words, for there to be an actually existing norm, it has to be both regular and, to be ethical according to CI, legitimate.
There are critics of CI who argue that one problem with it is that it assumes an apolitical consensus of information norms without addressing how norms might be distorted by, e.g., power in society. This is not terribly fair to Nissenbaum’s broader corpus of work, which certainly acknowledges political complexity (see, for example, the recent Nissenbaum, 2024). Suffice it to say here that not all individual ends end up being ‘legitimized’ when ethicists assess things, and that legitimization is always political. Moreover, individual ends and politics can, of course, often drive system behavior away from legitimate institutions. We can’t always have nice things.
Nevertheless, it remains useful to consider how and under what conditions a system could remain legitimate despite technological change. This is what the original CI design heuristic provides: a procedure for evaluating what to do when a new technology creates a disruptive change in societal information flows.

Ideally, for CI, when a new technology destabilizes the sociotechnical system’s behavior and threatens it with illegitimate practices, society reacts (through journalism, through ethics, through a political process, through private choices and actions, etc.) and returns the system to a regular behavioral pattern that is legitimate. This might not be the same behavior as the system started with. It might be even better. And that’s OK.
What’s bad, for CI, is if the system gets stuck in an illegitimate but still robust phase.
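A toy dynamic can make this ‘stuck phase’ worry vivid. In the sketch below (my own illustration of the general idea, with arbitrary parameters), agents mostly conform to whatever the majority does, which creates two self-reinforcing phases; a large enough shock tips the system from a legitimate phase into an illegitimate one that is just as robust.

```python
import random

# Majority-conformity dynamics with two self-reinforcing phases.
# 'compliance' is the share of agents following the legitimate norm.

def step(compliance: float, n: int = 1000) -> float:
    """Each agent conforms to the current majority with probability 0.95,
    and deviates otherwise."""
    majority_complies = compliance > 0.5
    complying = 0
    for _ in range(n):
        conforms = random.random() < 0.95
        complying += majority_complies if conforms else not majority_complies
    return complying / n

compliance = 0.9                  # start in a legitimate, regular phase
for t in range(50):
    compliance = step(compliance)
    if t == 25:
        compliance -= 0.5         # a disruptive technological shock
print(f"Long-run compliance: {compliance:.2f}")  # settles near 0.05
```

The point is structural: once the shock flips the majority, conformity pressure works against any individual attempt to restore the legitimate phase, so the system is robustly stuck.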

While there are some applications of CI that serve anodyne ends of parsing and implementing uncontroversial privacy rules, there are other uses of CI as a radical critique of the status quo. This is well exemplified by Ido Sivan-Sevilla et al.’s comments on the FTC ANPR on Commercial Surveillance and Lax Data Security Practices (2022), which is a succinct and to-the-point condemnation of the “notice and consent” practices in commercial surveillance. We live in a world in which standard, even ubiquitous, technology norms depend on “laughable legal fictions” such as the idea that users of web services are legitimate parties to contracts with vendors. It is well documented how these fictions have been enshrined into law by decades of pressure by the technology sector in courts and government (Cohen, 2019).
Together, CI and complex systems theory can show how society as a whole can be a winner, or loser, beyond the sum of individual outcomes. There are certainly those who have argued that, essentially, “there is no such thing as society”, and that voluntary, binary transactions between parties are all there is. An anarchic, libertarian, or laissez-faire system certainly serves the individual ends of some, and is to some extent stable, until the lords of anarchy create new systems of rules that are in their interest. It is difficult to analyze the social costs of these political changes in terms of “individual harms”, because the true marginal cost is not measurable at the level of the individual, but rather at the level of the phase transition. A complex systems theory allows for this broader view of what is at stake.
This approach also, I think, helps convey the fragility of legitimate institutions. Nothing guarantees legitimacy. Legitimate institutions typically constrain the behavior of some actors in ways that they individually do not enjoy. There are social processes which can steer a system towards a more legitimate phase, but these will meet with resistance, sometimes fail, and can be coopted by bad faith actors serving their own ends.
Indeed, there are those who would say we do not live in a legitimate system and have not lived in one for a long time. “Legitimate for whom?” Even if this is so, CI invites us to have a productive dialog about what legitimacy would entail, by sorting out different motivations and looking at the options for balancing them out. This good faith search for resolutions is often thankless and unrewarded, but certainly we would be worse off without it. On the other hand, arguments about legitimate institutions that are divorced from realistic understandings of sociotechnical processes are easily deployed as propaganda and ideology to cover illegitimate behavior. Ethics requires a science of sociotechnical systems; sociotechnical systems are complex; complex systems theory is a solid foundation for such a science.
References
Axtell, R. L., & Farmer, J. D. (2022). Agent-based modeling in economics and finance: Past, present, and future. Journal of Economic Literature, 1-101.
Benthall, S., & Strandburg, K. J. (2021). Agent-based modeling as a legal theory tool. Frontiers in Physics, 9, 666386.
Cohen, J. E. (2019). Between truth and power. Oxford University Press.
Nissenbaum, H. (2024). AI safety: A poisoned chalice? IEEE Security & Privacy, 22(2), 94-96.
Comments
This sounds important. And I hope it can contribute to computational ethics.
However, on more than one reading it’s not clear how “legitimate institutions” are to be defined and/or recognized in practice. What are their characteristics? Is there a rubric by which to measure, quantify, and compare them? Short of this, CI appears to be a way to assess changes to existing institutions that may or may not be intrinsically legitimate. The effect would be to grandfather in existing institutions as the baseline definition of legitimacy, and we would not expect to achieve improvements to the conventional regimes.
CI is often described as a ‘conservative’ theory in the narrow sense that it does, by default, endorse existing institutions as legitimate and focus on deviations. But CI originated in 2004 or so, and a lot has changed since then. I’d say Helen’s position has gotten more radical as the state of the world has drifted away from older institutional norms.
I agree that CI would be more convincing with an objective calculus of legitimacy grounded in mathematics. That is a problem I’m currently working on.
Very helpful frame, @SebastianBenthall .
What are your thoughts on
— How to handle ethical infractions/breaches?
— …When the institution does not have a mechanism/apparent will to do so?
Thank you for your question.
In what I have presented at a high level, I have left out anything like a specific model of the agents and their dynamics. I’m being intentionally very abstract.
But in my way of thinking about it, any ‘handling’ of an ethical infraction, to be effective, has to be endogenous to those dynamics.
By ‘institution’, I do mean something like a normative (i.e., consisting of norms) pattern of behavior, and this would most often include self-referential norms of sanctioning.
To be an effective institution, it must be in some sense constitutionally autonomous or autopoietic, and that means maintaining its viability as an institution. It can have only so much tolerance for violations of its norms before it ceases to exist.
So, an institution that is not designed to enforce its own existence is a flawed institution; we would expect it to devolve into mere behavior, and lose its legitimacy.
The challenge is how to design self-sustaining legitimate institutions, and these will most likely have a corrective mechanism.
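To illustrate that last point with one more hypothetical toy (arbitrary parameters, my own construction): if sanctioning capacity is itself supplied by compliant agents, then enforcement and compliance either reinforce each other or unravel together, and there is a threshold below which the institution devolves.

```python
# A toy model of an institution that must enforce its own norms.
# Sanctioning capacity is supplied by compliant agents, so the expected
# sanction grows with compliance. All parameters are arbitrary.

def next_compliance(c: float, enforcement: float = 2.0) -> float:
    """One round of updating. The expected sanction is enforcement * c**2
    (sanctions require compliant enforcers, whose reach also scales with
    compliance). Temptations to defect are uniform on [0, 1], so the share
    complying next round equals the expected sanction, capped at 1."""
    return min(1.0, enforcement * c ** 2)

for start in (0.6, 0.4):
    c = start
    for _ in range(20):
        c = next_compliance(c)
    fate = "self-sustaining" if c > 0.9 else "devolves into mere behavior"
    print(f"initial compliance {start}: {fate} (long-run {c:.2f})")
```

Below the threshold, the norm and its enforcement mechanism decay together; above it, each sustains the other. Nothing in the model guarantees which basin a real institution occupies; designing for the self-sustaining basin is the challenge.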
This is good stuff Seb. Good to see you fighting the good fight. I’ve always seen Helen as heavily influenced by Rawls, who is of course highly influenced by Kant. I’m curious if you will do better with “an objective calculus of legitimacy grounded in mathematics”. But then you have Bourdieu (also a big influence), who rejects this universalism… I’m sure you’re aware of this history and are doing the best you can. Oops, I forgot I don’t care about any of this any more.
Thanks, Tap. Nice to hear from you. Why don’t you care about this any more?
My position is: math is truly universal. STEM gets results in part because the foundations are solid. Putting ethics, sociology, political theory, etc. on STEM foundations is the best way to preserve them as the humanities are eroded and otherwise attacked. Bourdieu’s _Science of Science and Reflexivity_ is his account of how objectivity in science is possible even though it is situated in scientific habitus. I recommend it.
For the most part, I think I’m settled in my foundations at this point and am now trying to focus on the technical challenges implied by them. A safer path, in this intellectual climate.
I tend to agree with you, at least more than I ever did previously. All of this seems fine in a purely descriptive sense, but I don’t see how you get from that to any normative ethics, at least without resorting to some form of utilitarianism, which gets you into the whole tricky bit of defining utility, and all of the other ethical problems associated with utilitarianism. But at least we can all agree on math, which is an important first step.
And that is pretty much why I don’t care about this anymore… I was only interested in philosophy to the extent that it provided some normative guidance, and after reading a whole bunch of Western philosophy I’ve concluded that the most coherent and comprehensive thing I’ve ever read on the subject is the Bhagavad Gita, which is only a few thousand words and something I’ve been exposed to since I was a child.
btw, I recently reconnected with our old CL99A professor, Peter Scharf. I assume you took it with him also.
Yes, I took that class with Peter Scharf. If you talk with him again, please tell him that I too was profoundly affected by his class.
I’m glad you’ve settled on a compelling source of wisdom.
I think there are options for mathematizing normativity that go beyond utilitarianism.
I put to you the challenge of developing a formal model of the normative content of the Bhagavad Gita. Probably a worthwhile exercise.
Do your job without worrying about the results. There is an underlying unity to everything that you will eventually return to. You can also pray or meditate.
The normative content is in the details of dharma assignment.
The dharma assignment is tricky. But I take it as: find a job you like, are good at, and that can support your family. Renunciation is also important. WordPress really doesn’t do well at all with threading.
Happy to continue in another medium if you like. You can also probably anticipate my next questions and the general line of inquiry.
I think you can probably anticipate my responses also, and in general I don’t think this inquiry leads anywhere more interesting than where it starts. If you are a scientist, be a scientist, and do it well! I remember you liked Habermas and communicative rationality. I think that’s an excellent reason for wanting to be a scientist. That’s about as normative as it gets, I think.