We seem to be in a new moment of media excitement about the implications of artificial intelligence. This time, the moment is driven by the experience of software engineers and other knowledge workers who are automating their work with ‘agents’: Claude Code and the like. The latest generation of models and services is really good at getting things done.
Does this change anything about my “position on AI” and superintelligence in particular?
I wrote a brief paper in 2017 about Bostrom’s Superintelligence argument. I concluded that algorithmic self-improvement at the software level would not produce superintelligence. Rather, intelligence growth is limited by data and hardware.
In 2025, this conclusion still holds up: the recent impressive advances in AI have depended on tremendous capital expenditure on data centers, high-performance chips, and energy. They have also depended on well-publicized efforts to collect all the text known to humankind for training data.
About 8 years ago, when I was thinking about this, I wrote a bit about the connection between the Superintelligence argument and the Frankfurt School’s views on instrumental reason and capitalism. The alignment of AI with capital has been borne out, and has been written about by many others. What is striking about the current moment is just how on-the-nose that alignment is in the US, in terms of the full stack of energy, hardware, models, applications, and then some.
So, so far, no update.
In 2021 I published an article saying that we already had artificial systems with the capacity to outperform individual humans at many tasks. They were, and still are, called corporations or firms. We had also replaced markets with platforms, which are similarly more performant in terms of reducing transaction costs. In that article, Jake Goldenfein and I argue that what ultimately matters are the purposes of the social system that operates the AI technology.
I believe this argument also continues to hold up. The successful models and services we are seeing are corporate accomplishments. The corporation is still the relevant unit of analysis when considering AI.
There are a number of interesting things happening now which I think are undertheorized:
What is the real economics of AI, given that the supply chains are so long and complex, consisting of both material and intellectual inputs, and the demand side of the market is uncertain? This is the trillion-dollar question in terms of valuations, and it’s unanswered. The empirics here are not very good because things are far out of equilibrium.
Put another way: what does AI mean for the relationships between capital, corporations, labor, and consumers? Some of these relationships are mediated by rules of corporate law, intellectual property, and data use, and so are determinable by law rather than technology. Information law is therefore a key point of political intervention in an economic system that is otherwise determined by laws of nature (energy, computation, etc.).
Put yet another way: superintelligence has been happening and continues to happen. Some of this is due to laws of nature. But there is still a meaningful point of human intervention, which is the laws of humanity. Designing and implementing those laws well remains an important challenge.
One last thought. I’ve been inspired by Beniger’s The Control Revolution (1986), which is a historical account of the information economy in terms of cybernetics and information theory. You can ask an AI to tell you more about it, but one item comes to mind: each new information technology first seems to threaten the jobs of people doing information work, and then leads to an expanded number of information jobs. This has to do with the way complexity is and is not managed by the technology. There’s an open question whether this generation of AI is any different. The question is truly open, but my hunch at the moment is that today’s AI systems are creating a lot more complexity than they are controlling. We will see.
Complex systems theory is a way of thinking about systems with many interacting parts and functions. It draws on physics and the science of modeling dynamic systems. It’s a trans-disciplinary, quantitative science of everything. It often, and increasingly, gets applied to social systems, often through the methods of agent-based modeling (ABM). ABM has a long history in computational sociology. More recently, it has made inroads into economics and finance (Axtell and Farmer, 2022). That’s valuable intellectual territory to win over because, of course, economics and finance are vitally important to both private and public interests. Its progress there is gradual but steady. ABM and complex systems methods have no dogma besides mathematical and computational essentials. Their eventual triumph is more or less assured. As I’ve argued, ABM and complex systems theory are thus an exciting frontier for legal theory (Benthall and Strandburg, 2021). For these reasons, one line of my research involves developing computational frameworks (i.e., software libraries, mathematical scaffolding) for computational social scientific modeling.
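To give a flavor of the method, here is a minimal ABM sketch in Python, loosely in the spirit of Schelling’s famous segregation model. The grid size, tolerance threshold, and step count are illustrative assumptions, not parameters from any particular study.

import random

# A minimal agent-based model in the spirit of Schelling's segregation
# model: two types of agents relocate when too few of their neighbors
# share their type. All parameters here are illustrative assumptions.
SIZE, EMPTY_RATE, TOLERANCE, STEPS = 20, 0.1, 0.4, 50

grid = [[None if random.random() < EMPTY_RATE else random.choice([0, 1])
         for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(x, y):
    # Moore neighborhood on a wrapping (toroidal) grid.
    return [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def like_share(x, y):
    # Fraction of occupied neighboring cells holding the same type.
    occupied = [n for n in neighbors(x, y) if n is not None]
    if not occupied:
        return 1.0
    return sum(n == grid[x][y] for n in occupied) / len(occupied)

for _ in range(STEPS):
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] is None]
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] is not None and like_share(x, y) < TOLERANCE]
    random.shuffle(movers)
    for x, y in movers:  # each unhappy agent jumps to a random vacancy
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empties.append((x, y))

# Macro-level observable: average like-type neighbor share. Sweeping
# TOLERANCE reveals a tipping point between mixed and segregated states.
agents = [(x, y) for x in range(SIZE) for y in range(SIZE)
          if grid[x][y] is not None]
print(sum(like_share(x, y) for x, y in agents) / len(agents))

The individual rule is trivial, but the aggregate pattern–segregation emerging from mild preferences–is exactly the kind of system-level behavior that ABM is designed to expose, and that eludes equilibrium analysis.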
Contextual Integrity (CI) is an ethical theory developed by Helen Nissenbaum. It is especially applicable to questions of the ethics of information technology and computation. Central to the theory is the idea of “appropriate information flow”: flows of (personal) information which conform with “information norms”. According to CI, information norms are legitimized by a balance of societal values, contextual purposes, and individual ends. The work of the CI ethicist is to wrestle with the alignments and contradictions between these values, purposes, and ends to identify the most legitimate norms for a given context. When the legitimate norms are identified, it is then in principle possible to design and deploy technology in accordance with these norms.
CI is a philosophy grounded in social theory. It has never been robustly quantified and many people think this is impossible to do. I’m not among these people. In fact, much of my work is about trying to quantify or model CI. It should come as no surprise, then, that I now see CI in terms of complexity theory. It has struck me recently that what this amounts to, more or less, is a computational social theory of ethics! This idea is exciting to me, and one day I’ll want to write it down in detail. For now, I have some nice diagrams and notes from a recent presentation I wanted to share.
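In that spirit, here is a minimal sketch, in Python, of how a contextual information norm might be represented and checked. The five-parameter structure (sender, recipient, subject, information type, transmission principle) follows Nissenbaum’s formulation; the class names, the wildcard convention, and the healthcare example are my own hypothetical illustrations, not an established implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    info_type: str
    transmission_principle: str

@dataclass(frozen=True)
class Norm:
    sender: str = "*"  # "*" is a wildcard matching any value
    recipient: str = "*"
    subject: str = "*"
    info_type: str = "*"
    transmission_principle: str = "*"

    def permits(self, flow):
        pairs = [(self.sender, flow.sender),
                 (self.recipient, flow.recipient),
                 (self.subject, flow.subject),
                 (self.info_type, flow.info_type),
                 (self.transmission_principle, flow.transmission_principle)]
        return all(pattern in ("*", value) for pattern, value in pairs)

def appropriate(flow, norms):
    # In CI terms: a flow is appropriate if it conforms to some
    # entrenched norm of its context.
    return any(norm.permits(flow) for norm in norms)

# A hypothetical healthcare-context norm: patients share diagnoses with
# physicians under a principle of confidentiality.
norms = [Norm(sender="patient", recipient="physician",
              info_type="diagnosis", transmission_principle="confidentiality")]

print(appropriate(Flow("patient", "physician", "patient",
                       "diagnosis", "confidentiality"), norms))  # True
print(appropriate(Flow("patient", "advertiser", "patient",
                       "diagnosis", "sale"), norms))             # False

This leaves out everything hard, of course: where the norms come from, how they are legitimized, and how they change. But even this toy representation makes the theory’s parameters something a model can compute over.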
CI is a theory of ethics that is ultimately concerned with the way that values, purposes, and ends legitimize socially understood practices. The ethicist’s job, for CI, is to design legitimate institutions. A problem for the ethicist is that some institutions can be legitimate, but utopian, in that they are not stable behavioral patterns for the sociotechnical system. Complex systems theory, as a descriptive science, is well adapted to modeling systems and identifying the regular behaviors within them, under varying conditions. Borrowing a notion from physics, a system can exhibit many regular behavioral states, which we might call phases. For example, it is well known that water has many different phases, depending on the temperature: ice, liquid water, steam, etc.
Norms have both descriptive and (ahem) normative dimensions. (This confusing jargon is part of why it’s so hard to make progress in this area.) In other words, for there to be an actually existing norm, it has to be both regular and, to be ethical according to CI, legitimate.
There are critics of CI who argue that one problem with it is that it assumes an apolitical consensus on information norms without addressing how norms might be distorted by, e.g., power in society. This is not terribly fair to Nissenbaum’s broader corpus of work, which certainly acknowledges political complexity (see, for example, the recent Nissenbaum, 2024). Suffice it to say here that not all individual ends end up being ‘legitimized’ when ethicists assess things, and that legitimization is always political. Moreover, individual ends and politics can, of course, often drive system behavior away from legitimate institutions. We can’t always have nice things.
Nevertheless, it remains useful to consider how and under what conditions a system could remain legitimate despite technological change. This is precisely what the original CI design heuristic offers: a procedure for evaluating what to do when a new technology creates a disruptive change in societal information flows.
Ideally, for CI, when a new technology destabilizes the sociotechnical system’s behavior and threatens it with illegitimate practices, society reacts (through journalism, through ethics, through a political process, through private choices and actions, etc.) and returns the system to a regular behavioral pattern that is legitimate. This might not be the same behavior as the system started with. It might be even better. And that’s OK.
What’s bad, for CI, is if the system gets stuck in an illegitimate but still robust phase.
While there are some applications of CI that serve the anodyne ends of parsing and implementing uncontroversial privacy rules, there are other uses of CI as a radical critique of the status quo. This is well exemplified by Ido Sivan-Sevilla et al.’s comments on the FTC ANPR on Commercial Surveillance and Lax Data Security Practices (2022), a succinct and pointed condemnation of “notice and consent” practices in commercial surveillance. We live in a world in which standard, even ubiquitous, technology norms depend on “laughable legal fictions”, such as the idea that users of web services are legitimate parties to contracts with vendors. It is well documented how these fictions have been enshrined into law by decades of pressure by the technology sector in courts and government (Cohen, 2019).
Together, CI and complex systems theory can show how society can be a winner, or loser, beyond the sum of individual outcomes. There are certainly those who have argued that, essentially, “there is no such thing as society”, and that voluntary, binary transactions between parties are all there is. An anarchic, libertarian, or laissez-faire system certainly serves the individual ends of some, and is to some extent stable until the lords of anarchy create new systems of rules that are in their interest. It is difficult to analyze the social costs of these political changes in terms of “individual harms”, because the true marginal cost is not measurable at the level of the individual, but rather at the level of the phase transition. Complex systems theory allows for this broader view of what is at stake.
This approach also, I think, helps convey the fragility of legitimate institutions. Nothing guarantees legitimacy. Legitimate institutions typically constrain the behavior of some actors in ways that they individually do not enjoy. There are social processes which can steer a system towards a more legitimate phase, but these will meet with resistance, sometimes fail, and can be coopted by bad faith actors serving their own ends.
Indeed, there are those who would say we do not live in a legitimate system and have not lived in one for a long time. “Legitimate for whom?” Even if this is so, CI invites us to have a productive dialog about what legitimacy would entail, by sorting out different motivations and looking at the options for balancing them out. This good faith search for resolutions is often thankless and unrewarded, but certainly we would be worse off without it. On the other hand, arguments about legitimate institutions that are divorced from realistic understandings of sociotechnical processes are easily deployed as propaganda and ideology to cover illegitimate behavior. Ethics requires a science of sociotechnical systems; sociotechnical systems are complex; complex systems theory is a solid foundation for such a science.
References
Axtell, R. L., & Farmer, J. D. (2022). Agent-based modeling in economics and finance: Past, present, and future. Journal of Economic Literature, 1-101.
Benthall, S., & Strandburg, K. J. (2021). Agent-based modeling as a legal theory tool. Frontiers in Physics, 9, 666386.
Cohen, J. E. (2019). Between truth and power. Oxford University Press.
Nissenbaum, H. (2024). AI Safety: A Poisoned Chalice? IEEE Security & Privacy, 22(2), 94-96.
A recent Dear Colleagues Letter from the National Science Foundation Directorate for Computer and Information Science and Engineering (CISE) calls for proposals for projects to envision research priorities. It is specifically not for research itself, but for promising ways to surface and communicate new R&D directions.
Essentially, the CISE directorate is asking for people to figure out a way to identify the future of computer and information science research. Just, you know, putting it out there.
The CISE Directorate is roughly 38 years old at the time of this writing, and computing and information science have, in that time, transformed pretty much everything.
At the same time, there’s a sense in which computer science, at this present moment, feels… saturated. Maybe, indeed, lacking in future vision.
Why do I feel this is so? At least four reasons:
a) In the 90’s and 00’s, so much of the potential of computer science was being discovered and unleashed by startups. Even the companies that are today Big Tech were, then, startups. Now, notoriously, a lot of startups are just weird offshoots of Big Tech companies designed to be absorbed back in when legal or market conditions are favorable. So the technical research agenda is being set by huge companies with in-house research, rather than by a loose network of innovators.
b) “Artificial Intelligence” has for a long time meant “anything that computers can’t do yet”, with the Turing Test as one example of what was still an unsolved problem in computer science. Deep learning has been blasting these unsolved problems out of the water for almost a decade now. I’d argue that the newish LLM-powered chatbots appear so ominously to be a form of “general” AI because they command natural language so convincingly — the key challenge of the Turing Test. So, computer science is running out of unsolved problems.
c) At the same time, this widely hyped and lauded generation of AI, which has been credited with potentially literally apocalyptic powers, has gotten over the hump of the Gartner hype cycle, and it still can’t get hands right. On the other hand, it is supposed to be making software engineering obsolete as a profession, which would in principle cut down on the demand for computer science research.
d) It is now very clear that the success of computing and information science basic research depends on its uptake in commercial and industrial settings, and that these economics depend on business, legal, and social logic that is outside the scope of computer science research per se. Computer and information science research is not successful in virtue of, but rather in spite of, its agnosticism about social context. And, increasingly, that social context is being included within the scope of computer and information science.
So, what is to be done?
One answer, which I intend seriously, is imperialism. By this I mean the expansion of computer and information science research into areas beyond its core. Another answer is that it can occupy itself by adapting to critique. I actually think a combination of both is the best answer.
By imperialism, I mean searching for unsolved problems in other sciences, and trying to crack them with computational methods. This has been done already with Go and protein folding. But most problems in the social sciences remain unsolved problems, computationally. There are indeed parts of the social sciences that are opaque to themselves and without the guiding light of computational theory.
By adapting to critique, I mean responding to the now-ample critical literature, mainly produced by humanistic scholars (some legal, some STS, etc.), which aims to show the shortcomings of computer science methodology. Indeed, a lot of “information science” today operates at this critical or political level. But humanistic critique tends to stop at the level of anthropological observation.
What is not yet solved is the internalization of these critiques into computational and information theory and methods, which entail advances in the foundations of computational social science.
There are at least three research arenas that I know of which are getting at parts of these problems.
a) The Agent Foundations research agendas (e.g., PIBBSS, Causal Incentives) that have spun out of the AI Safety research communities. This work has come to understand that foundational advances in what an agent is, in terms of computation and information, are needed to address longtermist AI safety concerns, and perhaps also more pressing problems of AI compliance in the short term. This has quite a bit of funding from Effective Altruist philanthropists.
b) Various computational institutional theory projects that can be found in the vicinity of Metagov. A lot of this is motivated by the idea of the truly self-governing digital community, a long-held Internet dream, one which got an influx of funding and interest from the blockchain boom. That blockchain/crypto flavor has left it, to some, with a funny smell. But other avenues, such as the Institutional Grammar Research Initiative, have a more grounded academic stance.
c) Research into the computational foundations of agent-based modeling, such as that led by Michael Wooldridge and Anisoara Calinescu at Oxford University. Part of the interdisciplinary social science mix at the Institute for New Economic Thinking, this research vein pursues computational methods research that pushes the limits of what social systems can be modeled with computers.
The trouble with social scientific problems is that they are extremely hard. They can involve multiple agents in intractable situations. Today, we have almost no social systems that are not also sociotechnical systems in which the technology itself creates complications, so modeling these systems is recursive and perhaps necessarily approximate. To me, these problems remain philosophically tantalizing, when so many other issues seem already to be reducible to fundamentals. Maybe this is the direction of the future of computer and information science research.
Perhaps you’ve had this moment: it’s in the wee hours of the morning. You can’t sleep. The previous day was another shock to your sense of order in the universe and your place in it. You’ve begun to question your political ideals, your social responsibilities. Turning aside you see a book you read long ago that you remember gave you a sense of direction–a direction you have since repudiated. What did it say again?
I’m referring to Herbert Marcuse’s One-Dimensional Man, published in 1964. Whitfield in Dissent has a great summary of Marcuse’s career–a meteoric rise, a fast fall. He was a student of Heidegger and the Frankfurt School and applied that theory in a timely way in the 60’s.
My memory of Marcuse had been reduced to the Frankfurt School themes–technology transforming all scientific inquiry into operationalization and the resulting cultural homogeneity. I believe now that I had forgotten at least two important points.
The first is the notion of technological rationality–that pervasive technology changes what people think of as rational. This is different from instrumental rationality, the means-ends rationality of an agent, which Frankfurt School thinkers tend to believe drives technological development and adoption. Rather, this is a claim about the effect of technology on society’s self-understanding. An example might be how the ubiquity of Facebook has changed our perception of personal privacy.
So Marcuse is very explicit about how artifacts have politics in a very thick sense, though he is rarely cited in contemporary scholarly discourse on the subject. Credit for this concept typically goes to Langdon Winner, citing his 1980 publication “Do Artifacts Have Politics?” Fred Turner’s From Counterculture to Cyberculture gives only the briefest of mentions to Marcuse, despite his impact on counterculture and his concern with technology. I suppose this means the New Left, associated with Marcuse, had little to do with the emergence of cyberculture.
More significantly for me than this point was a second: Marcuse’s outline of the transcendental project. I’ve been thinking about this recently because I’ve met a Kantian at Berkeley, and this has refreshed my interest in transcendental idealism and its intellectual consequences. In particular, Foucault described himself as one following Kant’s project, and in our discussion of Foucault in Classics it became discursively clear, in a moment I may never forget, precisely how well Foucault succeeded in this.
The revealing question was this. For Foucault, all knowledge exists in a particular system of discipline and power. Scientific knowledge orders reality in such and such a way, depends for its existence on institutions that establish the authority of scientists, etc. Fine. So, one asks, what system of power does Foucault’s knowledge participate in?
The only available answer is: a new one, where Foucauldeans critique existing modes of power and create discursive space for modes of life beyond existing norms. Foucault’s ideas are tools for transcending social systems and opening new social worlds.
That’s great for Foucault, and we’ve seen plenty of counternormative social movements make successful use of him. But that doesn’t help with the problems of the technologization of society. Here, Marcuse is more relevant. He is also much more explicit about his philosophical intentions in, for example, this account of the transcendent project:
(1) The transcendent project must be in accordance with the real possibilities open at the attained level of the material and intellectual culture.
(2) The transcendent project, in order to falsify the established totality, must demonstrate its own higher rationality in the threefold sense that
(a) it offers the prospect of preserving and improving the productive achievements of civilization;
(b) it defines the established totality in its very structure, basic tendencies, and relations;
(c) its realization offers a greater chance for the pacification of existence, within the framework of institutions which offer a greater chance for the free development of human needs and faculties.
Obviously, this notion of rationality contains, especially in the last statement, a value judgment, and I reiterate what I stated before: I believe that the very concept of Reason originates in this value judgment, and that the concept of truth cannot be divorced from the value of Reason.
I won’t apologize for Marcuse’s use of the dialect of German Idealism because if I had my way the kinds of concepts he employs and the capitalization of the word Reason would come back into common use in educated circles. Graduate school has made me extraordinarily cynical, but not so cynical that it has shaken my belief that an ideal–really any ideal, but in particular as robust an ideal as Reason–is important for making society not suck, and that it’s appropriate to transmit such an ideal (and perhaps only this ideal) through the institution of the university. These are old fashioned ideas and honestly I’m not sure how I acquired them myself. But this is a digression.
My point is that in this view of societal progress, society can improve itself, but only by transcending itself and in its moment of transcendence freely choosing an alternative that expands humanity’s potential for flourishing.
“Peachy,” you say. “Where’s the so what?”
Besides that I think the transcendent project is a worthwhile project that we should collectively try to achieve? Well, there’s this: I think that most people have given up on the transcendent project, and that this is a shame. Specifically, I’m disappointed in the critical project, which has since the 60’s become enshrined within the social system, for no longer aspiring to transcendence. Criticality has, alas, been recuperated. (I have in mind here, for example, what has been called critical algorithm studies.)
And then there’s this: Marcuse’s insight into the transcendent project is that it has to “be in accordance with the real possibilities open at the attained level of the material and intellectual culture” and also that “it defines the established totality in its very structure, basic tendencies, and relations.” It cannot transcend anything without first including all of what is there. And this is precisely the weakness of this critical project as it now stands: that it excludes the mathematical and engineering logic that is at the heart of contemporary technics and thereby, despite its lip service to giving technology first-class citizenship within its Actor Network, in fact fails to “define the established totality in its very structure, basic tendencies, and relations.” There is a very important body of theoretical work at the foundation of computer science and statistics, the theory that grounds the instrumental force and also systemic ubiquity of information technology and now data science. The continued crises of our now very, very late modern capitalism are due partly, IMHO, to our failure to dialectically synthesize the hegemonic computational paradigm, which is not going to be defeated by ‘refusal’, with expressions of human interest that resist it.
I’m hopeful because recently I’ve learned about new research agendas that may be on their way to accomplishing just this. I doubt they will take on the perhaps too grandiose mantle of “the transcendent project.” But I for one would be glad if they did.
It would be easy to be discouraged by early experiments with bluestocking.
sb@lebenswelt:~/dev/bluestocking$ python factchecker.py "Courage is what makes us. Courage is what divides us. Courage is what drives us. Courage is what stops us. Courage creates news. Courage demands more. Courage creates blame. Courage brings shame. Courage shows in school. Courage determines the cool. Courage divides the weak. Courage pours out like a leak. Courage puts us on a knee. Courage makes us free. Courage makes us plea. Courage helps us flee. Corey Fauchon"
Looking up Fauchon
Lookup failed
Looking up shame
Looking up news
Looking up puts
Lookup failed
Looking up leak
Lookup failed
Looking up stops
Lookup failed
Looking up Courage
Looking up helps
Lookup failed
Looking up divides
Lookup failed
Looking up shows
Lookup failed
Looking up demands
Lookup failed
Looking up pours
Lookup failed
Looking up brings
Lookup failed
Looking up weak
Lookup failed
Looking up drives
Lookup failed
Looking up free
Looking up blame
Lookup failed
Looking up Corey
Lookup failed
Looking up plea
Lookup failed
Looking up knee
Looking up flee
Lookup failed
Looking up cool
Looking up school
Looking up determines
Lookup failed
Looking up like
Looking up us
Lookup failed
Looking up creates
Lookup failed
Looking up makes
Lookup failed
Building knowledge base
Querying knowledge base with original document
Consistency: 0
Contradictions: []
Supported: []
Novel: [(True, 'helps', 'flee'), (True, 'helps', 'us'), (True, 'determines', 'cool'), (True, 'like', 'leak'), (True, 'puts', 'knee'), (True, 'puts', 'us'), (True, 'pours', 'leak'), (True, 'pours', 'like'), (True, 'brings', 'shame'), (True, 'drives', 'us'), (True, 'stops', 'us'), (True, 'creates', 'blame'), (True, 'creates', 'news'), (True, 'Courage', 'shame'), (True, 'Courage', 'news'), (True, 'Courage', 'puts'), (True, 'Courage', 'leak'), (True, 'Courage', 'stops'), (True, 'Courage', 'helps'), (True, 'Courage', 'divides'), (True, 'Courage', 'shows'), (True, 'Courage', 'demands'), (True, 'Courage', 'pours'), (True, 'Courage', 'brings'), (True, 'Courage', 'weak'), (True, 'Courage', 'drives'), (True, 'Courage', 'free'), (True, 'Courage', 'blame'), (True, 'Courage', 'plea'), (True, 'Courage', 'knee'), (True, 'Courage', 'flee'), (True, 'Courage', 'cool'), (True, 'Courage', 'school'), (True, 'Courage', 'determines'), (True, 'Courage', 'like'), (True, 'Courage', 'us'), (True, 'Courage', 'creates'), (True, 'Courage', 'makes'), (True, 'us', 'knee'), (True, 'us', 'flee'), (True, 'us', 'plea'), (True, 'us', 'free'), (True, 'Corey', 'Fauchon'), (True, 'makes', 'plea'), (True, 'makes', 'free'), (True, 'makes', 'us'), (True, 'divides', 'weak'), (True, 'divides', 'us'), (True, 'shows', 'school')]
But, then again, our ambitions are outlandish. Nevertheless, there is a silver lining:
sb@lebenswelt:~/dev/bluestocking$ python factchecker.py "The sky is not blue."
Looking up blue
Looking up sky
Building knowledge base
Querying knowledge base with original document
Consistency: -1
Contradictions: [(True, 'sky', 'blue')]
Supported: []
Novel: []
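For the curious, here is a minimal sketch of the kind of consistency scoring this output suggests: documents reduced to polarity-tagged word-pair triples, checked against a knowledge base built the same way. The extraction step is elided, and the function below is a hypothetical reconstruction, not bluestocking’s actual code.

# A minimal sketch of triple-based consistency checking, in the spirit
# of the output above. Triples are (polarity, head, tail); turning raw
# text into triples (the hard part) is assumed to have happened already.
def check(doc_triples, kb_triples):
    kb = {(h, t): pol for pol, h, t in kb_triples}
    contradictions, supported, novel = [], [], []
    for pol, h, t in doc_triples:
        if (h, t) not in kb:
            novel.append((pol, h, t))        # the KB says nothing about this pair
        elif kb[(h, t)] == pol:
            supported.append((pol, h, t))    # the KB agrees
        else:
            contradictions.append((pol, h, t))  # the KB asserts the opposite
    consistency = len(supported) - len(contradictions)
    return consistency, contradictions, supported, novel

# "The sky is not blue" yields a negated triple; checked against a
# knowledge base asserting the sky is blue, it scores -1.
print(check([(False, 'sky', 'blue')], [(True, 'sky', 'blue')]))
# -> (-1, [(False, 'sky', 'blue')], [], [])

Everything difficult, of course, lives in the elided extraction and lookup steps, as the ‘Courage’ run above makes plain.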