updates and stubbornness about superintelligence
by Sebastian Benthall
We seem to be in a new moment of media excitement about the implications of artificial intelligence. This time, the moment is driven by the experience of software engineers and other knowledge workers who are automating their work with ‘agents’: Claude Code and the like. The latest generation of models and services is really good at doing things.
Does this change anything about my “position on AI” and superintelligence in particular?
I wrote a brief paper in 2017 about Bostrom’s Superintelligence argument. I concluded that algorithmic self-improvement at the software level would not produce superintelligence. Rather, intelligence growth is limited by data and hardware.
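To sketch the shape of the argument: Bostrom models recursive self-improvement as a feedback loop in which the rate of intelligence growth is optimization power divided by recalcitrance. In rough notation (mine, not Bostrom’s exact symbols):

$$\frac{dI}{dt} = \frac{O(I)}{R(I)}$$

where $I$ is the system’s intelligence, $O(I)$ is the optimization power applied to improving it, and $R(I)$ is recalcitrance, i.e. how hard further improvement is. The intelligence explosion scenario assumes recalcitrance stays low as the system improves itself. My point was that once the easy algorithmic gains are exhausted, further improvement requires new data and new hardware, so $R(I)$ rises and the feedback loop levels off rather than exploding.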
In 2025, this conclusion still holds up, as we’ve seen that the recent impressive advances in AI have depended on tremendous capital expenditure on data centers, high-performance chips, and energy. They have also depended on well-publicized efforts to collect all the text known to humankind for training data.
About 8 years ago, when I was thinking about this, I wrote a bit about the connection between the Superintelligence argument and the Frankfurt School's views on instrumental reason and capitalism. The alignment of AI with capital has been borne out, and has been written about by many others. What is striking about the current moment is just how on-the-nose that alignment is in the US, in terms of the full stack of energy, hardware, models, applications, and then some.
So, so far, no update.
In 2021 I published an article saying that we already had artificial systems with the capacity to outperform individual humans at many tasks. They were and still are called corporations, or firms. We had also replaced markets with platforms, which similarly outperform markets at reducing transaction costs. In that article, Jake Goldenfein and I argue that what ultimately matters are the purposes of the social system that operates the AI technology.
I believe this argument also continues to hold up. The successful models and services we are seeing are corporate accomplishments. The corporation is still the relevant unit of analysis when considering AI.
There are a number of interesting things happening now which I think are undertheorized:
- What are the real economics of AI, given that the supply chains are so long and complex, consisting of both material and intellectual inputs, and that demand is uncertain? This is the trillion-dollar question in terms of valuations, and it’s unanswered. The empirics here are not very good because things are far out of equilibrium.
- Put another way: what does AI mean for the relationships between capital, corporations, labor, and consumers? Some of these relationships are mediated by rules about corporate law, intellectual property, and data use, and so are determinable by law rather than technology. Information law is therefore a key point of political intervention in an economic system that is otherwise determined by laws of nature (energy, computation, etc.).
To put it another way: superintelligence has been happening and continues to happen. Some of this is due to laws of nature. But there is still a meaningful point of human intervention, which is the laws of humanity. Designing and implementing those laws well remains an important challenge.
One last thought. I’ve been inspired by Beniger’s The Control Revolution (1986), which is a historical account of the information economy in terms of cybernetics and information theory. You can ask an AI to tell you more about it, but one item comes to mind: each new information technology first seems to threaten the jobs of people doing information work, and then leads to an expanded number of information jobs. This has to do with the way complexity is and is not managed by the technology. There’s an open question whether this generation of AI is any different. The question is truly open, but my hunch at the moment is that today’s AI systems are creating a lot more complexity than they are controlling. We will see.
