Further distillation of Bostrom’s Superintelligence argument

by Sebastian Benthall

Following up on this outline of the definitions and core argument of Bostrom’s Superintelligence, I will try to zero in on the key mechanisms on which the argument depends.

At the heart of the argument are a number of claims about instrumentally convergent values and self-improvement. It’s important to distill these claims to their logical core, because their validity bears on how probable the various outcomes for humanity are and on how we should invest resources in anticipation of superintelligence.

There are a number of ways to tighten Bostrom’s argument:

Focus the definition of superintelligence. Bostrom leads with the provocative but fuzzy definition of superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” But the overall logic of his argument makes it clear that the domains of interest need not include violin-playing or any number of other activities. Rather, the domains necessary for a Bostrom-style intelligence explosion are those that pertain directly to a system improving its own intellectual capacity. Bostrom speculates about these capacities in two ways. In one section he discusses “cognitive superpowers”, domains of skill that would quicken a superintelligence takeoff. In another section he discusses convergent instrumental values, values that agents with a broad variety of goals would converge on instrumentally.

  • Cognitive Superpowers
    • Intelligence amplification
    • Strategizing
    • Social manipulation
    • Hacking
    • Technology research
    • Economic productivity
  • Convergent Instrumental Values
    • Self-preservation
    • Goal-content integrity
    • Cognitive enhancement
    • Technological perfection
    • Resource acquisition

By focusing on these traits, we can start to see that Bostrom is not really worried about what has been termed an “Artificial General Intelligence” (AGI). He is concerned with a very specific kind of intelligence: one with the capacities to exert its will on the world and, most importantly, to increase its power over nature and over other intelligent systems rapidly enough to attain a decisive strategic advantage. This leads us to a second way we can refine Bostrom’s argument.

Closely analyze recalcitrance. Recall that Bostrom speculates that the condition for a fast-takeoff superintelligence, assuming the system engages in "intelligence amplification", is that recalcitrance stays constant or declines as the system improves. A weakness in his argument is the lack of in-depth analysis of this recalcitrance function. I will argue that for many of the convergent instrumental values and cognitive superpowers at the core of Bostrom’s argument, it is possible to be much more precise about system recalcitrance. That analysis should let us better estimate the likelihood of singleton versus multipolar superintelligence outcomes.
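
To make this takeoff condition concrete, here is a minimal numerical sketch of Bostrom’s rate-of-change model (rate of change in intelligence = optimization power / recalcitrance), under the assumption that the optimization power a system applies to its own improvement scales with its current intelligence. The recalcitrance functions, the cap, and the step sizes below are hypothetical choices for illustration, not anything Bostrom specifies.

```python
def simulate_takeoff(recalcitrance, dt=0.01, i0=1.0, cap=1e6, max_steps=10_000):
    """Euler-integrate Bostrom's rate-of-change model:
        dI/dt = optimization power / recalcitrance,
    under the "intelligence amplification" assumption that the optimization
    power a system applies to improving itself scales with its current
    intelligence I. Returns (steps taken, final intelligence), stopping
    early once intelligence reaches `cap` (a stand-in for takeoff).
    """
    intelligence = i0
    for step in range(max_steps):
        if intelligence >= cap:
            return step, intelligence
        optimization_power = intelligence              # the system works on itself
        intelligence += dt * optimization_power / recalcitrance(intelligence)
    return max_steps, intelligence


# Three hypothetical recalcitrance regimes, for illustration only:
regimes = {
    "constant R":  lambda I: 1.0,      # dI/dt ~ I     -> exponential growth
    "declining R": lambda I: 1.0 / I,  # dI/dt ~ I**2  -> super-exponential (fast takeoff)
    "rising R":    lambda I: I,        # dI/dt ~ 1     -> merely linear growth
}

for name, R in regimes.items():
    steps, final = simulate_takeoff(R)
    print(f"{name:12s} steps: {steps:6d}  final intelligence: {final:.3g}")
```

The point is only qualitative: with constant or declining recalcitrance the trajectory reaches the cap quickly, while rising recalcitrance yields merely linear growth. This is why the shape of the recalcitrance function matters so much to the singleton versus multipolar question.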

For example, it’s worth noting that a number of the “superpowers” are explicitly in the domain of the social sciences. “Social manipulation” and “economic productivity” are vastly complex fields of research in their own right. Each may well have bounds on how effective an intelligent system can be at them, no matter how much “optimization power” is applied to the task. The capacity of those being manipulated to understand instructions is one such bound. The fragility or elasticity of markets could be another.

For intelligence amplification, strategizing, technological research/perfection, and cognitive enhancement in particular, there is a wealth of literature in artificial intelligence and cognitive science that addresses the technical limits of these domains. Such technical limitations are a natural source of recalcitrance and an impediment to fast takeoff.