The relationship between Bostrom’s argument and AI X-Risk

by Sebastian Benthall

One reason why I have been writing about Bostrom’s superintelligence argument is because I am acquainted with what could be called the AI X-Risk social movement. I think it is fair to say that this movement is a subset of Effective Altruism (EA), a laudable movement whose members attempt to maximize their marginal positive impact on the world.

The AI X-Risk subset, a vocal group within EA, sees the emergence of a superintelligent AI as one of several risks that is notable because it could ruin everything. AI is considered a “global catastrophic risk,” unlike more mundane risks such as tsunamis and bird flu. AI X-Risk researchers argue that because of the magnitude of the consequences of the risk they are trying to anticipate, they must raise more funding and recruit more researchers.

While I think this is noble, I think it is misguided for reasons that I have been outlining in this blog. I am motivated to make these arguments because I believe that there are urgent problems/risks that are conceptually adjacent (if you will) to the problem AI X-Risk researchers study, but that the focus on AI X-Risk in particular diverts interest away from them. In my estimation, as more funding has been put into evaluating potential risks from AI many more “mainstream” researchers have benefited and taken on projects with practical value. To some extent these researchers benefit from the alarmism of the AI X-Risk community. But I fear that their research trajectory is thereby distorted from where it could truly provide maximal marginal value.

My reason for targeting Bostrom’s argument for the existential threat of superintelligent AI is that I believe it’s the best defense of the AI X-Risk thesis out there. In particular, if valid, the argument should significantly raise the expected probability of an existentially risky AI outcome. For Bostrom, such an outcome is a likely natural consequence of advancement in AI research more generally, because of recursive self-improvement and convergent instrumental values.

As I’ve informally workshopped this argument, I’ve come upon this objection: even if it is true that a superintelligent system would not, for systematic reasons, become an existentially risky singleton, that does not mean that somebody couldn’t develop such a superintelligent system in an unsystematic way. There is still an existential risk, even if it is much lower. And because existential risks are so important, surely we should prepare ourselves for even this low-probability event.

There is something inescapable about this logic. However, the argument applies equally well to all kinds of potential apocalypses, such as enormous meteors crashing into the earth and biowarfare-produced zombies. Without some kind of accounting of the likelihood of these outcomes, it’s impossible to budget rationally among them.

Moreover, I have to call into question the rationality of this counterargument. If Bostrom’s arguments are used in defense of the AI X-Risk position, but the argument is then dismissed as unnecessary when it is challenged, that suggests that the AI X-Risk community is committed to its cause for reasons besides Bostrom’s argument. Perhaps these reasons are unarticulated. One could come up with all kinds of conspiratorial hypotheses about why a group of people would want to disingenuously spread the idea that superintelligent AI poses an existential threat to humanity.

The position I’m defending on this blog (until somebody convinces me otherwise–I welcome all comments) is that a superintelligent AI singleton is not a significantly likely X-Risk. Other outcomes that might be either very bad or very good, such as ones with many competing and cooperating superintelligences, are much more likely. I’d argue that the latter is more or less what we have today, if you consider sociotechnical organizations to be a form of collective superintelligence. This makes research into this topic not only impactful in the long run, but also relevant to problems faced by people now and in the near future.