arXiv preprint of Refutation of Bostrom’s Superintelligence Argument released
I’ve written a lot of blog posts about Nick Bostrom’s book Superintelligence, presenting what I think is a refutation of his core argument.
Today I’ve released an arXiv preprint with a more concise and readable version of this argument. Here’s the abstract:
Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument
In recent years prominent intellectuals have raised ethical concerns about the consequences of artificial intelligence. One concern is that an autonomous agent might modify itself to become “superintelligent” and, in supremely effective pursuit of poorly specified goals, destroy all of humanity. This paper considers and rejects the possibility of this outcome. We argue that this scenario depends on an agent’s capacity to rapidly improve its ability to predict its environment through self-modification. Using a Bayesian model of a reasoning agent, we show that there are important limitations to how an agent may improve its predictive ability through self-modification alone. We conclude that concern about this artificial intelligence outcome is misplaced and better directed at policy questions around data access and storage.
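To give a flavor of the kind of limitation the abstract alludes to (this is my own hedged illustration, not the model from the paper): for a Bayesian agent, predictions are determined by the prior and the observed data, so making the inference machinery more sophisticated cannot extract more than the data contains. The sketch below compares a "modest" agent to a heavily "self-improved" one (here, a hypothetical difference in hypothesis-grid resolution, standing in for compute); given the same evidence, their predictions are nearly identical.

```python
def predictive_prob(heads, tails, grid_size):
    """P(next flip = heads) for a Bayesian agent with a uniform prior
    over coin-bias hypotheses discretized into `grid_size` points."""
    thetas = [(i + 0.5) / grid_size for i in range(grid_size)]
    # Unnormalized posterior weight of each hypothesis given the data.
    weights = [t**heads * (1 - t)**tails for t in thetas]
    total = sum(weights)
    # Posterior-mean probability of heads on the next flip.
    return sum(t * w for t, w in zip(thetas, weights)) / total

# A "modest" agent and a "self-improved" agent see the same 12 flips.
modest = predictive_prob(heads=8, tails=4, grid_size=10)
improved = predictive_prob(heads=8, tails=4, grid_size=100_000)

# Extra inference resolution barely moves the prediction; only new
# data would. This is the data-bounded flavor of the argument.
print(abs(modest - improved) < 0.01)
```

The gap shrinks further as `grid_size` grows, while gathering more flips shifts the prediction itself; the bottleneck is evidence, not self-modification.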
I invite any feedback on this work.