An Interview with the Executive Director of the Singularity Institute
by Sebastian Benthall
Like many people, I first learned about the idea of the technological Singularity while randomly surfing the internet. It was around the year 2000. I googled “What is the meaning of life?” and found an article explaining that at the rate that artificial intelligence was progressing, we would reach a kind of computational apotheosis within fifty years. I guess at the time I thought that Google hadn’t done a bad job at answering that one, all things considered.
Since then, the Singularity’s been in the back of my mind as one of many interesting but perhaps crackpot theories of how things are going to go for us as a human race. People in my circles would dismiss it as “eschatology for nerds,” and then get back to playing Minecraft.
Since then I’ve moved to Berkeley, California, which turns out to be a hub of Singularity research. I’ve met many very smart people who are invested in reasoning about and predicting the Singularity. Though I don’t agree with all of that thinking, this exposure has given me more respect for it.
I have also learned from academic colleagues a new way to dismiss Singulatarians as “the Tea Party of the Information Society.” This piece by Evgeny Morozov is in line with this view of Singulatarianism as a kind of folk ideology used by dot-com elites to reinforce political power. (My thoughts on that piece are here.)
From where I’m standing, Singulatarianism is a controversial and politically important worldview that deserves honest intellectual scrutiny. In October, I asked Luke Muehlhauser, the Executive Director of the Singularity Institute, if I could interview him. I wanted to get a better sense of what the Singularity Institute was about and get material that could demystify Singularitarianism for others. He graciously accepted. Below is the transcript. I’ve added links where I’ve thought appropriate.
SB: Can you briefly describe the Singularity Institute?
LM: The Singularity Institute is a 501(c)(3) charity founded in the year 2000 by Eliezer Yudkowsky and some Internet entrepreneurs who supported his work for a couple years. The mission of the institute is to ensure that the creation of smarter-than-human intelligence benefits society, and the central problem we think has two parts. Number one, very advanced AIs by default will do things that humans don’t really like, because humans have very complicated goals, so almost all possible goals you can give an AI would mean restructuring the world according to goals that are different from human goals. And number two, the transition from human control of the planet to machine control of the planet may be very rapid, because once you get an AI that is better than humans are at designing AIs and doing AI research, it will be able to improve its own intelligence in a loop of recursive self-improvement, and very quickly go from roughly human levels of intelligence to vastly superhuman levels of intelligence, with lots of power to restructure the world according to its preferences.
SB: How did you personally get involved and what’s your role in it?
LM: I personally became involved because I was interested in the cognitive science of rationality, of changing one’s mind successfully in response to evidence, and of choosing actions that are actually aimed towards achieving one’s goals. Because of my interest in the subject matter I was reading the website LessWrong.com, which has many articles about those subjects, and there I also encountered related material on intelligence explosion, which is this idea of a recursively self-improving artificial intelligence. And from there I read more on the subject, read a bunch of papers and articles and so on, and decided to apply to be a visiting fellow. My visiting fellowship began in April of 2011, and then in September of 2011 I was hired as a researcher at the Singularity Institute, and then in November of 2011 I was made its Executive Director.
SB: So, just to clarify, is that Singularity the moment when there’s smarter than human intelligence that’s artificial?
LM: The word Singularity unfortunately has been used to mean many different things, so it is important to always clarify which meaning you are using. For our purposes you could call it the technological creation of greater-than-human intelligence. Other people use it to mean something much broader and more vague, like the acceleration of technology beyond our ability to predict what will happen afterwards, or something vague like that.
SB: So what is the relationship between the artificial intelligence related question and the personal rationality related questions?
LM: Right, well the reason why the Singularity Institute has long had an interest in both rationality and safety mechanisms for artificial intelligence is that the stakes are very, very high when we start thinking about artificial intelligence risks or catastrophic risks in general, and so we want our researchers to not make the kinds of cognitive mistakes that all researchers and all humans tend to make very often, which are these cognitive biases that are so well documented in psychology and behavioral economics. And so we think it’s very important for our researchers to be really world class in changing their minds in response to evidence, in thinking through what the probabilities of different scenarios are rather than going with whichever ones feel intuitive to us, and in thinking clearly about which actions now will actually influence the future in positive ways rather than which actions will accrue status or prestige to ourselves, that sort of thing.
SB: You mentioned that some Internet entrepreneurs were involved in the starting of the organization. Who funds your organization and why do they do that?
LM: The largest single funder of the Singularity Institute is Peter Thiel, who cofounded PayPal and has been involved in several other ventures. His motivations are some concern for existential risk, some enthusiasm for the work of our cofounder and senior researcher Eliezer Yudkowsky, and probably other reasons. Another large funder is Jaan Tallinn, the co-creator of Skype and Kazaa. He’s also concerned with existential risk and the rationality-related work that we do. There are many other funders of the Singularity Institute as well.
SB: Are there other organizations that do similar work?
LM: Yeah, the closest organization to what we do is the Future of Humanity Institute at Oxford University, in the United Kingdom. We collaborate with them very frequently. We go to each others’ conferences, we write papers together, and so on. The Future of Humanity Institute has a broader concern with cognitive enhancement and emerging technologies and existential risks in general, but for the past few years they have been focusing on machine superintelligence, and so they’ve been working on the same issues that the Singularity Institute is devoted to. Another related organization is a new one called the Global Catastrophic Risks Institute. We collaborate with them as well. And again, they are not solely focused on AI risks like the Singularity Institute, but on global catastrophic risks, and AI is one of them.
SB: You mentioned super-human intelligence quite a bit. Would you say that Google is a super-human intelligence?
LM: Well, yeah, so we have to be very careful about all the words that we are using, of course. What I mean by intelligence is this notion of what is sometimes called optimization power, which is the ability to achieve one’s goals in a wide range of environments and under a wide range of constraints. And so for example, humans have a lot more optimization power than chimpanzees. That’s why even though we are slower than many animals and not as strong as many animals, we have this thing called intelligence that allows us to commence farming and science and build cities and put footprints on the moon. And so it is humans that are steering the future of the globe, and not chimpanzees or stronger things like blue whales. So that’s kind of the intuitive notion. There are lots of technical papers that would be more precise. So when I am talking about super-human intelligence, I specifically mean an agent that is as good as or better than humans at just about every skill set that humans possess for achieving their goals. So that would include things like not just mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain, not the kidneys.
SB: To clarify, you mentioned that humans are better than chimpanzees at achieving their goals. Do you mean humans collectively or individually? And likewise for chimpanzees.
LM: Maybe the median chimpanzee versus the median human. There are lots of different ways that you could cash that out. I’m sure there are some humans in a vegetative state that are less effective at achieving their goals than some of the best chimpanzees.
SB: So, whatever this intelligence is, it must have goals?
LM: Yeah, well there are two ways of thinking about this. You can talk about it having a goal architecture that is explicitly written into its code that motivates its behavior. Or, that isn’t even necessary. As long as you can model its behavior as fulfilling some sort of utility function, you can describe its goals that way. In fact, that’s what we do with humans in fields like economics, where you have a revealed-preferences architecture. You measure a human’s preferences over a set of lotteries, and from that you can extract a utility function that describes their goals. We haven’t done enough neuroscience to directly represent what humans’ goals are, if they even have such a thing explicitly encoded in their brains.
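[To make the revealed-preference idea Muehlhauser mentions a bit more concrete, here is a minimal sketch of my own, not his: given observed pairwise choices by an agent, we can assign each outcome a score consistent with those choices. The data and function names are hypothetical illustrations, not anything from economics software.]

```python
# Illustrative sketch: recovering a utility ordering from observed
# pairwise choices, in the spirit of revealed preference.
# All names and data here are hypothetical.

def revealed_utilities(choices):
    """Given observed pairwise choices [(winner, loser), ...],
    score each outcome by how many alternatives it beat."""
    outcomes = {o for pair in choices for o in pair}
    wins = {o: 0 for o in outcomes}
    for winner, _loser in choices:
        wins[winner] += 1
    return wins

# Suppose an agent repeatedly chooses among apples, bread, and coal.
observed = [("apple", "bread"), ("apple", "coal"), ("bread", "coal")]
scores = revealed_utilities(observed)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # apple beat 2 alternatives, bread 1, coal 0
```

[Real revealed-preference theory works with lotteries and recovers a utility function unique up to positive affine transformation; this toy version only recovers an ordering, which is the core idea.]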
SB: It’s interesting that you mentioned economics. So is like a corporation a kind of super-human intelligence?
LM: Um, you could model a corporation that way, except that it’s not clear that corporations are better than all humans at all different things. It would be a kind of weird corporation that was better than the best human, or even the median human, at all the things that humans do. Corporations aren’t usually the best at music and AI research and theorem proving and stock markets and composing novels. And so there certainly are corporations that are better than median humans at certain things, like digging oil wells, but I don’t think there are corporations as good as or better than humans at all things. More to the point, there is an interesting difference here, because corporations are made of lots of humans, and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of skulls, whereas you can have an AI that is the size of a warehouse. Those kinds of things.
SB: There’s a lot of industry buzz now around the term ‘big data’. I was wondering if there’s any connection between rationality or the Singularity and big data.
LM: Certainly. Big data is just another step. It provides opportunity for a lot of progress in artificial intelligence because very often it is easier to solve a problem by throwing some machine learning algorithms at a ton of data rather than trying to use your human skills for modeling a problem and coming up with an algorithm to solve it. So, big data is one of many things that, along with increased computational power, allows us to solve problems that we weren’t solving before, like machine translation or continuous speech synthesis and so on. If you give Google a trillion examples of translations from English to Chinese, then it can translate pretty well from English to Chinese without any of the programmers actually knowing Chinese.
SB: Does a super-intelligence need big data to be so super?
LM: Um, well… we don’t know, because we haven’t built a super-human intelligence yet, but I suspect that big data will in fact be used by the first super-human intelligences. Just because big data came before super-human intelligences, it would make little sense for super-human intelligences not to avail themselves of the available techniques and resources, such as big data. But also such as more algorithmic insights, like Bayes nets. It would be sort of weird for a super-intelligence not to make use of the past century’s progress in probability theory.
SB: You mentioned before the transition from human control of the world to machine control of the world. How does the disproportionality of access to technology affect that if at all? For example, does the Singularity happen differently in rural India than it does in New York City?
LM: It depends a lot on what is sometimes called the ‘speed of takeoff’: whether we have a hard takeoff or a soft takeoff, or somewhere in between. To explain that, a soft takeoff would be a scenario in which you get human-level intelligence. That is, an AI that is about as good as the median human at doing the things that humans do, including composing music, doing AI research, etc. And then this breakthrough spreads quickly but still at a human time-scale, as corporations replace their human workers with these human-level AIs that are cheaper and more reliable and so on, and there is great economic and social upheaval, and the AIs have some ability to improve their own intelligence but don’t get very far, because of the limits of their own intelligence or of the available computational resources, and so there is a very slow transition from human control of the world to machines steering the future, where slow is on the order of years to decades.
Another possible scenario though is hard takeoff, which is: once you have an AI that is better than humans at finding new insights in intelligence, it is able to improve its own intelligence roughly overnight, to find new algorithms that make it more intelligent, just as we are doing now. Humans are finding algorithms that make AIs more intelligent. So now the AI is doing this, and now it has even more intelligence at its disposal to discover breakthroughs in intelligence, and then it has EVEN MORE intelligence with which to discover new breakthroughs in intelligence, and because it’s not being limited by having slow humans in the development loop, it sort of goes from roughly human levels of intelligence to vastly superhuman levels of intelligence in a matter of hours or weeks or months. And then you’ve got a machine that can engage in a global coordinated campaign to achieve its goals and neutralize the human threat to its goals in a way that happens very quickly, instead of over years or decades. I don’t know which scenario will play out, so it’s hard to predict how that will go.
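[The soft/hard takeoff distinction can be illustrated with a toy numerical model. This is my own sketch, not Muehlhauser’s, and the growth rates are arbitrary: the only point is that if each round of research improves capability in proportion to current capability, growth compounds, whereas a loop bottlenecked at a fixed human pace grows only linearly.]

```python
# Toy model: compounding self-improvement vs. human-limited progress.
# All parameters (0.5 gain rate, 10 rounds) are arbitrary illustrations.

def takeoff(intelligence, rounds, recursive):
    """Return the trajectory of capability over a number of rounds."""
    history = [intelligence]
    for _ in range(rounds):
        if recursive:
            gain = 0.5 * intelligence  # smarter AI does better AI research
        else:
            gain = 0.5                 # progress fixed at a human pace
        intelligence += gain
        history.append(intelligence)
    return history

soft = takeoff(1.0, 10, recursive=False)
hard = takeoff(1.0, 10, recursive=True)
print(soft[-1])  # 6.0 (linear: 1 + 10 * 0.5)
print(hard[-1])  # 57.66... (exponential: 1.5 ** 10)
```

[Whether real AI research would actually compound this way is exactly the open question the interview is circling; the model only shows why the two assumptions lead to such different timescales.]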
SB: It seems like there may be other factors besides the nature of intelligence in play. It seems like to wage a war against all humans, a hard takeoff intelligence, if I’m using the words correctly, would have to have a lot of resources available to it beyond just its intelligence.
LM: That’s right. So, that contributes to our uncertainty about how things play out. For example, does one of the first self-improving human-level artificial intelligences have access to the Internet? Or have people taken enough safety precautions that they keep it “in a box”, as they say? Then the question would be: how good is a super-human AI at manipulating its prison guards so that it can escape the box and get onto the Internet? Hackers always know the weakest point: the quickest way to get into a system is to hack the humans, because humans are stupid. So, there’s that question.
Then there are questions like: if it gets onto the Internet, how much computing power is available there? Is there enough cheap computing power available for it to hack through a few firewalls and make a billion copies of itself overnight? Or is the computing power required for a super-human intelligence a significant fraction of the computing power available in the world, so that it can only make a few copies of itself? Another question is: what sorts of resources are available for converting digital intelligence into physical actions in the human world? For example, right now you can order chemicals from a variety of labs, and maybe use a bunch of emails and phone calls to intimidate a particular scientist into putting those chemicals together into a new supervirus or something. But that’s just one scenario, and whenever you describe a detailed scenario like that, that particular scenario is almost certainly false and not going to happen. But there are things like that, lots of ways for digital intelligence to be converted into physical action in the world. How many opportunities there will be for that decades from now, though, is hard to say.
SB: How do you anticipate this intelligence interacting with the social and political institutions around the Internet, supposing it gets to the Internet?
LM: Um, yeah, that’s the sort of situation where one would be tempted to start telling detailed stories about what would happen, but any detailed story would almost certainly be false. It’s really hard to say. I sort of don’t think that a super-human intelligence… if we got to a vastly smarter-than-human intelligence, it seems like it would probably be an extremely inefficient way for it to achieve its goals by way of causing Congress to pass a new bill somehow… that is extremely slow and uncertain… much easier just to invent new technologies and threaten humans militarily, that sort of thing.
SB: So do you think that machine control of the world is an inevitability?
LM: Close to it. Humans are not even close to the most intelligent kind of creature you can have. They are closer to the dumbest creature you can have while also having technological civilization. If you could have a dumber creature with a technological civilization, then we would be having this conversation at that level. So it looks like you can have agents that are vastly more capable of achieving their goals in the world than humans are, and there don’t seem to be any in-principle barriers to doing that in machines. The usual objections that are raised, like “Will machines have intentionality?” or “Will machines have consciousness?”, don’t actually matter for the question of whether they will have intelligent behavior. You don’t need intentionality or consciousness to be as good as humans at playing chess or driving cars, and there’s no reason for thinking we need those things for any of the other things that we like to do. So the main factor motivating this progress is the extreme economic and military advantages to having an artificial intelligence, which will push people to develop incrementally improved systems on the way to full-blown AI. So it looks like we will get there eventually. And then it would be a pretty weird situation in which you had agents that were vastly smarter than humans, but somehow humans were keeping them in cages or keeping them controlled. If we had chimpanzees running the world and humans in cages, humans would be smart enough to figure out how to break out of cages designed by chimpanzees and take over the world themselves.
SB: We are close to running out of time. There are a couple more questions on my mind. One is: I think I understand that intelligence is being understood in terms of optimization power, but also that for this intelligence to count it has to be better at all things than humans…
LM: Or some large fraction of them. I’m still happy to define super-human intelligence with regard to all things that humans do, but of course for taking over the world it’s not clear that you need to be able to write novels well.
SB: Ok, so the primary sorts of goals that you are concerned about are the kinds of goals that are involved in taking over the world or are instrumental to it?
LM: Well, that’s right. And unfortunately, taking over the world is a very good idea for just about any goal that you have. Even if your goal is to maximize Exxon Mobil profits or manufacture the maximal number of paper clips or travel to a distant star, it’s a very good idea to take over the world first if you can, because then you can use all available resources towards achieving your goal to the max. And also, any intelligent AI would correctly recognize that humans are the greatest threat to it achieving its goals, because we will get skittish and worried about what it’s doing and try to shut it off. An AI will of course recognize that that’s true, and if it is at all intelligent it will first seek to neutralize the human threat to it achieving its goals.
SB: What about intelligences that sort of use humans effectively? I’m thinking of an intelligence that was on the Internet. The Internet requires all these human actions for it to be what it is. So why would it make sense for an intelligence whose base of power was the Internet to kill all humans?
LM: Is the scenario you are imagining a kind of scenario where the AI can achieve its goals better with humans rather than neutralizing humans first? Is that what you’re asking?
SB: Yeah, I suppose.
LM: The issue is that unless you define the goals very precisely in terms of keeping humans around or benefiting humans, remember that an AI is capable of doing just about anything that humans can do, and so there aren’t really things for which it would need humans, unless the goal structure were specifically defined in terms of benefiting biological humans. And that’s extremely difficult to do. For example, if you found a precise way to specify “maximize human pleasure” or welfare or something, it might just mean that the AI plugs us all into heroin drips and we never do anything cool. So it’s extremely difficult to specify in math, because AIs are made of math, what it is that humans want. That gets back to the point I was making at the beginning about the complexity and fragility of human values. It turns out we don’t just value pleasure; we have this large complex of values, and indeed different humans have different values from each other. So the problem of AI sort of makes an honest problem of longstanding issues in moral philosophy and value theory and so on.
SB: Ok, one last question, which is: suppose AI is taking off, and we notice that it’s taking off, and the collective intelligence of humanity working together is pitted against this artificial intelligence. Say this happens tomorrow. Who wins?
LM: Well, I mean it depends on so many unknown factors. It may be that if the intelligence is sufficiently constrained and can only improve its intelligence at a slow rate, we might actually notice that one of them is taking off and be able to pull the plug and shut it down soon enough. But that puts us in a very vulnerable state, because if one group has an AI that is capable of taking off, it probably means that other groups are only weeks or months or years or possibly decades behind. And will the correct safety precautions be taken the second, third, and twenty-fifth time?
I thank Luke Muehlhauser for making the time for this interview. I hope to post my reflections on this at a later date.