Digifesto

Tag: artificial intelligence

Know-how is not interpretable so algorithms are not interpretable

I happened upon Hildreth and Kimble’s “The duality of knowledge” (2002) earlier this morning while writing this and have found it thought-provoking through to lunch.

What’s interesting is that it is (a) 12 years old, (b) a rather straightforward analysis of information technology, expert systems, ‘knowledge management’, etc. in light of solid post-Enlightenment thinking about the nature of knowledge, and (c) an anticipation of the problems of ‘interpretability’ that were, as of a couple of months ago at least, an active topic of academic discussion. Or so I hear.

This is the paper’s abstract:

Knowledge Management (KM) is a field that has attracted much attention both in academic and practitioner circles. Most KM projects appear to be primarily concerned with knowledge that can be quantified and can be captured, codified and stored – an approach more deserving of the label Information Management.

Recently there has been recognition that some knowledge cannot be quantified and cannot be captured, codified or stored. However, the predominant approach to the management of this knowledge remains to try to convert it to a form that can be handled using the ‘traditional’ approach.

In this paper, we argue that this approach is flawed and some knowledge simply cannot be captured. A method is needed which recognises that knowledge resides in people: not in machines or documents. We will argue that KM is essentially about people and the earlier technology driven approaches, which failed to consider this, were bound to be limited in their success. One possible way forward is offered by Communities of Practice, which provide an environment for people to develop knowledge through interaction with others in an environment where knowledge is created, nurtured and sustained.

The authors point out that Knowledge Management (KM) is an extension of the earlier program of Artificial Intelligence and depends on a model of knowledge which maintains that knowledge can be explicitly represented and hence stored and transferred. They propose an alternative way of thinking about things based on the Communities of Practice framework.

A lot of their analysis is about the failures of “expert systems”, which is a term that has fallen out of use but means basically the same thing as the contemporary non-computational scholarly use of ‘algorithm’. An expert system was a computer program designed to make decisions about things. Broadly speaking, a search engine is a kind of expert system. What’s changed are the particular techniques and algorithms that such systems employ, and their relationship with computing and sensing hardware.

Here’s what Hildreth and Kimble have to say about expert systems in 2002:

Viewing knowledge as a duality can help to explain the failure of some KM initiatives. When the harder aspects are abstracted in isolation the representation is incomplete: the softer aspects of knowledge must also be taken into account. Hargadon (1998) gives the example of a server holding past projects, but developers do not look there for solutions. As they put it, ‘the important knowledge is all in people’s heads’, that is the solutions on the server only represent the harder aspects of the knowledge. For a complete picture, the softer aspects are also necessary. Similarly, the expert systems of the 1980s can be seen as failing because they concentrated solely on the harder aspects of knowledge. Ignoring the softer aspects meant the picture was incomplete and the system could not be moved from the environment in which it was developed.

However, even knowledge that is ‘in people’s heads’ is not sufficient – the interactive aspect of Cook and Seely Brown’s (1999) ‘knowing’ must also be taken into account. This is one of the key aspects to the management of the softer side to knowledge.

In 2002, this kind of argument was seen as a valuable critique of artificial intelligence and the practices based on it as a paradigm. But already by 2002 this paradigm was falling away. Statistical computing, reinforcement learning, decision tree bagging, etc. were already in use at this time. These methods are “softer” in that they don’t require the “hard” concrete representations of the earlier artificial intelligence program, which I believe by that time was already referred to as “Good Old Fashioned AI” or GOFAI by a number of practitioners.

(I should note–that’s a term I learned while studying AI as an undergraduate in 2005.)

So throughout the 90’s and the 00’s, if not earlier, ‘AI’ transformed into ‘machine learning’ and became the implementation of ‘soft’ forms of knowledge. These systems are built to learn to perform a task optimally, adapting flexibly based on feedback from past performance. They are in fact the cybernetic systems imagined by Norbert Wiener.

Perplexing, then, is the contemporary problem that the models created by these machine learning algorithms are opaque to their creators. These models were created using techniques designed precisely to solve the problems that systems based on explicit, communicable knowledge were meant to solve but could not.

If you accept the thesis that contemporary ‘algorithms’-driven systems are well-designed implementations of ‘soft’ knowledge systems, then you get some interesting conclusions.

First, forget about interpreting the learned models of these systems and testing them for things like social discrimination, which is apparently in vogue. The right place to focus attention is on the function being optimized. All these feedback-based systems–whether they be based on evolutionary algorithms, or convergence on local maxima, or reinforcement learning, or whatever–are designed to optimize some goal function. That goal function is the closest thing you will get to an explicit representation of the purpose of the algorithm. It may change over time, but it should be coded there explicitly.

Interestingly, this is exactly the sense of ‘purpose’ that Wiener proposed could be applied to physical systems in his landmark essay with Rosenblueth and Bigelow, “Behavior, Purpose and Teleology.” In 1943. Sly devil.

EDIT: An excellent analysis of how fairness can be represented as an explicit goal function can be found in Dwork et al. 2011.
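To make this concrete, here is a minimal sketch in Python, with made-up data, a hypothetical ‘protected group’ attribute, and a deliberately crude optimizer, of what it means for the purpose of a learner to be coded explicitly in its goal function. The fairness penalty is a toy illustration in the spirit of that idea, not the actual construction in Dwork et al.

    # A toy sketch: a linear classifier whose "purpose" lives entirely in the
    # objective being minimized -- prediction error plus an explicit penalty
    # on the gap between groups. Data and the group attribute are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))            # features
    group = rng.integers(0, 2, size=200)     # hypothetical protected attribute
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200) > 0).astype(float)

    def objective(w, lam=1.0):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probabilities
        loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        disparity = abs(p[group == 0].mean() - p[group == 1].mean())
        return loss + lam * disparity        # the explicit goal function

    # Crude random-search optimizer; the point is the objective, not the method.
    w_best, f_best = np.zeros(3), objective(np.zeros(3))
    for _ in range(2000):
        w_try = w_best + 0.1 * rng.normal(size=3)
        f_try = objective(w_try)
        if f_try < f_best:
            w_best, f_best = w_try, f_try
    print("objective value:", round(f_best, 3))

Whatever the learned weights end up looking like, the question “what is this system for?” is answered by reading objective(), not by squinting at w_best.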

Second, because what these algorithms are designed to optimize is generally going to be something like ‘maximize ad revenue’ and not anything explicitly pernicious like ‘screw over the disadvantaged people’, this line of inquiry will raise some interesting questions about, for example, the relationship between capitalism and social justice. By “raise some interesting questions”, I mean, “reveal some uncomfortable truths everyone is already aware of”. Once it becomes clear that the whole discussion of “algorithms” and their inscrutability is just a way of talking about societal problems and entrenched political interests without talking about them directly, it will probably be tabled due to its political infeasibility.

That is (and I guess this is the third point) unless somebody can figure out how to encode the social justice goals of activists and advocates explicitly into a goal function that could be implemented by one of these soft-touch expert systems. That would be rad. Whether anybody would be interested in using or investing in such a system is an important open question. Not a wide open question–the answer is probably “Not really”–but just open enough to let some air onto the embers of my idealism.

Objective properties of text and robot scientists

One problem with having objectivity as a scientific goal is that it may be humanly impossible.

One area where this comes up is in the reading of a text. To read is to interpret, and it is impossible to interpret without bringing one’s own concepts and experience to bear on the interpretation. This introduces partiality.

This is one reason why Digital Humanities are interesting. In Digital Humanities, one is using only the objective properties of the text–its data as a string of characters and its metadata. Semantic analysis is reduced to a study of a statistical distribution over words.
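As a minimal sketch of what those objective properties amount to in practice (any corpus would do in place of the toy string here):

    # The text as nothing but a character string, reduced to a statistical
    # distribution over words.
    from collections import Counter
    import re

    text = "To read is to interpret, and to interpret is to bring one's own concepts to bear."
    tokens = re.findall(r"[a-z']+", text.lower())   # characters -> word tokens
    distribution = Counter(tokens)                  # distribution over words
    print(distribution.most_common(5))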

An odd conclusion: the objective scientific subject won’t be a human intelligence at all. It will need to be a robot. Its concepts may never be interpretable by humans because any individual human is too small-minded or restricted in their point of view to understand the whole.

Looking at the history of cybernetics, artificial intelligence, and machine learning, we can see the progression of a science dedicated to understanding the abstract properties of an idealized, objective learner. That systems such as these underlie the infrastructure we depend on for the organization of society is a testament to their success.

It all comes back to Artificial Intelligence

I am blessed with many fascinating conversations every week. Because of the field I am in, these conversations are mainly about technology and people and where they intersect.

Sometimes they are about philosophical themes like how we know anything, or what is ethical. These topics are obviously relevant to an academic researcher, especially when one is interested in computational social science, a kind of science whose ethics have lately been called into question. Other times they are about the theoretical questions that such a science should or could address, like: how do we identify leaders? Or determine what are the ingredients for a thriving community? What is creativity, and how can we mathematically model how it arises from social interaction?

Sometimes the conversations are political. Is it a problem that algorithms are governing more of our political lives and culture? If so, what should we do about it?

The richest and most involved conversations, though, are about artificial intelligence (AI). As a term, it has fallen out of fashion. I was very surprised to see it as a central concept in Bengio et al.’s “Representation Learning: A Review and New Perspectives” [arXiv]. In most discussion of scientific computing or ‘data science’, people have for the most part abandoned the idea of intelligent machines. Perhaps this is because so many of the applications of this technology seem so prosaic now. Curating newsfeeds, for example. That can’t be done intelligently. That’s just an algorithm.

Never mind that the origins of all of what we now call machine learning were in the AI research program, which is as old as computer science itself and really has grown up with it. Marvin Minsky famously once defined artificial intelligence as ‘whatever humans still do better than computers.’ And this is the curse of the field. Every technological advance that is at the time mind-blowingly powerful, performing a task that used to require hundreds of people, very shortly becomes mere technology.

It’s appropriate then that representation learning, the problem of deriving and selecting features from a complex data set that are valuable for other kinds of statistical analysis in other tasks, is brought up in the context of AI. Because this is precisely the sort of thing that people still think they are comparatively good at. A couple years ago, everyone was talking about the phenomenon of crowdsourced image tagging. People are better at seeing and recognizing objects in images than computers, so in order to, say, provide the data for Google’s Image search, you still need to mobilize lots of people. You just have to organize them as if they were computer functions so that you can properly aggregate their results.
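To sketch the structure of the problem in the simplest possible terms (using PCA on synthetic data as a stand-in; the deep methods Bengio et al. survey replace the linear projection with learned nonlinear ones, but the derive-features-then-reuse-them pattern is the same):

    # Representation learning in miniature: derive features from raw data,
    # then hand them to any downstream statistical analysis. Data is synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    raw = rng.normal(size=(500, 50))        # 500 samples, 50 raw dimensions
    raw -= raw.mean(axis=0)                 # center the data

    # Learn a representation: the top-k principal directions of the data.
    U, S, Vt = np.linalg.svd(raw, full_matrices=False)
    k = 5
    features = raw @ Vt[:k].T               # 50-dim inputs -> 5 learned features

    # The learned features can now feed any other task.
    print("explained variance ratio:", (S[:k] ** 2).sum() / (S ** 2).sum())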

One of the earliest tasks posed to AI, the Turing Test, proposed by and named after Alan Turing, the inventor of the fricking computer, is the task of engaging in conversation as if one is a human. This is harder than chess. It is harder than reading handwriting. Something about human communication is so subtle that it has withstood the test of time as an unsolved problem.

Until June of this year, when a program passed the Turing Test in the annual competition. Conversation is no longer something intelligent. It can be performed by a mere algorithm. Indeed, I have heard that a lot of call centers now use scripted dialog. An operator pushes buttons guiding the caller through a conversation that has already been written for them.

So what’s next?

I have a proposal: software engineering. We still don’t have an AI that can write its own source code.

How could we create such an AI? We could use machine learning, training it on data. What’s amazing is that we have vast amounts of data available on what it is like to be a functioning member of a software development team. Open source software communities have provided an enormous corpus of what we can guess is some of the most complex and interesting data ever created. Among other things, this software includes source code for all kinds of other algorithms that were once considered AI.
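As a rough sketch of where such training data could begin (the repository path below is a placeholder, and nothing here is specific to any particular project), even a single project’s commit history is already a record of collaborative software work:

    # Pull the commit history of a locally cloned open source repository into
    # (hash, author, subject) records for later analysis.
    import subprocess

    REPO = "/path/to/some/open/source/repo"   # placeholder path

    log = subprocess.run(
        ["git", "-C", REPO, "log", "--pretty=format:%H\t%an\t%s"],
        capture_output=True, text=True, check=True,
    ).stdout

    commits = [line.split("\t", 2) for line in log.splitlines() if line]
    print(f"{len(commits)} commits; most recent: {commits[0][2] if commits else 'n/a'}")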

One reason why I am building BigBang, a toolkit for the scientific analysis of software communities, is that I believe it’s the first step to a better understanding of this very complex and still intelligent process.

While above I have framed AI pessimistically, as what we delegate away from people to machines, that framing is unnecessarily grim. In fact, with every advance in AI we have come to a better understanding of our world and how we see, hear, think, and do things. The task of trying to scientifically understand how we create together and the task of developing an AI to create with us are in many ways the same task. It’s just a matter of how you look at it.

An Interview with the Executive Director of the Singularity Institute

Like many people, I first learned about the idea of the technological Singularity while randomly surfing the internet. It was around the year 2000. I googled “What is the meaning of life?” and found an article explaining that at the rate that artificial intelligence was progressing, we would reach a kind of computational apotheosis within fifty years. I guess at the time I thought that Google hadn’t done a bad job at answering that one, all things considered.

Since then, the Singularity’s been in the back of my mind as one of many interesting but perhaps crackpot theories of how things are going to go for us as a human race. People in my circles would dismiss it as “eschatology for nerds,” and then get back to playing Minecraft.

Since then I’ve moved to Berkeley, California, which turns out to be a hub of Singularity research. I’ve met many very smart people who are invested in reasoning about and predicting the Singularity. Though I don’t agree with all of that thinking, this exposure has given me more respect for it.

I have also learned from academic colleagues a new way to dismiss Singularitarians as “the Tea Party of the Information Society.” This piece by Evgeny Morozov is in line with this view of Singularitarianism as a kind of folk ideology used by dot-com elites to reinforce political power. (My thoughts on that piece are here.)

From where I’m standing, Singularitarianism is a controversial and politically important world view that deserves honest intellectual scrutiny. In October, I asked Luke Muehlhauser, the Executive Director of the Singularity Institute, if I could interview him. I wanted to get a better sense of what the Singularity Institute was about and get material that could demystify Singularitarianism for others. He graciously accepted. Below is the transcript. I’ve added links where I’ve thought appropriate.


SB: Can you briefly describe the Singularity Institute?

LM: The Singularity Institute is a 501(c)(3) charity founded in the year 2000 by Eliezer Yudkowsky and some Internet entrepreneurs who supported his work for a couple years. The mission of the institute is to ensure that the creation of smarter-than-human intelligence benefits society, and the central problem we think has to do with the fact that very advanced AI’s, number one, by default will do things that humans don’t really like because humans have very complicated goals so almost all possible goals you can give an AI would be restructuring the world according to goals that are different than human goals, and then number two, the transition from human control of the planet to machine control of the planet may be very rapid because once you get an AI that is better than humans are at designing AI’s and doing AI research, then it will be able to improve its own intelligence in a loop of recursive self-improvement, and very quickly go from roughly human levels of intelligence to vastly superhuman levels of intelligence with lots of power to restructure the world according to its preferences.

SB: How did you personally get involved and what’s your role in it?

LM: I personally became involved because I was interested in the cognitive science of rationality, of changing one’s mind successfully in response to evidence, and of choosing actions that are actually aimed towards achieving one’s goals. Because of my interest in the subject matter I was reading the website LessWrong.com which has many articles about those subjects, and there I also encountered related material on intelligence explosion, which is this idea of a recursively self-improving artificial intelligence. And from there I read more on the subject, read a bunch of papers and articles and so on, and decided to apply to be a visiting fellow in April of 2011, or rather that’s when my visiting fellowship began, and then in September of 2011 I was hired as a researcher at the Singularity Institute, and then in November of 2011 I was made its Executive Director.

SB: So, just to clarify, is that Singularity the moment when there’s smarter than human intelligence that’s artificial?

LM: The word Singularity unfortunately has been used to mean many different things, so it is important to always clarify which meaning you are using. For our purposes you could call it the technological creation of greater than human intelligence. Other people use it to mean something much broader and more vague, like the acceleration of technology beyond our ability to predict what will happen beyond the Singularity, or something vague like that.

SB: So what is the relationship between the artificial intelligence related question and the personal rationality related questions?

LM: Right, well the reason why the Singularity Institute has long had an interest in both rationality and safety mechanisms for artificial intelligence is that the stakes are very very high when we start thinking about artificial intelligence risks or catastrophic risks in general, and so we want our researchers to not make the kinds of cognitive mistakes that all researchers and all humans tend to make very often, which are these cognitive biases that are so well documented in psychology and behavioral economics. And so we think it’s very important for our researchers to be really world class in changing their minds in response to evidence and thinking through what the probabilities of different scenarios are rather than going with which ones feel intuitive to us, and thinking clearly about which actions now will actually influence the future in positive ways rather than which actions will accrue status or prestige to ourselves, that sort of thing.

SB: You mentioned that some Internet entrepreneurs were involved in the starting of the organization. Who funds your organization and why do they do that?

LM: The largest single funder of the Singularity Institute is Peter Thiel, who cofounded PayPal and has been involved in several other ventures. His motivations are some concern for existential risk, some enthusiasm for the work of our cofounder and senior researcher Eliezer Yudkowsky, and probably other reasons. Another large funder is Jaan Tallinn, the co-creator of Skype and Kazaa. He’s also concerned with existential risk and the rationality related work that we do. There are many other funders of Singularity Institute as well.

SB: Are there other organizations that do similar work?

LM: Yeah, the closest organization to what we do is the Future of Humanity Institute at Oxford University, in the United Kingdom. We collaborate with them very frequently. We go to each other’s conferences, we write papers together, and so on. The Future of Humanity Institute has a broader concern with cognitive enhancement and emerging technologies and existential risks in general, but for the past few years they have been focusing on machine superintelligence and so they’ve been working on the same issues that the Singularity Institute is devoted to. Another related organization is a new one called the Global Catastrophic Risk Institute. We collaborate with them as well. And again, they are not solely focused on AI risks like the Singularity Institute but on global catastrophic risks, and AI is one of them.

SB: You mentioned super-human intelligence quite a bit. Would you say that Google is a super-human intelligence?

LM: Well, yeah, so we have to be very careful about all the words that we are using of course. What I mean by intelligence is this notion of what sometimes is called optimization power, which is the ability to achieve one’s goals in a wide range of environments and a wide range of constraints. And so for example, humans have a lot more optimization power than chimpanzees. That’s why even though we are slower than many animals and not as strong as many animals, we have this thing called intelligence that allows us to commence farming and science and build cities and put footprints on the moon. And so it is humans that are steering the future of the globe and not chimpanzees or stronger things like blue whales. So that’s kind of the intuitive notion. There are lots of technical papers that would be more precise. So when I am talking about super-human intelligence, I specifically mean an agent that is as good as or better than humans at just about every skill set that humans possess for achieving their goals. So that would include things like not just mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain not the kidneys.

SB: To clarify, you mentioned that humans are better than chimpanzees at achieving their goals. Do you mean humans collectively or individually? And likewise for chimpanzees.

LM: Maybe the median for chimpanzee versus the median human. There are lots of different ways that you could cash that out. I’m sure there are some humans in a vegetative state that are less effective at achieving their goals than some of the best chimpanzees.

SB: So, whatever this intelligence is, it must have goals?

LM: Yeah, well there are two ways of thinking about this. You can talk about it having a goal architecture that is explicitly written into its code that motivates its behavior. Or, that isn’t even necessary. As long as you can model its behavior as fulfilling some sort of utility function, you can describe its goals that way. In fact, that’s what we do with humans in fields like economics where you have a revealed preferences architecture. You measure a human’s preferences on a set of lotteries and from that you can extract a utility function that describes their goals. We haven’t done enough neuroscience to directly represent what humans’ goals are if they even have such a thing explicitly encoded in their brains.
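(A note for readers, since the formal machinery is only gestured at here: the standard result behind “extract a utility function from preferences over lotteries” is the von Neumann-Morgenstern expected utility representation. If an agent’s preferences over lotteries satisfy a few consistency axioms, then there is a utility function u over outcomes x_i such that

    A \succeq B \iff \sum_i p_i^A \, u(x_i) \ge \sum_i p_i^B \, u(x_i)

where p^A and p^B are the probabilities the two lotteries assign to the outcomes. Revealed preference methods work backwards from observed choices to such a u.)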

SB: It’s interesting that you mentioned economics. So is like a corporation a kind of super-human intelligence?

LM: Um, you could model a corporation that way, except that it’s not clear that corporations are better than all humans at all different things. It would be a kind of weird corporation that was better than the best human or even the median human at all the things that humans do. Corporations aren’t usually the best in music and AI research and theorem proving and stock markets and composing novels. And so there certainly are corporations that are better than median humans at certain things, like digging oil wells, but I don’t think there are corporations as good or better than humans at all things. More to the point, there is an interesting difference here because corporations are made of lots of humans and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of skulls, whereas you can have an AI that is the size of a warehouse. Those kinds of things.

SB: There’s a lot of industry buzz now around the term ‘big data’. I was wondering if there’s any connection between rationality or the Singularity and big data.

LM: Certainly. Big data is just another step. It provides opportunity for a lot of progress in artificial intelligence because very often it is easier to solve a problem by throwing some machine learning algorithms at a ton of data rather than trying to use your human skills for modeling a problem and coming up with an algorithm to solve it. So, big data is one of many things that, along with increased computational power, allows us to solve problems that we weren’t solving before, like machine translation or continuous speech synthesis and so on. If you give Google a trillion examples of translations from English to Chinese, then it can translate pretty well from English to Chinese without any of the programmers actually knowing Chinese.

SB: Does a super-intelligence need big data to be so super?

LM: Um, well… we don’t know because we haven’t built a super-human intelligence yet but I suspect that big data will in fact be used by the first super-human intelligences, just because big data came before super-human intelligences, it would make little sense for super-human intelligences to not avail themselves of the available techniques and resources. Such as big data. But also, such as more algorithmic insights like Bayes Nets. It would be sort of weird for a super-intelligence to not make use of the past century’s progress in probability theory.

SB: You mentioned before the transition from human control of the world to machine control of the world. How does the disproportionality of access to technology affect that if at all? For example, does the Singularity happen differently in rural India than it does in New York City?

LM: It depends a lot on what is sometimes called the ‘speed of takeoff’–whether we have a hard takeoff or a soft takeoff, or somewhere in between. To explain that, a soft takeoff would be a scenario in which you get human-level intelligence. That is, an AI that is about as good as the median human in doing the things that humans do, including composing music, doing AI research, etc. And then this breakthrough spreads quickly but still at a human time-scale as corporations replace their human workers with these human-level AI’s that are cheaper and more reliable and so on, and there is great economic and social upheaval, and the AI’s have some ability to improve their own intelligence but don’t get very far because of limits to their own intelligence or the available computational resources, and so there is a very slow transition from human control of the world to machines steering the future, where slow is on the order of years to decades.

Another possible scenario though is hard takeoff, which is once you have an AI that is better than humans at finding new insights in intelligence, it is able to improve its own intelligence roughly overnight, to find new algorithms that make it more intelligent just as we are doing now–humans are finding algorithms that make AI’s more intelligent. So now the AI is doing this, and now it has even more intelligence at its disposal to discover breakthroughs in intelligence, and then it has EVEN MORE intelligence with which to discover new breakthroughs in intelligence, and because it’s not being limited by having slow humans in the development loop, it sort of goes from roughly human levels of intelligence to vastly superhuman levels of intelligence in a matter of hours or weeks or months. And then you’ve got a machine that can engage in a global coordinated campaign to achieve its goals and neutralize the human threat to its goals in a way that happens very quickly instead of over years or decades. I don’t know which scenario will play out, so it’s hard to predict how that will go.

SB: It seems like there may be other factors besides the nature of intelligence in play. It seems like to wage a war against all humans, a hard takeoff intelligence, if I’m using the words correctly, would have to have a lot of resources available to it beyond just its intelligence.

LM: That’s right. So, that contributes to our uncertainty about how things play out. For example, does one of the first self-improving human-level artificial intelligences have access to the Internet? Or have people taken enough safety precautions that they keep it “in a box”, as they say. Then the question would be: how good is a super-human AI at manipulating its prison guards so that it can escape the box and get onto the Internet? The weakest point, as hackers always know, is the humans: the quickest way to get into a system is to hack the humans, because humans are stupid. So, there’s that question.

Then there’s questions like: if it gets onto the Internet, how much computing power is there available? Is there enough cheap computing power available for it to hack through a few firewalls and make a billion copies of itself overnight? Or is the computing power required for a super-human intelligence a significant fraction of the computing power available in the world, so that it can only make a few copies of itself? Another question is: what sort of resources are available for converting digital intelligence into physical actions in the human world? For example, right now you can order chemicals from a variety of labs and maybe use a bunch of emails and phone calls to intimidate a particular scientist into putting those chemicals together into a new supervirus or something, but that’s just one scenario and whenever you describe a detailed scenario like that, that particular scenario is almost certainly false and not going to happen, but there are things like that, lots of ways for digital intelligence to be converted to physical action in the world. But how many opportunities there will be for that, decades from now, is hard to say.

SB: How do you anticipate this intelligence interacting with the social and political institutions around the Internet, supposing it gets to the Internet?

LM: Um, yeah, that’s the sort of situation where one would be tempted to start telling detailed stories about what would happen, but any detailed story would almost certainly be false. It’s really hard to say. I sort of don’t think that a super-human intelligence…if we got to a vastly smarter than human intelligence, it seems like it would probably be an extremely inefficient way for it to achieve its goals by way of causing Congress to pass a new bill somehow…that is an extremely slow and uncertain…much easier just to invent new technologies and threaten humans militarily, that sort of thing.

SB: So do you think that machine control of the world is an inevitability?

LM: Close to it. Humans are not even close to the most intelligent kind of creature you can have. They are closer to the dumbest creature you can have while also having a technological civilization. If you could have a dumber creature with a technological civilization then we would be having this conversation at that level. So it looks like you can have agents that are vastly more capable of achieving their goals in the world than humans are, and there don’t seem to be any in principle barriers to doing that in machines. The usual objections that are raised like, “Will machines have intentionality?” or “Will machines have consciousness?” don’t actually matter for the question of whether they will have intelligent behavior. You don’t need intentionality or consciousness to be as good as humans at playing chess or driving cars and there’s no reason for thinking we need those things for any of the other things that we like to do. So the main factor motivating this progress is the extreme economic and military advantages to having an artificial intelligence, which will push people to develop incrementally improved systems on the way to full-blown AI. So it looks like we will get there eventually. And then it would be a pretty weird situation in which you had agents that were vastly smarter than humans but that somehow humans were keeping them in cages or keeping them controlled. If we had chimpanzees running the world and humans in cages, humans would be smart enough to figure out how to break out of cages designed by chimpanzees and take over the world themselves.

SB: We are close to running out of time. There are a couple more questions on my mind. One is: I think I understand that intelligence is being understood in terms of optimization power, but also that for this intelligence to count it has to be better at all things than humans….

LM: Or some large fraction of them. I’m still happy to define super-human intelligence with regard to all things that humans do, but of course for taking over the world it’s not clear that you need to be able to write novels well.

SB: Ok, so the primary sorts of goals that you are concerned about are the kinds of goals that are involved in taking over the world or are instrumental to it?

LM: Well, that’s right. And unfortunately, taking over the world is a very good idea for just about any goal that you have. Even if your goal is to maximize Exxon Mobil profits or manufacture the maximal number of paper clips or travel to a distant star, it’s a very good idea to take over the world first if you can because then you can use all available resources towards achieving your goal to the max. And also, any intelligent AI would correctly recognize that humans are the greatest threat to it achieving its goals, because we will get skittish and worried about what it’s doing and try to shut it off. An AI will of course recognize that that’s true and if it is at all intelligent will first seek to neutralize the human threat to it achieving its goals.

SB: What about intelligences that sort of use humans effectively? I’m thinking of an intelligence that was on the Internet. The Internet requires all these human actions for it to be what it is. So why would it make sense for an intelligence whose base of power was the Internet to kill all humans?

LM: Is the scenario you are imagining a kind of scenario where the AI can achieve its goals better with humans rather than neutralizing humans first? Is that what you’re asking?

SB: Yeah, I suppose.

LM: The issue is that unless you define the goals very precisely in terms of keeping humans around or benefiting humans, remember that an AI is capable of doing just about anything that humans can do and so there aren’t really things that it would need humans for unless the goal structure were specifically defined in terms of benefitting biological humans. And that’s extremely difficult to do. For example, if you found a precise way to specify “maximize human pleasure” or welfare or something, it might just mean that the AI just plugs us all into heroin drips and we never do anything cool. So it’s extremely difficult to specify in math–because AI’s are made of math–what it is that humans want. That gets back to the point I was making at the beginning about the complexity and fragility of human values. It turns out we don’t just value pleasure; we have this large complex of values and indeed different humans have different values from each other. So the problem of AI sort of makes an honest problem of longstanding issues in moral philosophy and value theory and so on.

SB: Ok, one last question, which is: suppose AI is taking off, and we notice that it’s taking off, and the collective intelligence of humanity working together is pitted against this artificial intelligence. Say this happens tomorrow. Who wins?

LM: Well, I mean it depends on so many unknown factors. It may be that if the intelligence is sufficiently constrained and can only improve its intelligence at a slow rate, we might actually notice that one of them is taking off and be able to pull the plug and shut it down soon enough. But that puts us in a very vulnerable state, because if one group has an AI that is capable of taking off, it probably means that other groups are only weeks or months or years or possibly decades behind. And will the correct safety precautions be taken the second, third, and twenty-fifth time?

I thank Luke Muehlhauser for making the time for this interview. I hope to post my reflections on this at a later date.