I’ve been greatly enjoying Fred Turner’s The Democratic Surround partly because it cuts through a lot of ideological baggage with smart historical detail. It marks a turn, perhaps, in what intellectuals talk about. The critical left has been hung up on neoliberalism for decades while the actual institutions that are worth criticizing have moved on. It’s nice to see a new name for what’s happening. That new name is managerialism.
Managerialism is a way to talk about what Facebook and the Democratic Party and everybody else providing a highly computationally tuned menu of options is doing without making the mistake of using old metaphors of control to talk about a new thing.
Turner is ambivalent about managerialism perhaps because he’s at Stanford and so occupies an interesting position in the grand intellectual matrix. He’s read his Foucault, he explains when he speaks in public, though he is sometimes criticized for not being critical enough. I think ‘critical’ intellectuals may find him confusing because he’s not deploying the same ‘critical’ tropes that have been used since Adorno, even though he’s sometimes writing about Adorno. He is optimistic, or at least writes optimistically about the past, or at least writes about the past in a way that isn’t overtly scathing, which is just more upbeat than a lot of writing nowadays.
Managerialism is, roughly, the idea of a technocratically bounded space of complex interactive freedom as a principle of governance or social organization. In The Democratic Surround, he provides a historical analysis of a Bauhaus-initiated multimedia curation format, the ‘surround’, to represent managerialist democracy, in the same way Foucault provided a historical analysis of the Panopticon to represent surveillance. He is attempting to implant a new symbol into the vocabulary of political and social thinkers that we can use to understand the world around us, while giving it a rich and subtle history that expands our sense of its possibilities.
I’m about halfway through the book. I love it. If I have a criticism of it, it’s that everything in it is a managerialist surround, and sometimes his arguments seem a bit stretched. For example, here’s his description of how John Cage’s famous 4’33” is a managerialist surround:
With 4’33”, as with Theater Piece #1, Cage freed sounds, performers, and audiences alike from the tyrannical wills of musical dictators. All tensions–between composer, performer, and audience; between sound and music; between the West and the East–had dissolved. Even as he turned away from what he saw as more authoritarian modes of composition and performance, though, Cage did not relinquish all control of the situation. Rather, he acted as an aesthetic expert, issuing instructions that set the parameters for action. Even as he declined the dictator’s baton, Cage took up a version of the manager’s spreadsheet and memo. Thanks to his benevolent instructions, listeners and music makers alike became free to hear the world as it was and to know themselves in that moment. Sounds and people became unified in their diversity, free to act as they liked, within a distinctly American musical universe–a universe finally freed of dictators, but not without order.
I have two weaknesses as a reader. One is a soft spot for wicked vitriol. Another is an intolerance of rhetorical flourish. The above paragraph is rhetorical flourish that doesn’t make sense. Saying that 4’33” is a manager’s spreadsheet is just about the most nonsensical metaphor I could imagine. In a universe with only fascists and managerialists, I guess 4’33” is more like a memo. But there are so many more apt musical metaphors for unification in diversity. For example, a blues or jazz band playing a standard. Literally any improvisational musical form. No less quintessentially American.
If you bear with me and agree that this particular point is poorly argued and that John Cage wasn’t actually a managerialist and was in fact the Zen spiritualist that he claimed to be in his essays, then either Turner is equating managerialism with Zen spiritualism or Turner is trying to make Cage a symbol of managerialism for his own ideological ends.
Either of these is plausible. Steve Jobs was an I Ching enthusiast like Cage. Stewart Brand, the subject of Turner’s last book, From Counterculture to Cyberculture, was a back-to-the-land commune enthusiast before he became a capitalist digerati hero. Running through Turner’s work is a demonstration of the cool origins of today’s world, which is run by managerialist power. We are where we are today because democracy won against fascism. We are where we are today because hippies won against whoever. Sort of. Turner is also frank about capitalist recuperation of everything cool. But this is not so bad. Startups are basically like co-ops–worker owned until the VCs get too involved.
I’m a tech guy, sort of. It’s easy for me to read my own ambivalence about the world we’re in today into Turner’s book. I’m cool, right? I like interesting music and read books on intellectual history and am tolerant of people despite my connections to power, right? Managers aren’t so bad. I’ve been a manager. They are necessary. Sometimes they are benevolent and loved. That’s not bad, right? Maybe everything is just fine because we have a mode of social organization that just makes more sense now than what we had before. It’s a nice happy medium between fascism, communism, anarchism, and all the other extreme -isms that plagued the 20th century with war. People used to starve to death or kill each other en masse. Now they complain about bad management or, more likely, bad customer service. They complain as if the bad managers are likely to commit a war crime at any minute, but that’s because their complaints would sound so petty and trivial if they were voiced without the use of tropes that let us associate poor customer service with deliberate mind-control propaganda or industrial wage slavery. We’ve forgotten how to complain in a way that isn’t hyperbolic.
Maybe it’s the hyperbole that’s the real issue. Maybe a managerialist world lacks catastrophe and so is so frickin’ boring that we just don’t have the kinds of social crises that a generation of intellectuals trained in social criticism have been prepared for. Maybe we talk about how things are “totally awesome!” and totally bad because nothing really is that good or that bad and so our field of attention has contracted to the minute, amplifying even the faintest signal into something significant. Case in point, Alex from Target. Under well-tuned managerialism, the only thing worth getting worked up about is that people are worked up about something. Even if it’s nothing. That’s the news!
So if there’s a critique of managerialism, it’s that it renders the managed stupid. This is a problem.
Nick Bostrom will give a book talk on campus soon. My departmental seminar on “Algorithms as Computation and Culture” has opened with a paper on the ethics of algorithms and a paper on accumulated practical wisdom regarding machine learning. Of course, these are related subjects.
Jenna Burrell recently trolled me in order to get me to give up my own opinions on the matter, which are rooted in a philosophical functionalism. I’ve learned just now that these opinions may depend on obsolete philosophy of mind. I’m not sure. R. Scott Bakker’s blog post against pragmatic functionalism makes me wonder: what do I believe again? I’ve been resting on a position established when I was deeper into this stuff seven years ago. A lot has happened since then.
I’m turning into a historicist perhaps due to lack of imagination or simply because older works are more accessible. Cybernetic theories of control–or, electrical engineering theories of control–are as relevant, it seems, to contemporary debates as machine learning, which to the extent it depends on stochastic gradient descent is just another version of cybernetic control anyway, right?
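To make the analogy concrete, here is a minimal sketch in Python (my own toy example, not anything from the sources I mention; the function names and numbers are made up): stochastic gradient descent behaves like a negative-feedback controller, repeatedly measuring an error signal and feeding a small correction back into the system state.

```python
import random

# A toy illustration of the analogy: stochastic gradient descent as a
# negative-feedback controller. The system state is a single parameter w;
# at each step we observe a noisy error signal (the gradient of a squared
# loss) and feed a small correction back into the state.

def noisy_gradient(w, target=3.0, noise=0.5):
    """Gradient of (w - target)^2, plus observation noise."""
    return 2 * (w - target) + random.gauss(0, noise)

def sgd_as_control_loop(w=0.0, learning_rate=0.05, steps=500):
    for _ in range(steps):
        error_signal = noisy_gradient(w)    # measure the deviation
        w -= learning_rate * error_signal   # apply a corrective adjustment
    return w

print(sgd_as_control_loop())  # settles near the target (3.0) despite the noise
```

The learning rate plays the role of the controller’s gain: too high and the system oscillates, too low and it corrects sluggishly.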
Ashwin Parameswaran’s blog post about Beniger’s Control Revolution illustrates this point well. To a first approximation, we are simply undergoing the continuation of prophecies of the 20th century, only more thoroughly. Over and over, and over, and over, and over, like a monkey with a miniature cymbal.
One property of a persistent super-intelligent infrastructure of control would be our inability to comprehend it. Our cognitive models, constructed over the course of a single lifetime with constraints on memory both in time and space, limited to a particular hypothesis space, could simply be outgunned by the complexity of the sociotechnical system in which they are embedded. I tried to get at this problem with work on computational asymmetry but didn’t find the right audience. I just learned there’s been work on this in finance, which makes sense, as that’s where it’s most directly relevant today.
I love my Mom. One reason I love her is that she is so good at asking questions.
I thought I was on vacation today, but then my Mom started to ask me questions about my dissertation. What is my dissertation about? Why is it interesting?
I tried to explain: I’m interested in studying how these people working on scientific software work together. That could be useful in the design of new research infrastructure.
M: Ok, so like…GitHub? Is that something people use to share their research? How do they find each other using that?
S: Well, people can follow each other’s repositories to get notifications. Or they can meet each other at conferences and learn what people are working on. Sometimes people use social media to talk about what they are doing.
M: That sounds like a lot of different ways of learning about things. Could your research be about how to get them all to talk about it in one place?
S: Yes, maybe. In some ways GitHub is already serving as that central repository these days. One application of my research could be about how to design, say, an extension to GitHub that connects people. There’s a lot of research on ‘link formation’ in the social media context–well I’m your friend, and you have this other friend, so maybe we should be friends. Maybe the story is different for collaborators. I have certain interests, and somebody else does too. When are our interests aligned, so that we’d really want to work together on the same thing? And how do we resolve disputes when our interests diverge?
M: That sounds like what open source is all about.
S: Yeah!
M: Could you build something like that that wasn’t just for software? Say I’m a researcher and I’m interested in studying children’s education, and there’s another researcher who is interested in studying children’s education. Could you build something like that in your…your D-Lab?
S: We’ve actually talked about building an OKCupid for academic research! The trick there would be bringing together researchers interested in different things, but with different skills. Maybe somebody is really good at analyzing data, and somebody else is really good at collecting data. But it’s a lot of work to build something nice. Not as easy as “build it and they will come.”
M: But if it was something like what people are used to using, like OKCupid, then…
S: It’s true that would be a really interesting project. But it’s not exactly my research interest. I’m trying really hard to be a scientist. That means working on problems that aren’t immediately appreciable by a lot of people. There are a lot of applications of what I’m trying to do, but I won’t really know what they are until I get the answers to what I’m looking for.
M: What are you looking for?
S: I guess, well…I’m looking for a mathematical model of creativity.
M: What? Wow! And you think you’re going to find that in your data?
S: I’m going to try. But I’m afraid to say that. People are going to say, “Why aren’t you studying artists?”
M: Well, the people you are studying are doing creative work. They’re developing software, they’re scientists…
S: Yes.
M: But they aren’t like Beethoven writing a symphony, it’s like…
S: …a craft.
M: Yes, a craft. But also, it’s a lot of people working together. It’s collective creativity.
S: Yes, that’s right.
M: You really should write that down. A mathematical model of collective creativity! That gives me chills. I really hope you’ll write that down.
Thanks, Mom.
In my PhD program, I’ve recently finished my coursework and am meant to start focusing on research for my dissertation. Maybe because of the hubbub around open access research, maybe because I still see myself as a ‘hacker’, maybe because it’s somehow recursively tied into my research agenda, or because I’m an open source dogmatic, I’ve been fantasizing about the tools and technology of publication that I want to work on my dissertation with.
For this project, which I call the Dissertron, I’ve got a loose bundle of requirements feature creeping its way into outer space:
This is a lot, and arguably just a huge distraction from working on my dissertation. However, it seems like this or something like it is a necessary next step in the advance of science and I don’t see how I really have much choice in the matter.
Unfortunately, I’m traveling, so I’m going to miss the PLOS workshop on Markdown for Science tomorrow. That’s really too bad, because Scholarly Markdown would get me maybe 50% of the way to what I want.
Right now the best toolchain I can imagine for this involves Scholarly Markdown, run using Pandoc, which I just now figured out is developed by a philosophy professor at Berkeley. Backing it with a Git repository would allow for incremental changes and version control.
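To make that concrete, here is a rough sketch of the kind of build step I have in mind, in Python. The file names (dissertation.md, references.bib, index.html) are placeholders, and depending on the Pandoc version, citation processing may also require a --citeproc or --filter pandoc-citeproc flag; treat this as a sketch, not a working Dissertron.

```python
import subprocess

# A rough sketch of the Dissertron build step: a Markdown source with
# citations rendered to standalone HTML via Pandoc, with the result
# committed to a Git repository for versioning. File names are placeholders.

def build(source="dissertation.md", bib="references.bib", out="index.html"):
    subprocess.check_call([
        "pandoc", source,
        "--standalone",
        "--bibliography", bib,  # resolve citations against a BibTeX file
        "-o", out,
    ])

def commit(paths=("index.html",), message="Rebuild Dissertron output"):
    subprocess.check_call(["git", "add", *paths])
    subprocess.check_call(["git", "commit", "-m", message])

if __name__ == "__main__":
    build()
    commit()
```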
Static site generation and hosting is a bit trickier. I feel like GitHub’s support of Jekyll makes it a compelling choice, but hacking it to make it fit into the academic frame I’m thinking in might be more trouble than it’s worth. While it’s a bit of an oversimplification to say this, my impression is that at my university, at least, there is a growing movement to adopt Python as the programming language of choice for scientific computing. The exceptions seem to be people in the Computer Science department who are backing Scala.
(I like both languages and so can’t complain, except that it makes it harder to do interdisciplinary research if there is a technical barrier in their toolsets. As more of scientific research becomes automated, it is bound to get more crucial that scientific processes (broadly speaking) inter-operate. I’m incidentally excited to be working on these problems this summer for Berkeley’s new Social Science Data Lab. A lot of interesting architectural design is being masterminded by Aaron Culich, who manages the EECS department’s computing infrastructure. I’ve been meaning to blog about our last meeting for a while…but I digress)
Problem is, neither Python nor Scala is Ruby, and Ruby is currently leading the game (in my estimate, somebody tell me if I’m wrong) in flexible and sexy smooth usable web design. And then there’s JavaScript, improbably leaking into the back end of the software stack after overflowing the client side.
So for the aspiring open access indie web hipster hacker science self-publisher, it’s hard to navigate the technical terrain. I’m tempted to string together my own rig depending mostly on Pandoc, but even that’s written in Haskell.
These implementation-level problems suggest that the problem needs to be pushed up a level of abstraction, to the question of API and syntax standards around scientific web publishing. Scholarly Markdown can be a standard, hopefully with multiple implementations. Maybe there needs to be a standard around web citations as well (since in an open access world, we don’t need the same level of indirection between a document and the works it cites. Like blog posts, web publications can link directly to the content they derive from.)
Information transfer just is the coming-into-dependence of two variables, which under the many worlds interpretation of quantum mechanics means the entanglement of the “worlds” of each variable (and, by extension, the networks of causally related variables of which they are a part). Information exchange collapses possibilities.
This holds up whether you take a subjectivist view of reality (and probability–Bayesian probability properly speaking) or an objectivist view. At their (dialectical?) limit, the two “irreconcilable” paradigms converge on a monist metaphysics that is absolutely physical and also ideal. (This was recognized by Hegel, who was way ahead of the game in a lot of ways.) It is the ideality of nature that allows it to be mathematized, though it’s important to note that mathematization does not exclude engagement with nature through other modalities, e.g. the emotional, the narrative, etc.
This means that characterizing the evolution of networks of information exchange by their physical properties (limits of information capacity of channels, etc.) is something to be embraced to better understand their impact on e.g. socially constructed reality, emic identity construction, etc. What the mathematics provide is a representation of what remains after so many diverse worlds are collapsed.
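To be explicit about the mathematics I have in mind (these are the standard Shannon quantities, nothing original): mutual information measures the coming-into-dependence of two variables, and is zero exactly when they are independent; channel capacity is its maximum over input distributions, which is where the physical limits enter.

```latex
I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)},
\qquad
C = \max_{p(x)} I(X;Y)
```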
A similar result, representing a broad consensus, might be attained dialectically, specifically through actual dialog. Whereas the mathematical accounting is likely to lead to reduction to latent variables that may not coincide with the lived experience of participants, a dialectical approach is more likely to result in a synthesis of perspectives at a higher level of abstraction. (Only a confrontation with nature as the embodiment of unconscious constraints is likely to force us to confront latent mechanisms.)
Whether or not such dialectical synthesis will result in a singular convergent truth is unknown, with various ideologies taking positions on the matter as methodological assumptions. Haraway’s feminist epistemology, eschewing rational consensus in favor of interperspectival translation, rejects a convergent (scientific, and she would say masculine) truth. But does this stand up to the simple objection that Haraway’s own claims about truth and method transcend individual perspective, making her guilty of performative contradiction?
Perhaps a deeper problem with the consensus view of truth, which I heard once from David Weinberger, is that the structure of debate may have fractal complexity. The fractal pluralectic can fray into infinite and infinitesimal disagreement at its borders. I’ve come around to agreeing with this view, uncomfortable as it is. However, within the fractal pluralectic we can still locate a convergent perspective based on the network topology of information flow. Some parts of the network are more central and brighter than others.
A critical question is to what extent the darkness and confusion in the dissonant periphery can be included within the perspective of the central, convergent parts of the network. Is there necessarily a Shadow? Without the noise, can there be a signal?
Like many people, I first learned about the idea of the technological Singularity while randomly surfing the internet. It was around the year 2000. I googled “What is the meaning of life?” and found an article explaining that at the rate that artificial intelligence was progressing, we would reach a kind of computational apotheosis within fifty years. I guess at the time I thought that Google hadn’t done a bad job at answering that one, all things considered.
Since then, the Singularity’s been in the back of my mind as one of many interesting but perhaps crackpot theories of how things are going to go for us as a human race. People in my circles would dismiss it as “eschatology for nerds,” and then get back to playing Minecraft.
Since then I’ve moved to Berkeley, California, which turns out to be a hub of Singularity research. I’ve met many very smart people who are invested in reasoning about and predicting the Singularity. Though I don’t agree with all of that thinking, this exposure has given me more respect for it.
I have also learned from academic colleagues a new way to dismiss Singularitarians as “the Tea Party of the Information Society.” This piece by Evgeny Morozov is in line with this view of Singularitarianism as a kind of folk ideology used by dot-com elites to reinforce political power. (My thoughts on that piece are here.)
From where I’m standing, Singularitarianism is a controversial and politically important worldview that deserves honest intellectual scrutiny. In October, I asked Luke Muehlhauser, the Executive Director of the Singularity Institute, if I could interview him. I wanted to get a better sense of what the Singularity Institute was about and get material that could demystify Singularitarianism for others. He graciously accepted. Below is the transcript. I’ve added links where I’ve thought appropriate.
SB: Can you briefly describe the Singularity Institute?
LM: The Singularity Institute is a 501(c)(3) charity founded in the year 2000 by Eliezer Yudkowsky and some Internet entrepreneurs who supported his work for a couple of years. The mission of the institute is to ensure that the creation of smarter-than-human intelligence benefits society. The central problem, we think, has to do with the fact that very advanced AI’s, number one, by default will do things that humans don’t really like, because humans have very complicated goals, so almost all possible goals you could give an AI would have it restructuring the world according to goals that are different from human goals; and number two, the transition from human control of the planet to machine control of the planet may be very rapid, because once you get an AI that is better than humans are at designing AI’s and doing AI research, it will be able to improve its own intelligence in a loop of recursive self-improvement and very quickly go from roughly human levels of intelligence to vastly superhuman levels of intelligence, with lots of power to restructure the world according to its preferences.
SB: How did you personally get involved and what’s your role in it?
LM: I personally became involved because I was interested in the cognitive science of rationality, of changing one’s mind successfully in response to evidence, and of choosing actions that are actually aimed towards achieving one’s goals. Because of my interest in the subject matter I was reading the website LessWrong.com, which has many articles about those subjects, and there I also encountered related material on intelligence explosion, which is this idea of a recursively self-improving artificial intelligence. And from there I read more on the subject, read a bunch of papers and articles and so on, and decided to apply to be a visiting fellow in April of 2011, or rather that’s when my visiting fellowship began, and then in September of 2011 I was hired as a researcher at the Singularity Institute, and then in November of 2011 I was made its Executive Director.
SB: So, just to clarify, is that Singularity the moment when there’s smarter than human intelligence that’s artificial?
LM: The word Singularity unfortunately has been used to mean many different things, so it is important to always clarify which meaning you are using. For our purposes you could call it the technological creation of greater-than-human intelligence. Other people use it to mean something much broader and more vague, like the acceleration of technology beyond our ability to predict what will happen beyond the Singularity, or something vague like that.
SB: So what is the relationship between the artificial intelligence related question and the personal rationality related questions?
LM: Right, well the reason why the Singularity Institute has long had an interest in both rationality and safety mechanisms for artificial intelligence is that the stakes are very, very high when we start thinking about artificial intelligence risks or catastrophic risks in general, and so we want our researchers not to make the kinds of cognitive mistakes that all researchers and all humans tend to make very often, which are these cognitive biases that are so well documented in psychology and behavioral economics. And so we think it’s very important for our researchers to be really world class at changing their minds in response to evidence, thinking through what the probability of different scenarios is rather than going with which ones feel intuitive to us, and thinking clearly about which actions now will actually influence the future in positive ways rather than which actions will accrue status or prestige to ourselves, that sort of thing.
SB: You mentioned that some Internet entrepreneurs were involved in the starting of the organization. Who funds your organization and why do they do that?
LM: The largest single funder of the Singularity Institute is Peter Thiel, who cofounded PayPal and has been involved in several other ventures. His motivations are some concern for existential risk, some enthusiasm for the work of our cofounder and senior researcher Eliezer Yudkowsky, and probably other reasons. Another large funder is Jaan Tallinn, the co-creator of Skype and Kazaa. He’s also concerned with existential risk and the rationality-related work that we do. There are many other funders of the Singularity Institute as well.
SB: Are there other organizations that do similar work?
LM: Yeah, the closest organization to what we do is the Future of Humanity Institute at Oxford University, in the United Kingdom. We collaborate with them very frequently. We go to each others’ conferences, we write papers together, and so on. The Future of Humanity Institute has a broader concern with cognitive enhancement and emerging technologies and existential risks in general, but for the past few years they have been focusing on machine superintelligence and so they’ve been working on the same issues that the Singularity Institute is devoted to. Another related organization is a new one called Global Catastrophic Risks Institute. We collaborate with them as well. And again, they are not solely focused on AI risks like the Singularity institute but on global catastrophic risks, and AI is one of them.
SB: You mentioned super human-intelligence quite a bit. Would you say that Google is a super-human intelligence?
LM: Well, yeah, so we have to be very careful about all the words that we are using, of course. What I mean by intelligence is this notion of what is sometimes called optimization power, which is the ability to achieve one’s goals in a wide range of environments and a wide range of constraints. And so for example, humans have a lot more optimization power than chimpanzees. That’s why even though we are slower than many animals and not as strong as many animals, we have this thing called intelligence that allows us to commence farming and science and build cities and put footprints on the moon. And so it is humans that are steering the future of the globe and not chimpanzees or stronger things like blue whales. So that’s kind of the intuitive notion. There are lots of technical papers that would be more precise. So when I am talking about super-human intelligence, I specifically mean an agent that is as good as or better than humans at just about every skill set that humans possess for achieving their goals. So that would include things like not just mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain, not the kidneys.
SB: To clarify, you mentioned that humans are better than chimpanzees at achieving their goals. Do you mean humans collectively or individually? And likewise for chimpanzees.
LM: Maybe the median for chimpanzee versus the median human. There are lots of different ways that you could cash that out. I’m sure there are some humans in a vegetative state that are less effective at achieving their goals than some of the best chimpanzees.
SB: So, whatever this intelligence is, it must have goals?
LM: Yeah, well there are two ways of thinking about this. You can talk about it having a goal architecture that is explicitly written into its code that motivates its behavior. Or, that isn’t even necessary. As long as you can model its behavior as fulfilling some sort of utility function, you can describe its goals that way. In fact, that’s what we do with humans in fields like economics where you have a revealed preferences architecture. You measure a human’s preferences on a set of lotteries and from that you can extract a utility function that describes their goals. We haven’t done enough neuroscience to directly represent what human goals are, if they even have such a thing explicitly encoded in their brains.
SB: It’s interesting that you mentioned economics. So is like a corporation a kind of super-human intelligence?
LM: Um, you could model a corporation that way, except that it’s not clear that corporations are better than all humans at all different things. It would be a kind of weird corporation that was better than the best human or even the median human at all the things that humans do. Corporations aren’t usually the best in music and AI research and theorem proving and stock markets and composing novels. And so there certainly are corporations that are better than median humans at certain things, like digging oil wells, but I don’t think there are corporations as good as or better than humans at all things. More to the point, there is an interesting difference here because corporations are made of lots of humans and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of skulls, whereas you can have an AI that is the size of a warehouse. Those kinds of things.
SB: There’s a lot of industry buzz now around the term ‘big data’. I was wondering if there’s any connection between rationality or the Singularity and big data.
LM: Certainly. Big data is just another step. It provides opportunity for a lot of progress in artificial intelligence because very often it is easier to solve a problem by throwing some machine learning algorithms at a ton of data rather than trying to use your human skills for modeling a problem and coming up with an algorithm to solve it. So, big data is one of many things that, along with increased computational power, allows us to solve problems that we weren’t solving before, like machine translation or continuous speech synthesis and so on. If you give Google a trillion examples of translations from English to Chinese, then it can translate pretty well from English to Chinese without any of the programmers actually knowing Chinese.
SB: Does a super-intelligence need big data to be so super?
LM: Um, well… we don’t know because we haven’t built a super-human intelligence yet but I suspect that big data will in fact be used by the first super-human intelligences, just because big data came before super-human intelligences, it would make little sense for super-human intelligences to not avail themselves of the available techniques and resources. Such as big data. But also, such as more algorithmic insights like Bayes Nets. It would be sort of weird for a super-intelligence to not make use of the past century’s progress in probability theory.
SB: You mentioned before the transition from human control of the world to machine control of the world. How does the disproportionality of access to technology affect that if at all? For example, does the Singularity happen differently in rural India than it does in New York City?
LM: It depends a lot on what is sometimes called the ‘speed of takeoff’–whether we have a hard takeoff or a soft takeoff, or somewhere in between. To explain that, a soft takeoff would be a scenario in which you get human-level intelligence. That is, an AI that is about as good as the median human at doing the things that humans do, including composing music, doing AI research, etc. And then this breakthrough spreads quickly but still at a human time-scale as corporations replace their human workers with these human-level AI’s that are cheaper and more reliable and so on, and there is great economic and social upheaval, and the AI’s have some ability to improve their own intelligence but don’t get very far because of limits on their own intelligence or the available computational resources, and so there is a very slow transition from human control of the world to machines steering the future, where slow is on the order of years to decades.
Another possible scenario though is hard takeoff, which is once you have an AI that is better than humans at finding new insights in intelligence, it is able to improve its own intelligence roughly overnight, to find new algorithms that make it more intelligent just as we are doing now–humans are finding algorithms that make AI’s more intelligent. So now the AI is doing this, and now it has even more intelligence at its disposal to discover breakthroughs in intelligence, and then it has EVEN MORE intelligence with which to discover new breakthroughs in intelligence, and because it’s not being limited by having slow humans in the development loop, it sort of goes from roughly human levels of intelligence to vastly superhuman levels of intelligence in a matter of hours or weeks or months. And then you’ve got a machine that can engage in a global coordinated campaign to achieve its goals and neutralize the human threat to its goals in a way that happens very quickly instead of over years or decades. I don’t know which scenario will play out, so it’s hard to predict how that will go.
SB: It seems like there may be other factors besides the nature of intelligence in play. It seems like to wage a war against all humans, a hard takeoff intelligence, if I’m using the words correctly, would have to have a lot of resources available to it beyond just its intelligence.
LM: That’s right. So, that contributes to our uncertainty about how things play out. For example, does one of the first self-improving human-level artificial intelligences have access to the Internet? Or have people taken enough safety precautions that they keep it “in a box”, as they say. Then the question would be: how good is a super-human AI at manipulating its prison guards so that it can escape the box and get onto the Internet? The weakest point, hackers always know…the quickest way to get into a system is to hack the humans, because humans are stupid. So, there’s that question.
Then there’s questions like: if it gets onto the Internet, how much computing power is there available? Is there enough cheap computing power available for it to hack through a few firewalls and make a billion copies of itself overnight? Or is the computing power required for a super-human intelligence a significant fraction of the computing power available in the world, so that it can only make a few copies of itself. Another question is: what sort of resources are available for converting digital intelligence into physical actions in the human world. For example, right now you can order chemicals from a variety of labs and maybe use a bunch of emails and phone calls to intimidate a particular scientist into putting those chemicals together into a new supervirus or something, but that’s just one scenario and whenever you describe a detailed scenario like that, that particular scenario is almost certainly false and not going to happen, but there are things like that, lots of ways for digital intelligence to be converted to physical action in the world. But how many opportunities are there for that, decades from now, it’s hard to say.
SB: How do you anticipate this intelligence interacting with the social and political institutions around the Internet, supposing it gets to the Internet?
LM: Um, yeah, that’s the sort of situation where one would be tempted to start telling detailed stories about what would happen, but any detailed story would almost certainly be false. It’s really hard to say. I sort of don’t think that a super-human intelligence…if we got to a vastly smarter than human intelligence, it seems like it would probably be an extremely inefficient way for it to achieve its goals by way of causing Congress to pass a new bill somehow…that is an extremely slow and uncertain…much easier just to invent new technologies and threaten humans militarily, that sort of thing.
SB: So do you think that machine control of the world is an inevitability?
LM: Close to it. Humans are not even close to the most intelligent kind of creature you can have. They are closer to the dumbest creature you can have while also having technological civilization. If you could have a dumber creature with a technological civilization, then we would be having this conversation at that level. So it looks like you can have agents that are vastly more capable of achieving their goals in the world than humans are, and there don’t seem to be any in-principle barriers to doing that in machines. The usual objections that are raised like, “Will machines have intentionality?” or “Will machines have consciousness?” don’t actually matter for the question of whether they will have intelligent behavior. You don’t need intentionality or consciousness to be as good as humans at playing chess or driving cars and there’s no reason for thinking we need those things for any of the other things that we like to do. So the main factor motivating this progress is the extreme economic and military advantages to having an artificial intelligence, which will push people to develop incrementally improved systems on the way to full-blown AI. So it looks like we will get there eventually. And then it would be a pretty weird situation in which you had agents that were vastly smarter than humans but that somehow humans were keeping them in cages or keeping them controlled. If we had chimpanzees running the world and humans in cages, humans would be smart enough to figure out how to break out of cages designed by chimpanzees and take over the world themselves.
SB: We are close to running out of time. There are couple more questions on my mind. One is: I think I understand that intelligence is being understood in terms of optimization power, but also that for this intelligence to count it has to be better at all things than humans….
LM: Or some large fraction of them. I’m still happy to define super-human intelligence with regard to all things that humans do, but of course for taking over the world it’s not clear that you need to be able to write novels well.
SB: Ok, so the primary sorts of goals that you are concerned about are the kinds of goals that are involved in taking over the world or are instrumental to it?
LM: Well, that’s right. And unfortunately, taking over the world is a very good idea for just about any goal that you have. Even if your goal is to maximize Exxon Mobil profits or manufacture the maximal number of paper clips or travel to a distant star, it’s a very good idea to take over the world first if you can because then you can use all available resources towards achieving your goal to the max. And also, any intelligent AI would correctly recognize that humans are the greatest threat to it achieving its goals because we will get skittish and worried about what it’s doing and try to shut it off. An AI will of course recognize that that’s true and, if it is at all intelligent, will first seek to neutralize the human threat to it achieving its goals.
SB: What about intelligences that sort of use humans effectively? I’m thinking of an intelligence that was on the Internet. The Internet requires all these human actions for it to be what it is. So why would it make sense for an intelligence whose base of power was the Internet to kill all humans?
LM: Is the scenario you are imagining a kind of scenario where the AI can achieve its goals better with humans rather than neutralizing humans first? Is that what you’re asking?
SB: Yeah, I suppose.
LM: The issue is that unless you define the goals very precisely in terms of keeping humans around or benefiting humans, remember that an AI is capable of doing just about anything that humans can do, and so there aren’t really things that it would need humans for unless the goal structure were specifically defined in terms of benefitting biological humans. And that’s extremely difficult to do. For example, if you found a precise way to specify “maximize human pleasure” or welfare or something, it might just mean that the AI just plugs us all into heroin drips and we never do anything cool. So it’s extremely difficult to specify in math–because AI’s are made of math–what it is that humans want. That gets back to the point I was making at the beginning about the complexity and fragility of human values. It turns out we don’t just value pleasure; we have this large complex of values and indeed different humans have different values from each other. So the problem of AI sort of makes an honest problem of longstanding issues in moral philosophy and value theory and so on.
SB: Ok, one last question, which is: suppose AI is taking off, and we notice that it’s taking off, and the collective intelligence of humanity working together is pitted against this artificial intelligence. Say this happens tomorrow. Who wins?
LM: Well, I mean it depends on so many unknown factors. It may be that if the intelligence is sufficiently constrained and can only improve its intelligence at a slow rate, we might actually notice that one of them is taking off and be able to pull the plug and shut it down soon enough. But that puts us in a very vulnerable state, because if one group has an AI that is capable of taking off, it probably means that other groups are only weeks or months or years or possibly decades behind. And will the correct safety precautions be taken the second, third, and twenty-fifth time?
I thank Luke Muehlhauser for making the time for this interview. I hope to post my reflections on this at a later date.
I was worried when I wrote this that I was exaggerating the phenomenon of literati denouncing technical progress. Then I happened upon this post by a pseudonymous Mr. Teacup, which echoes themes from Morozov’s review.
(At a company Christmas party, we exchanged Secret Santa gifts drawn from each other’s Amazon wish lists. I received Žižek’s In Defense of Lost Causes, and was asked by the Ivy-League educated hacker founder what the book was about. I explained that the book’s lost cause was Enlightenment values, and he was totally shocked by this because he had never heard that they were even in doubt – a typical example of hackers’ ignorance of intellectual trends outside their narrow fields of engineering expertise. But this naivety may explain why some parts of the public finds Silicon Valley’s pseudo-revolutionary marketing message so compelling – their hostility to the humanities has, for good or ill, spared them the influence of postmodernity, so that they are the only segment of society that unselfconsciously adopts universal-emancipatory rhetoric. Admittedly, this rhetoric is misleading and conceals a primarily capitalist agenda. Nonetheless, the public’s misrecognition of Silicon Valley’s potential to liberate also contains a moment of truth.)
All of this is true. But it’s also a matter of perspective. The “narrow fields of engineering expertise” require, to some extent, an embrace of Enlightenment values and universal-emancipatory rhetoric. Meanwhile, the humanities, which have adopted a kind of universal-problematic rhetoric (in which intellectual victory is achieved by labeling something as ‘problematic’), are themselves insulated. Can it be truthfully said that such rhetoric is an ‘intellectual trend’ outside of the narrow fields of highbrow wordslinging?
I wouldn’t know, as I’ve been exposed enough to both sides to have gotten both bugs. And, I’d guess, so has Mr. Teacup, who writes in what I believe is a hyperintellectualized parody:
The reader will find in these pages a repository of chronologically-arranged personal writings on topics at turns varied and repetitious, circulating around certain themes: the Internet and the problematics of New Media; Capitalism; Anti-Capitalism; Psychoanalysis; Film; the works of Žižek, Lacan and others; etc.
…while the author is in fact a web professional living in this century.
I think Mr. Teacup does a good job of diagnosing some of the roots of technophobia. The technophobe denies that the technologists are in fact transforming society because they believe change is possible and are terrified that it will occur, while the technologist is happy to say that Things are Changing–but just as they Always Have, though perhaps much more significantly in their era. (Isn’t the rate of technological change “increasing”? Isn’t that a natural consequence of Moore’s law?)
Those who domesticate social change are telling us that nothing is going to happen: “Yes, things will change, but don’t worry about it! Society will adjust and everything will go back to normal.” This is true conservatism. But some are afraid, because they believe change can really happen. (For example, the Tea Party is the only political group that believes in socialism, while progressives continually deny that it is a possibility.)
What if the converse is also true: those who believe in change are afraid, and this is not the same as opposing it. The technophobic nightmare scenarios of machines spinning out of control is not a delusional fantasy. On the contrary, it gives us an extremely accurate psychological representation of what genuine social change entails. The radical step is to simply endorse it. From the standpoint of the old ways, the birth of the New must be subjectively experienced as an apocalyptic event.
So, Morozov‘s loathing of the Hybrid Reality Institute is due to what again? A legitimate fear that technological change will usher in an autocratic regime that is run by technocratic industrialists without democratic consent. Mr. Teacup writes:
This reveals the general problem with deconstructing the human-technology binary: it frequently undermines legitimate grievances about the coercive uses of technology. People are not that stupid, they don’t oppose technology because they don’t realize they are always-already technologically mediated. They oppose technology because they do realize it – this is what makes it a crucial site of political resistance.
The problem, though, is that technophobia, however entertainingly it is articulated, will do nothing to stop technical change, because (as it’s already been conceded) the people responsible for technical change don’t bother reading expansive critiques informed by the intellectual trends in the humanities. Rather, it seems that technologists are developing their own intellectual tradition based on theories of the Singularity and individual rationality. A more mathematized, libertarian, and pragmatic great-grandchild of Enlightenment thought.
The question for those concerned with the death of democratic politics or the rise of technocolonialism, then, has got to be: how do you do better than whining? Given that technological change is going to happen, how can it be better steered towards less “problematic” ends?
The difficulty with this question is that it is deeply sociotechnical. Meaning, it’s a question where social and technical problems are interleaved so densely that it requires expertise from both sides of the aisle. Which means that the literati and digerati are going to have to respectfully talk to each other.
I want to jot something down while it is on my mind. It’s rather speculative, but may wind up being the theme of my thesis work.
I’ve written here about computational asymmetry in the economy. The idea is that when different agents are endowed with different capacities to compute (or are differently boundedly rational), then that can become an extreme inequality (power-law distributed, as income is) as computational power is stockpiled as a kind of capital accumulation.
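As a toy illustration of the dynamic (entirely my own and not meant as a serious model), consider two agents who differ only in how many options they can evaluate per round, in an environment where option values are heavy-tailed and winnings can be reinvested in more compute:

```python
import random

# A toy model of computational asymmetry as capital accumulation (purely
# illustrative). Each round, an agent evaluates as many options as its
# compute budget allows and keeps the best one it finds. Option values are
# heavy-tailed, so more search finds disproportionately better options;
# winnings are partly reinvested in more compute, so a small initial
# difference in compute compounds into a large gap in wealth.

def best_option(budget, alpha=1.5):
    """Value of the best option found with a given search budget."""
    return max(random.paretovariate(alpha) for _ in range(max(1, budget)))

def simulate(initial_compute=(5, 50), rounds=30, reinvest_rate=0.5):
    compute = list(initial_compute)
    wealth = [0.0] * len(compute)
    for _ in range(rounds):
        for i in range(len(compute)):
            payoff = best_option(compute[i])
            wealth[i] += payoff
            compute[i] += int(reinvest_rate * payoff)  # stockpile compute
    return wealth, compute

if __name__ == "__main__":
    wealth, compute = simulate()
    print("wealth:", wealth)
    print("compute:", compute)
```

Nothing hangs on the specifics; the point is just that compute advantages compound when their outputs can be reinvested.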
Whereas a solution to unequal income is redistribution and a solution to unequal physical power is regulation against violence, for computational asymmetry there is a simpler solution: “openness” in the products of computation. In particular, high quality data goods–data that is computationally rich (has more logical depth)–can be made available as public goods.
There are several challenges to this idea. One is the problem of funding. How do you encourage the production of costly public goods? The classic answer is state funding. Today we have another viable option, crowdfunding.
Another involves questions of security and privacy. Can a policy of ‘openness’ lead to problematic invasions of privacy? Viewing the problem in light of computational asymmetry sheds light on this dynamic. Privacy should be a privilege of the disempowered, openness a requirement of the powerful.
In an ideal economy, agents are rewarded for their contribution to social welfare. For high quality data goods, openness leads to the maximum social welfare. So in theory, agents should be willingly adopting an open policy of their own volition. What has prevented them in the past are transaction costs and the problem of incurred risk. As institutions that reduce transaction costs and absorb risks get better, the remaining problems will be ones of regulation of noncompetitive practices.
I prepared these slides to present Fred Dretske’s paper “The Epistemology of Belief” to a class I’m taking this semester, ‘Concepts of Information’, taught by Paul Duguid and Geoff Nunberg.
Somewhere along the line I realised that if I was put on earth for one reason and one reason only, it was to make slide decks about epistemology.
I’ve had a serious interest in philosophy as a student and as a…hobbyist? can you say that?…for my entire thinking life. I considered going to graduate school for it before tossing the idea for more practical pursuits. So it comes as a delightful surprise that I’ve found an opportunity to read and work with philosophy at a graduate level through my program.
A difficult issue for a “School of Information” is defining what information is. I’ve gathered from conversations with faculty that there is an acknowledged intellectual tussle over the identity of iSchools which hinges in part on the meaning of the word. There seem to me to be roughly two ideologies at play: on the one hand, the cyberneticist ideology that sought to unify Shannon’s information theory, computer science, management science, economics, AI, and psychology under a coherent definition of information; on the other, the softer social science view that ‘information’ is a polysemous term which refers variously to newspapers and the stuff mediated by “information technology” in a loose sense, but is treated primarily as a social phenomenon.
As I’ve been steeped in the cyberneticist tradition but still consider myself literate in English and capable of recognizing social phenomena, it bothers me that people don’t see all this as just talking about the same thing in different ways.
I figured coming into the program that this was an obvious point that was widely accepted. It’s in a way nice to see that this is controversial and the arguments for this view are either unknown, unarticulated, or obscure, because that means I have some interesting work ahead of me.
This slide deck was a first stab at the problem: tying Dretske’s persuasive account of a qualitative definition of ‘information about’ to the relevant concept in Shannon’s information theory. I hope to see how far I can push this in later work. (At the point where it proves impossible, as opposed to merely difficult or non-obvious, we’ll have discovered something new!)
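For the record, the bridge I’m trying to build runs roughly like this (my paraphrase of Dretske on one side, the standard Shannon quantity on the other; neither is a quotation):

```latex
% Dretske's qualitative condition, as I read him: a signal r carries the
% information that s is F just in case
P(s \text{ is } F \mid r, k) = 1
% where k is the receiver's background knowledge.

% Shannon's quantitative counterpart: the average reduction in uncertainty
% about the source S produced by observing the signal R
I(S;R) = H(S) - H(S \mid R)
```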