Digifesto

Category: web

Goodbye, TheListserve!

Today I got an email I never thought I’d get: a message from the creators of TheListserve saying they were closing down the service after over 6 years.

TheListserve was a fantastic idea: it was a mailing list that allowed one person, randomly selected from the subscribers each day, to email everyone else.
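In case you never saw it in action, here is a minimal sketch of that daily mechanic in Python, assuming nothing more than a list of addresses and a toy send_email stand-in (both hypothetical; the real service of course had signups, moderation, and an actual mail pipeline):

```python
import random

def send_email(to, sender, body):
    # Stand-in for a real mail transport; just prints for illustration.
    print(f"To: {to}\nFrom: {sender}\n\n{body}\n")

def run_one_day(subscribers, todays_message):
    """One TheListserve 'day': pick one random subscriber and relay their message."""
    author = random.choice(subscribers)       # one person, chosen at random
    body = todays_message(author)             # whatever that person wants to say today
    for address in subscribers:
        if address != author:                 # everyone else receives it
            send_email(address, author, body)

# Example run with toy data:
run_one_day(
    ["a@example.com", "b@example.com", "c@example.com"],
    lambda author: f"Hello from {author}: the most interesting thing I have to say today.",
)
```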

It was an experiment in creating a different kind of conversational space on-line. And it worked great! Tens of thousands of subscribers, really interesting content–a space unlike most others in social media. You really did get a daily email with what some random person thought was the most interesting thing they had to say.

I was inspired enough by TheListserve to write a Twitter bot based on similar principles, TheTweetserve. Maybe the Twitter bot was also inspired by Habermas. It was not nearly as successful or interesting as TheListserve, for reasons that you could deduce if you thought about it.

Six years ago, “The Internet” was a very different imaginary. There was this idea that a lightweight intervention could capture some of the magic of serendipity that scale and connection had to offer, and that this was going to be really, really big.

It was, I guess, but then the charm wore off.

What’s happened now, I think, is that we’ve been so exposed to connection and scale that the novelty has worn off. We now find ourselves exposed on-line mainly to the imposing weight of statistical aggregates and regressions to the mean. After years of TheListserve messages, the list started, somehow, to seem formulaic. You would get honest, encouraging advice, or a self-promotion. It became, after thousands of emails, a genre in itself.

I wonder if people who are younger and less jaded than I am are still finding and creating cool corners of the Internet. What I hear about more and more now are the ugly parts; they make the news. The Internet used to be full of creative chaos. Now it is so heavily instrumented and commercialized that I get the sense the next generation will see it much like I saw radio or television when I was growing up: as a medium dominated by companies, large and small. Something you had to work hard to break into as a professional, or otherwise not at all.

25,000,000 re: @ftrain

It was gratifying to read Paul Ford’s reluctant think piece about the recent dress meme epidemic.

The most interesting fact in the article was that BuzzFeed’s dress article had gotten 25 million views:

People are also keenly aware that BuzzFeed garnered 25 million views (and climbing) for its article about the dress. Twenty-five million is a very, very serious number of visitors in a day — the sort of traffic that just about any global media property would kill for (while social media is like, ho hum).

I’ve recently become interested in the question: how important is the Internet, really? Those of us who work closely with it every day see it as central to our lives. Logically, we would tend to extrapolate and think that it is central to everybody’s life. If we are used to sampling from others’ experience using social media, we would see that social media is very important in everybody’s life, confirming this suspicion.

This is obviously a kind of sampling bias though.

This is where the 25,000,000 figure comes in handy. My experience of the dress meme was that it was completely ubiquitous. Literally everyone I was following on Twitter who tweeted that day referenced the dress at least once. The meme also got to me via an email backchannel, and came up in a seminar. Perhaps you had a similar experience: you and everyone you knew were aware of this meme.

Let’s assume that 25 million is an indicator of the order of magnitude of people that learned about this meme. If you googled the dress question, you probably clicked the article. Maybe you clicked it twice. Maybe you clicked it twenty times and you are an outlier. Maybe you didn’t click it at all. It’s plausible that it evens out and the actual number of people who were aware of the meme is somewhere between 10 million and 50 million.
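To make the arithmetic explicit, here is the same back-of-the-envelope estimate in a few lines of Python; the numbers are just the rough figures from this post, not measurements:

```python
views = 25_000_000                                     # BuzzFeed's reported view count
reached_low, reached_high = 10_000_000, 50_000_000     # plausible bracket for people aware
us_population = 300_000_000

print(f"Low estimate:  {reached_low / us_population:.0%} of the U.S.")   # ~3%
print(f"High estimate: {reached_high / us_population:.0%} of the U.S.")  # ~17%
print(f"Views as a share of the U.S.: {views / us_population:.0%}")      # ~8%, roughly a tenth
```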

That’s a lot of people. But–and this is really my point–it’s not that many people, compared to everybody. There are about 300 million people in the United States. There are over 7 billion people on the planet. Who are the tenth of the population who were interested in the dress? If you are reading this blog, they are probably people a lot like you or me. Who are the other ~90% of people in the U.S.?

I’ve got a bold hypothesis. My hypothesis is that the other 90% of people are people who have lives. I mean this in the sense of the idiom “get a life”, which has fallen out of fashion for some reason. Increasingly, I’m becoming interested in the vast but culturally foreign population of people who followed this advice at some point in their lives and did not turn back. Does anybody know of any good ethnographic work about them? Where do they hang out in the Bay Area?

Why we need good computational models of peace and love

“Data science” doesn’t refer to any particular technique.

It refers to the cusp of the diffusion of computational methods from computer science, statistics, and applied math (the “methodologists”) to other domains.

The background theory of these disciplines–whose origin we can trace at least as far back as cybernetics research in the 1940s–is required to understand the validity of these “data science” technologies as scientific instruments, just as a theory of optics is necessary to know the validity of what is seen through a microscope. Kuhn calls these kinds of theoretical commitments “instrumental commitments.”

For most domain sciences, instrumental commitment to information theory, computer science, etc. is not problematic. It is more problematic for some social sciences, which oppose the validity of totalizing physics or formalism.

There aren’t a lot of them left, because our mobile phones more or less instrumentally commit us to the cybernetic worldview. Where there is room for alternative metaphysics, it is because of the complexity of emergent/functional properties of the cybernetic substrate. Brier’s Cybersemiotics is one formulation of how richer communicative meaning can be seen as an evolved structure on top of cybernetic information processing.

If “software is eating the world” and we don’t want it to eat us (metaphorically! I don’t think the robots are going to kill us–I think that corporations are going to build robots that make our lives miserable by accident), then we are going to need to have software that understands us. That requires building out cybernetic models of human communication to be more understanding of our social reality and what’s desirable in it.

That’s going to require cooperation between techies and humanists in a way that will be trying for both sides, but worth the effort, I think.

An Interview with the Executive Director of the Singularity Institute

Like many people, I first learned about the idea of the technological Singularity while randomly surfing the internet. It was around the year 2000. I googled “What is the meaning of life?” and found an article explaining that at the rate that artificial intelligence was progressing, we would reach a kind of computational apotheosis within fifty years. I guess at the time I thought that Google hadn’t done a bad job at answering that one, all things considered.

Since then, the Singularity’s been in the back of my mind as one of many interesting but perhaps crackpot theories of how things are going to go for us as a human race. People in my circles would dismiss it as “eschatology for nerds,” and then get back to playing Minecraft.

Since then I’ve moved to Berkeley, California, which turns out to be a hub of Singularity research. I’ve met many very smart people who are invested in reasoning about and predicting the Singularity. Though I don’t agree with all of that thinking, this exposure has given me more respect for it.

I have also learned from academic colleagues a new way to dismiss Singularitarians as “the Tea Party of the Information Society.” This piece by Evgeny Morozov is in line with this view of Singularitarianism as a kind of folk ideology used by dot-com elites to reinforce political power. (My thoughts on that piece are here.)

From where I’m standing, Singularitarianism is a controversial and politically important worldview that deserves honest intellectual scrutiny. In October, I asked Luke Muehlhauser, the Executive Director of the Singularity Institute, if I could interview him. I wanted to get a better sense of what the Singularity Institute was about and get material that could demystify Singularitarianism for others. He graciously accepted. Below is the transcript. I’ve added links where I’ve thought appropriate.


SB: Can you briefly describe the Singularity Institute?

LM: The Singularity Institute is a 501(c)(3) charity founded in the year 2000 by Eliezer Yudkowsky and some Internet entrepreneurs who supported his work for a couple years. The mission of the institute is to ensure that the creation of smarter-than-human intelligence benefits society, and the central problem we think has to do with the fact that very advanced AI’s, number one, by default will do things that humans don’t really like, because humans have very complicated goals, so almost all possible goals you could give an AI would mean restructuring the world according to goals that are different than human goals; and then number two, the transition from human control of the planet to machine control of the planet may be very rapid, because once you get an AI that is better than humans are at designing AI’s and doing AI research, then it will be able to improve its own intelligence in a loop of recursive self-improvement, and very quickly go from roughly human levels of intelligence to vastly superhuman levels of intelligence with lots of power to restructure the world according to its preferences.

SB: How did you personally get involved and what’s your role in it?

LM: I personally became involved because I was interested in the cognitive science of rationality, of changing one’s mind successfully in response to evidence, and of choosing actions that are actually aimed towards achieving one’s goals. Because of my interest in the subject matter I was reading the website LessWrong.com, which has many articles about those subjects, and there I also encountered related material on intelligence explosion, which is this idea of a recursively self-improving artificial intelligence. And from there I read more on the subject, read a bunch of papers and articles and so on, and decided to apply to be a visiting fellow in April of 2011, or rather that’s when my visiting fellowship began, and then in September of 2011 I was hired as a researcher at the Singularity Institute, and then in November of 2011 I was made its Executive Director.

SB: So, just to clarify, is that Singularity the moment when there’s smarter than human intelligence that’s artificial?

LM: The word Singularity unfortunately has been used to mean many different things, so it is important to always clarify which meaning you are using. For our purposes you could call it the technological creation of greater than human intelligence. Other people use it to mean something much broader and more vague, like the acceleration of technology beyond our ability to predict what will happen beyond the Singularity, or something vague like that.

SB: So what is the relationship between the artificial intelligence related question and the personal rationality related questions?

LM: Right, well the reason why the Singularity Institute has long had an interest in both rationality and safety mechanisms for artificial intelligence is that the stakes are very very high when we start thinking about artificial intelligence risks or catastrophic risks in general, and so we want our researchers to not make the kinds of cognitive mistakes that all researchers and all humans tend to make very often, which are these cognitive biases that are so well documented in psychology and behavioral economics. And so we think it’s very important for our researchers to be really world class in changing their minds in response to evidence, and thinking through what the probability of different scenarios is rather than going with which ones feel intuitive to us, and thinking clearly about which actions now will actually influence the future in positive ways rather than which actions will accrue status or prestige to ourselves, that sort of thing.

SB: You mentioned that some Internet entrepreneurs were involved in the starting of the organization. Who funds your organization and why do they do that?

LM: The largest single funder of the Singularity Institute is Peter Thiel, who cofounded PayPal and has been involved in several other ventures. His motivations are some concern for existential risk, some enthusiasm for the work of our cofounder and senior researcher Eliezer Yudkowsky, and probably other reasons. Another large funder is Jaan Tallinn, the co-creator of Skype and Kazaa. He’s also concerned with existential risk and the rationality related work that we do. There are many other funders of the Singularity Institute as well.

SB: Are there other organizations that do similar work?

LM: Yeah, the closest organization to what we do is the Future of Humanity Institute at Oxford University, in the United Kingdom. We collaborate with them very frequently. We go to each other’s conferences, we write papers together, and so on. The Future of Humanity Institute has a broader concern with cognitive enhancement and emerging technologies and existential risks in general, but for the past few years they have been focusing on machine superintelligence, and so they’ve been working on the same issues that the Singularity Institute is devoted to. Another related organization is a new one called the Global Catastrophic Risks Institute. We collaborate with them as well. And again, they are not solely focused on AI risks like the Singularity Institute but on global catastrophic risks, and AI is one of them.

SB: You mentioned super human-intelligence quite a bit. Would you say that Google is a super-human intelligence?

LM: Well, yeah, so we have to be very careful about all the words that we are using, of course. What I mean by intelligence is this notion of what is sometimes called optimization power, which is the ability to achieve one’s goals in a wide range of environments and under a wide range of constraints. And so for example, humans have a lot more optimization power than chimpanzees. That’s why even though we are slower than many animals and not as strong as many animals, we have this thing called intelligence that allows us to commence farming and science and build cities and put footprints on the moon. And so it is humans that are steering the future of the globe and not chimpanzees or stronger things like blue whales. So that’s kind of the intuitive notion. There are lots of technical papers that would be more precise. So when I am talking about super-human intelligence, I specifically mean an agent that is as good as or better than humans at just about every skill set that humans possess for achieving their goals. So that would include things like not just mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain, not the kidneys.

SB: To clarify, you mentioned that humans are better than chimpanzees at achieving their goals. Do you mean humans collectively or individually? And likewise for chimpanzees.

LM: Maybe the median chimpanzee versus the median human. There are lots of different ways that you could cash that out. I’m sure there are some humans in a vegetative state that are less effective at achieving their goals than some of the best chimpanzees.

SB: So, whatever this intelligence is, it must have goals?

LM: Yeah, well there are two ways of thinking about this. You can talk about it having a goal architecture that is explicitly written into its code that motivates its behavior. Or, that isn’t even necessary. As long as you can model its behavior as fulfilling some sort of utility function, you can describe its goals that way. In fact, that’s what we do with humans in fields like economics, where you have a revealed preferences architecture. You measure a human’s preferences on a set of lotteries and from that you can extract a utility function that describes their goals. We haven’t done enough neuroscience to directly represent what humans’ goals are, if they even have such a thing explicitly encoded in their brains.

SB: It’s interesting that you mentioned economics. So is like a corporation a kind of super-human intelligence?

LM: Um, you could model a corporation that way, except that it’s not clear that corporations are better than all humans at all different things. It would be a kind of weird corporation that was better than the best human or even the median human at all the things that humans do. Corporations aren’t usually the best in music and AI research and theorem proving and stock markets and composing novels. And so there certainly are corporations that are better than median humans at certain things, like digging oil wells, but I don’t think there are corporations as good as or better than humans at all things. More to the point, there is an interesting difference here because corporations are made of lots of humans and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of their skulls, whereas you can have an AI that is the size of a warehouse. Those kinds of things.

SB: There’s a lot of industry buzz now around the term ‘big data’. I was wondering if there’s any connection between rationality or the Singularity and big data.

LM: Certainly. Big data is just another step. It provides opportunity for a lot of progress in artificial intelligence because very often it is easier to solve a problem by throwing some machine learning algorithms at a ton of data rather than trying to use your human skills for modeling a problem and coming up with an algorithm to solve it. So, big data is one of many things that, along with increased computational power, allows us to solve problems that we weren’t solving before, like machine translation or continuous speech synthesis and so on. If you give Google a trillion examples of translations from English to Chinese, then it can translate pretty well from English to Chinese without any of the programmers actually knowing Chinese.

SB: Does a super-intelligence need big data to be so super?

LM: Um, well… we don’t know because we haven’t built a super-human intelligence yet, but I suspect that big data will in fact be used by the first super-human intelligences. Just because big data came before super-human intelligences, it would make little sense for them not to avail themselves of the available techniques and resources, such as big data, but also more algorithmic insights like Bayes nets. It would be sort of weird for a super-intelligence not to make use of the past century’s progress in probability theory.

SB: You mentioned before the transition from human control of the world to machine control of the world. How does the disproportionality of access to technology affect that if at all? For example, does the Singularity happen differently in rural India than it does in New York City?

LM: It depends a lot on what is sometimes called the ‘speed of takeoff’–whether we have a hard takeoff or a soft takeoff, or somewhere in between. To explain that, a soft takeoff would be a scenario in which you get human-level intelligence. That is, an AI that is about as good as the median human at doing the things that humans do, including composing music, doing AI research, etc. And then this breakthrough spreads quickly but still at a human time-scale as corporations replace their human workers with these human-level AI’s that are cheaper and more reliable and so on, and there is great economic and social upheaval, and the AI’s have some ability to improve their own intelligence but don’t get very far because of limits on their own intelligence or the available computational resources, and so there is a very slow transition from human control of the world to machines steering the future, where slow is on the order of years to decades.

Another possible scenario though is hard takeoff, which is once you have an AI that is better than humans at finding new insights in intelligence, it is able to improve its own intelligence roughly overnight, to find new algorithms that make it more intelligent just as we are doing now–humans are finding algorithms that make AI’s more intelligent. So now the AI is doing this, and now it has even more intelligence at its disposal to discover breakthroughs in intelligence, and then it has EVEN MORE intelligence with which to discover new breakthroughs in intelligence, and because it’s not being limited by having slow humans in the development loop, it sort of goes from roughly human levels of intelligence to vastly superhuman levels of intelligence in a matter of hours or weeks or months. And then you’ve got a machine that can engage in a global coordinated campaign to achieve its goals and neutralize the human threat to its goals in a way that happens very quickly instead of over years or decades. I don’t know which scenario will play out, so it’s hard to predict how that will go.

SB: It seems like there may be other factors besides the nature of intelligence in play. It seems like to wage a war against all humans, a hard takeoff intelligence, if I’m using the words correctly, would have to have a lot of resources available to it beyond just its intelligence.

LM: That’s right. So, that contributes to our uncertainty about how things play out. For example, does one of the first self-improving human-level artificial intelligences have access to the Internet? Or have people taken enough safety precautions that they keep it “in a box”, as they say. Then the question would be: how good is a super-human AI at manipulating its prison guards so that it can escape the box and get onto the Internet? The weakest point, as hackers always know… the quickest way to get into a system is to hack the humans, because humans are stupid. So, there’s that question.

Then there’s questions like: if it gets onto the Internet, how much computing power is there available? Is there enough cheap computing power available for it to hack through a few firewalls and make a billion copies of itself overnight? Or is the computing power required for a super-human intelligence a significant fraction of the computing power available in the world, so that it can only make a few copies of itself? Another question is: what sort of resources are available for converting digital intelligence into physical actions in the human world? For example, right now you can order chemicals from a variety of labs and maybe use a bunch of emails and phone calls to intimidate a particular scientist into putting those chemicals together into a new supervirus or something, but that’s just one scenario, and whenever you describe a detailed scenario like that, that particular scenario is almost certainly false and not going to happen, but there are things like that, lots of ways for digital intelligence to be converted into physical action in the world. But how many opportunities there will be for that decades from now is hard to say.

SB: How do you anticipate this intelligence interacting with the social and political institutions around the Internet, supposing it gets to the Internet?

LM: Um, yeah, that’s the sort of situation where one would be tempted to start telling detailed stories about what would happen, but any detailed story would almost certainly be false. It’s really hard to say. I sort of don’t think that a super-human intelligence… if we got to a vastly smarter than human intelligence, it seems like it would probably be an extremely inefficient way for it to achieve its goals by way of causing Congress to pass a new bill somehow… that is an extremely slow and uncertain… much easier just to invent new technologies and threaten humans militarily, that sort of thing.

SB: So do you think that machine control of the world is an inevitability?

LM: Close to it. Humans are not even close to the most intelligent kind of creature you can have. They are closer to the dumbest creature you can have while also having technological civilization. If you could have a dumber creature with a technological civilization then we would be having this conversation at that level. So it looks like you can have agents that are vastly more capable of achieving their goals in the world than humans are, and there don’t seem to be any in principle barriers to doing that in machines. The usual objections that are raised, like “Will machines have intentionality?” or “Will machines have consciousness?”, don’t actually matter for the question of whether they will have intelligent behavior. You don’t need intentionality or consciousness to be as good as humans at playing chess or driving cars, and there’s no reason for thinking we need those things for any of the other things that we like to do. So the main factor motivating this progress is the extreme economic and military advantages to having an artificial intelligence, which will push people to develop incrementally improved systems on the way to full-blown AI. So it looks like we will get there eventually. And then it would be a pretty weird situation in which you had agents that were vastly smarter than humans but somehow humans were keeping them in cages or keeping them controlled. If we had chimpanzees running the world and humans in cages, humans would be smart enough to figure out how to break out of cages designed by chimpanzees and take over the world themselves.

SB: We are close to running out of time. There are a couple more questions on my mind. One is: I think I understand that intelligence is being understood in terms of optimization power, but also that for this intelligence to count it has to be better at all things than humans….

LM: Or some large fraction of them. I’m still happy to define super-human intelligence with regard to all things that humans do, but of course for taking over the world it’s not clear that you need to be able to write novels well.

SB: Ok, so the primary sorts of goals that you are concerned about are the kinds of goals that are involved in taking over the world or are instrumental to it?

LM: Well, that’s right. And unfortunately, taking over the world is a very good idea for just about any goal that you have. Even if your goal is to maximize Exxon Mobil profits or manufacture the maximal number of paper clips or travel to a distant star, it’s a very good idea to take over the world first if you can, because then you can use all available resources towards achieving your goal to the max. And also, any intelligent AI would correctly recognize that humans are the greatest threat to it achieving its goals, because we will get skittish and worried about what it’s doing and try to shut it off. An AI will of course recognize that that’s true and, if it is at all intelligent, will first seek to neutralize the human threat to it achieving its goals.

SB: What about intelligences that sort of use humans effectively? I’m thinking of an intelligence that was on the Internet. The Internet requires all these human actions for it to be what it is. So why would it make sense for an intelligence whose base of power was the Internet to kill all humans?

LM: Is the scenario you are imagining a kind of scenario where the AI can achieve its goals better with humans rather than neutralizing humans first? Is that what you’re asking?

SB: Yeah, I suppose.

LM: The issue is that unless you define the goals very precisely in terms of keeping humans around or benefiting humans, remember that an AI is capable of doing just about anything that humans can do, and so there aren’t really things that it would need humans for unless the goal structure were specifically defined in terms of benefiting biological humans. And that’s extremely difficult to do. For example, if you found a precise way to specify “maximize human pleasure” or welfare or something, it might just mean that the AI plugs us all into heroin drips and we never do anything cool. So it’s extremely difficult to specify in math–because AI’s are made of math–what it is that humans want. That gets back to the point I was making at the beginning about the complexity and fragility of human values. It turns out we don’t just value pleasure; we have this large complex of values, and indeed different humans have different values from each other. So the problem of AI sort of makes an honest problem of longstanding issues in moral philosophy and value theory and so on.

SB: Ok, one last question, which is: suppose AI is taking off, and we notice that it’s taking off, and the collective intelligence of humanity working together is pitted against this artificial intelligence. Say this happens tomorrow. Who wins?

LM: Well, I mean it depends on so many unknown factors. It may be that if the intelligence is sufficiently constrained and can only improve its intelligence at a slow rate, we might actually notice that one of them is taking off and be able to pull the plug and shut it down soon enough. But that puts us in a very vulnerable state, because if one group has an AI that is capable of taking off, it probably means that other groups are only weeks or months or years or possibly decades behind. And will the correct safety precautions be taken the second, third, and twenty-fifth time?

I thank Luke Muehlhauser for making the time for this interview. I hope to post my reflections on this at a later date.

Remixes free, originals not

I’m interested in whether others have experienced this phenomenon: a new pop (or indie pop? I can’t tell the difference any more) song gets some recognition. While it is difficult to find the original song for free on the internet, music blogs post remixes of the song for free downloading.

I’m pretty sure that this is illegal. Whatever. The question is why this is a recurring phenomenon. First thoughts:

  • For the original song, there is incentive for the recording studio to crack down on distribution, and it seems that for the most part, they do.
  • For remixes, there is less incentive. Why is that? Here are some possibilities:
    • They aren’t going to make any money off the remix anyway, so why bother enforcing access to it?
    • The remix is going to drive up sales of the original song by increasing people’s exposure to it, so the recording studio has reason to let the remix run free.

Any other ideas?

According to copyright law, the original copyright holder has rights to derivative works. I’m assuming that most of these remixes are made and distributed without the original copyright holder’s permission, though maybe that’s wrong. Maybe I’m just a sap, believing the myth of the underground digital remix artist, when in fact there are base economic motives in play. There could easily be pseudonymous remix artists who ply their trade in coordination with music studios, making dance remixes of songs more or less to generate an “underground” following.

That wouldn’t be a bad thing, though as a pattern it would have the risk of crowding out original music from the music “market” through free remixes. Obviously, that’s not sustainable, which I suppose is why music blogs seem to have a time limit on how long they keep content up. Suppose: after an incubation period, the underground effect ceases to increase sales because the song has already gone “mainstream.”

If this is how things work, there’s something elegant but also diabolical about this pattern. As a mechanized process, the underground becomes nothing more than a channel through which things emerge. Cool can be a state of being on a gray market fringe, but that fringe is just a flower crafted by a larger organism to attract pollinating bees. Aficionados become part of the ecosystem, rather than advancers of it. Does the system continue to evolve?

Holy War on Kiva more fun than throwing virtual sheep

Poking around web-enabled microlending organization Kiva’s website, something that stuck out immediately was the “Lending Teams” feature, which prominently shows which teams have been most involved in micro-financing.

There is a holy war going on between Christians and Atheists to prove who are the better people. Atheists are winning.

Kiva president Premal Shah explains the phenomenon. Lending teams make Kiva fun, because (by implication) trash talking your ideological enemies is fun.

This is important, Shah notes, if Kiva is competing primarily for people’s attention. Since a lot of microloans are paid back, the cost of participation (for people with enough liquidity) is negligible. So what prevents people from doing more microlending is that they are too preoccupied throwing virtual sheep at each other, for example.

One hopes that no matter whether the Atheists or Christians are right, Farmville burns in the End Times.