Digifesto

Category: hacker culture

thinking about meritocracy in open source communities

There has been a trend in open source development culture over the past ten years or so. It is the rejection of ‘meritocracy’. Just now, I saw this Post-Meritocracy Manifesto, originally created by Coraline Ada Ehmke. It is exactly what it sounds like: an explicit rejection of meritocracy, specifically in open source development. It captures a recent progressive wing of software development culture. It is attracting signatories.

I believe this is a “trend” because I noticed a more subtle expression of similar ideas a few months ago. It came up when we were drafting a Code of Conduct for BigBang. We wound up picking the Contributor Covenant Code of Conduct, though there are still some open questions about how to integrate it with our Governance policy.

This Contributor Covenant is widely adopted and its language seems good to me. I was surprised, though, to find that the rationale for it specifically mentions meritocracy as a problem the code of conduct is trying to avoid:

Marginalized people also suffer some of the unintended consequences of dogmatic insistence on meritocratic principles of governance. Studies have shown that organizational cultures that value meritocracy often result in greater inequality. People with “merit” are often excused for their bad behavior in public spaces based on the value of their technical contributions. Meritocracy also naively assumes a level playing field, in which everyone has access to the same resources, free time, and common life experiences to draw upon. These factors and more make contributing to open source a daunting prospect for many people, especially women and other underrepresented people.

If it looks familiar, it may be because it was written by the same author, Coraline Ada Ehmke.

I have to admit that though I’m quite glad that we have a Code of Conduct now in BigBang, I’m uncomfortable with the ideological presumptions of its rationale and the rejection of ‘meritocracy’. There is a lot packed into this paragraph that is open to productive disagreement and which is not necessary for a commitment to the general point that harassment is bad for an open source community.

Perhaps this would be easier for me to ignore if this political framing did not mirror so many other political tensions today, and if open source governance were not something I’ve been so invested in understanding. I’ve taught a course on open source management, and BigBang spun out of that effort as an experiment in scientific analysis of open source communities. I am, I believe, deep in on this topic.

So what’s the problem? The problem is that I think there’s something painfully misaligned between criticism of meritocracy in culture at large and criticism of it in open source development, which is a very particular kind of organizational form. There is also perhaps a misalignment between the progressive politics of inclusion expressed in these manifestos and what many open source communities are really trying to accomplish. Surely there must be some kind of merit that is not in scare quotes, or else there would not be any good open source software to use or raise a fuss about.

Though it does not directly address the issue, I’m reminded of an old email discussion on the Numpy mailing list that I found when I was trying to do ethnographic work on the Scientific Python community. It was a message from John Hunter, the creator of Matplotlib, responding to concerns about corporate control over NumPy that were raised when Travis Oliphant, the leader of NumPy, started Continuum Analytics. Hunter quite thoughtfully, in my opinion, debunked the idea that open source governance should be a ‘democracy’, as many people assume institutions ought to be by default. After a long discussion about how Travis had great merit as a leader, he argued:

Democracy is something that many of us have grown up by default to consider as the right solution to many, if not most, problems of governance. I believe it is a solution to a specific problem of governance. I do not believe democracy is a panacea or an ideal solution for most problems: rather it is the right solution to problems for which the consequences of failure are too high. In a state (by which I mean a government with a power to subject its people to its will by force of arms) where the consequences of failure to submit include the death, dismemberment, or imprisonment of dissenters, democracy is a safeguard against the excesses of the powerful. Generally, there is no reason to believe that the simple majority of people polled is the “best” or “right” answer, but there is also no reason to believe that those who hold power will rule beneficiently. The democratic ability of the people to check the rule of the few and powerful is essential to insure the survival of the minority.

In open source software development, we face none of these problems. Our power to fork is precisely the power the minority in a tyranical democracy lacks: noone will kill us for going off the reservation. We are free to use the product or not, to modify it or not, to enhance it or not.

The power to fork is not abstract: it is essential. matplotlib, and chaco, both rely *heavily* on agg, the Antigrain C++ rendering library. At some point many years ago, Maxim, the author of Agg, decided to change the license of Agg (circa version 2.5) to GPL rather than BSD. Obviously, this was a non-starter for projects like mpl, scipy and chaco which assumed BSD licensing terms. Unfortunately, Maxim had a new employer which appeared to us to be dictating the terms and our best arguments fell on deaf ears. No matter: mpl and Enthought chaco have continued to ship agg 2.4, pre-GPL, and I think that less than 1% of our users have even noticed. Yes, we forked the project, and yes, noone has noticed. To me this is the ultimate reason why governance of open source, free projects does not need to be democratic. As painful as a fork may be, it is the ultimate antidote to a leader who may not have your interests in mind. It is an antidote that we citizens in a state government may not have.

It is true that numpy exists in a privileged position in a way that matplotlib or scipy does not. Numpy is the core. Yes, Continuum is different than STScI because Travis is both the lead of Numpy and the lead of the company sponsoring numpy. These are important differences. In the worst cases, we might imagine that these differences will negatively impact numpy and associated tools. But these worst case scenarios that we imagine will most likely simply distract us from what is going on: Travis, one of the most prolific and valuable contributers to the scientific python community, has decided to refocus his efforts to do more. And that is a very happy moment for all of us.

This is a nice articulation of how forking, not voting, is the most powerful governance mechanism in open source development, and how it changes what our default assumptions about leadership ought to be. A critical but I think unacknowledged question is how the possibility of forking interacts with the critique of meritocracy in organizations in general, and specifically what that means for community inclusiveness as a goal in open source communities. I don’t think it’s straightforward.

Note: Nick Doty has written a nice response to this on his blog.

the “hacker class”, automation, and smart capital


I mentioned earlier that I no longer think hacker class consciousness is important.

As incongruous as this claim is now, I’ve explained that this is coming up as I go through old notes and discard them.

I found another page of notes that reminds me there was a little more nuance to my earlier position than I remembered, which has to do with the kind of labor done by “hackers”, a term I reserve the right to use in the MIT/Eric S. Raymond sense, without the political baggage that has since attached to the term.

The point, made in response to Eric S. Raymond’s “How to Become a Hacker” essay, was that part of what it means to be a “hacker” is to hate drudgery. The whole point of programming a computer is so that you never have to do the same activity twice. Ideally, anything that’s repeatable about the activity gets delegated to the computer.

This is relevant in the contemporary political situation because we’re probably now dealing with the upshot of structural underemployment due to automation and the resulting inequalities. This remains a topic that scholars, technologists, and politicians seem systematically unable to address directly even when they attempt to, because everybody who sees the writing on the wall is too busy trying to get the sweet end of that deal.

It’s a very old argument that those who own the means of production are able to negotiate for a better share of the surplus value created by their collaborations with labor. Those who own or invest in capital generally speaking would like to increase that share. So there’s market pressure to replace reliance on skilled labor, which is expensive, with reliance on less skilled labor, which is plentiful.

So what gets industrialists excited is smart capital, or a means of production that performs the “skilled” functions formerly performed by labor. Call it artificial intelligence. Call it machine learning. Call it data science. Call it “the technology industry”. That’s what’s happening and been happening for some time.

This leaves good work for a single economic class of people, those whose skills are precisely those that produce this smart capital.

I never figured out what the end result of this process would be. I imagined at one point that the creation of the right open source technology would bring about a profound economic transformation. A far-fetched hunch.

The node.js fork — something new to think about

For Classics we are reading Albert Hirschman’s Exit, Voice, and Loyalty. Oddly, though normally I hear about ‘voice’ as an action from within an organization, the first few chapters of the book (including the introduction of the Voice concept itself) are preoccupied with elaborations on the neoclassical market mechanism. Not what I expected.

I’m looking for interesting research use cases for BigBang, which is about analyzing the sociotechnical dynamics of collaboration. I’m building it to better understand open source software development communities, primarily. This is because I want to create a harmonious sociotechnical superintelligence to take over the world.

For a while I’ve been interested in Hadoop as a case of one software project with two companies working together to build it. This is reminiscent (for me) of when we started GeoExt at OpenGeo and Camp2Camp. The economics of shared capital are fascinating, and there are interesting questions about how human resources get organized in that sort of situation. In my experience, a tension develops between the needs of firms to differentiate their products and make good on their contracts and the needs of the developer community, whose collective value is ultimately tied to the robustness of their technology.

Unfortunately, building out BigBang to integrate with various email, version control, and issue tracking backends is a lot of work, and there’s only one of me right now to build the infrastructure, do the research, and train new collaborators (who are starting to do some awesome work, so this is paying off). While integrating with Apache’s infrastructure would have been a smart first move, instead I chose to focus on Mailman archives and git repositories. Google Groups and whatever Apache is using for their email lists do not publish their archives in .mbox format, which is a pain for me. But luckily Google Takeout does export data from folks’ on-line inboxes in .mbox format. This is great for BigBang because it means we can investigate email data from any project for which we know an insider willing to share their records.
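To make that concrete, here is a minimal sketch (not BigBang’s actual code; the file name and the choice of headers are just illustrative) of pulling a Google Takeout .mbox export into a Pandas DataFrame with Python’s standard mailbox module:

    # Minimal sketch: read an .mbox export and keep one row of headers per message.
    import mailbox
    import pandas as pd

    def mbox_to_dataframe(path):
        """Read an .mbox file and return a DataFrame with basic headers per message."""
        rows = []
        for message in mailbox.mbox(path):
            rows.append({
                "from": message.get("From"),
                "to": message.get("To"),
                "date": message.get("Date"),
                "subject": message.get("Subject"),
                "message_id": message.get("Message-ID"),
                "in_reply_to": message.get("In-Reply-To"),
            })
        return pd.DataFrame(rows)

    # Hypothetical usage:
    # df = mbox_to_dataframe("takeout-inbox.mbox")
    # print(df["from"].value_counts().head())

From a DataFrame like that, the counts and reply-chain reconstructions we’re after are ordinary Pandas operations.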

Does a research ethics issue arise when you start working with email that is openly archived in a difficult format, then exported from somebody’s private email? Technically you get header information that wasn’t open before–perhaps it was ‘private’. But arguably this header information isn’t personal information. I think I’m still in the clear. Plus, IRB will be irrelevant when the robots take over.

All of this is a long way of getting around to talking about a new thing I’m wondering about: the Node.js fork. It’s interesting to think about open source software forks in light of Hirschman’s concepts of Exit and Voice, since so much of the activity of open source development is open, virtual communication. While you might at first think a software fork is definitely a kind of Exit, it sounds like IO.js was perhaps a friendly fork by somebody who just wanted to hack around. In theory, code can be shared between forks–in fact this was the principle that GitHub’s forking system was founded on. So there are open questions (to me, who isn’t involved in the Node.js community at all and is just now beginning to wonder about it) about to what extent a fork is a real event in the history of the project, vs. to what extent it’s mythological, vs. to what extent it’s a reification of something that was already implicit in the project’s sociotechnical structure. There are probably other great questions here as well.

A friend on the inside tells me all the action on this happened (is happening?) on the GitHub issue tracker, which is definitely data we want to get BigBang connected with. Blissfully, there appear to be well supported Python libraries for working with the GitHub API. I expect the first big hurdle we hit here will be rate limiting.
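For a flavor of what that collection might look like, here is a rough sketch using the requests library against GitHub’s public REST API, watching the X-RateLimit-Remaining header; the repository name and token handling are illustrative assumptions, not working BigBang code:

    # Rough sketch: page through a repository's issues, stopping when rate limited.
    import requests

    def fetch_issues(repo, token=None, max_pages=10):
        """Collect issues for owner/repo, watching GitHub's rate-limit headers."""
        headers = {"Authorization": "token " + token} if token else {}
        issues = []
        for page in range(1, max_pages + 1):
            resp = requests.get(
                "https://api.github.com/repos/{}/issues".format(repo),
                params={"state": "all", "per_page": 100, "page": page},
                headers=headers,
            )
            if resp.status_code != 200:
                break  # error or rate limited; give up for now
            batch = resp.json()
            if not batch:
                break  # no more pages
            issues.extend(batch)
            if int(resp.headers.get("X-RateLimit-Remaining", "0")) == 0:
                break  # out of API calls until the limit resets
        return issues

    # issues = fetch_issues("nodejs/node", token="...")  # repository name is illustrative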

Though we haven’t been able to make integration work yet, I’m still hoping there’s some way we can work with MetricsGrimoire. They’ve been a super inviting community so far. But our software stacks and architecture are just different enough, and the layers we’ve built so far thin enough, that it’s hard to see how to do the merge. A major difference is that while MetricsGrimoire tools are built to provide application interfaces around a MySQL data backend, since BigBang is foremost about scientific analysis our whole data pipeline is built to get things into Pandas dataframes. Both projects are in Python. This too is a weird microcosm of the larger sociotechnical ecosystem of software production, of which the “open” side is only one (important) part.
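Just as a hedged illustration of what a bridge might look like, Pandas can read straight out of a MySQL backend like the ones the MetricsGrimoire tools maintain; the connection string, table, and column names below are made up for the example:

    # Hedged sketch: pull rows from a MetricsGrimoire-style MySQL database into Pandas.
    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical connection string and schema.
    engine = create_engine("mysql+pymysql://user:password@localhost/metrics_db")
    df = pd.read_sql("SELECT sender, sent_at, subject FROM messages", engine)

    # Once it is a DataFrame, the analysis can proceed in Pandas as usual.
    print(df.groupby("sender").size().sort_values(ascending=False).head())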

technical work

Dipping into Julian Orr’s Talking about Machines, an ethnography of Xerox photocopier technicians, has set off some light bulbs for me.

First, there’s Orr’s story: Orr dropped out of college and got drafted, then worked as a technician in the military before returning to school. He paid the bills doing technical repair work, and found it convenient to do his dissertation on those doing photocopy repair.

Orr’s story reminds me of my grandfather and great-uncle, both of whom were technicians–radio operators–during WWII. Their civilian careers were as carpenters, building houses.

My own dissertation research is motivated by my work background as an open source engineer, and my own desire to maintain and improve my technical chops. I’d like to learn to be a data scientist; I’m also studying data scientists at work.

Also fascinating was Orr’s discussion of Xerox technicians’ identity as technicians, as opposed to customers:

The distinction between technician and customer is a critical division of this population, but for technicians at work, all nontechnicians are in some category of other, including the corporation that employs the technicians, which is seen as alien, distant, and only sometimes an ally.

It’s interesting to read about this distinction between technicians and others in the context of Xerox photocopiers when I’ve been so affected lately by the distinction between tech folk and others, and between data scientists and others. This distinction between those who do technical work and those whom they serve is a deep historical one that transcends the contemporary and over-computed world.

I recall my earlier work experience. I was a decent engineer and engineering project manager. I was a horrible account manager. My customer service skills were abysmal, because I did not empathize with the client. The open source context contributes to this attitude, because it makes a different set of demands on its users than consumer technology does. You get assistance with consumer-grade technology by hiring a technician who treats you as a customer. You get assistance with open source technology by joining the community of practice as a technician. Commercial open source software, according to the Pentaho beekeeper model, is about providing, at cost, that customer support.

I’ve been thinking about customer service and reflecting on my failures at it a lot lately. It keeps coming up. Mary Gray’s piece, When Science, Customer Service, and Human Subjects Research Collide, explicitly makes the connection between commercial data science at Facebook and customer service. The ugly dispute between Gratipay (formerly Gittip) and Shanley Kane was, I realized after the fact, a similar crisis between the expectations of customers and customer service people and the expectations of open source communities. When “free” (gratis) web services display the same disregard for their users that open source communities do, it’s harder to justify in the way that FOSS justifies it. But there are similar tensions, perhaps. It’s hard for technicians to empathize with non-technicians about their technical problems, because their lived experience is so different.

It’s alarming how much is being hinged on the professional distinction between technical worker and non-technical worker. The intra-technology-industry debates are thick with confusions along these lines. What about marketing people in the tech context? Sales? Are the “tech folks” responsible for distributional justice today? Are they in the throes of an ideology? I was reading a paper the other day suggesting that software engineers should be held ethically accountable for the implicit moral implications of their algorithms. Specifically the engineers; for some reason not the designers or product managers or corporate shareholders, who were not mentioned. An interesting proposal.

Meanwhile, at the D-Lab, where I work, I’m in the process of navigating my relationship between two teams, the Technical Team and the Services Team. I have been on the Technical Team in the past. Our work has been to stay on top of and assist people with data science software and infrastructure. Early on, we abolished regular meetings as a waste of time. Naturally, there was a suspicion expressed to me at one point that we were unaccountable and didn’t do as much work as others on the Services Team, which dealt directly with the people-facing component of the lab–scheduling workshops, managing the undergraduate work-study staff. Sitting in on Services meetings for the first time this semester, I’ve been struck by how much work the other team does. By and large, it’s information work: calendaring, scheduling, entering into spreadsheets, documenting processes in case of turnover, sending emails out, responding to emails. All important work.

This is exactly the work that information technicians want to automate away. If there is a way to reduce the amount of calendaring and spreadsheet entry, programmers will find it. The whole purpose of computer science is to automate tasks that would otherwise be tedious.

Eric S. Raymond’s classic (2001) essay How to Become a Hacker characterizes the Hacker Attitude in five points:

  1. The world is full of fascinating problems waiting to be solved.
  2. No problem should ever have to be solved twice.
  3. Boredom and drudgery are evil.
  4. Freedom is good.
  5. Attitude is no substitute for competence.

There is no better articulation of the “ideology” of “tech folks” than this, in my opinion, yet Raymond is not used much as a source for understanding the idiosyncrasies of the technical industry today. Of course, not all “hackers” are well characterized by Raymond (I’m reminded of Coleman’s injunction to speak of “cultures of hacking”) and not all software engineers are hackers. (I’m sure my sister, a software engineer, is not a hacker. For example, based on my conversations with her, it’s clear that she does not find all the unsolved problems of the world intrinsically fascinating. Rather, she finds problems that pertain to some human interest, like children’s education, to be most motivating. I have no doubt that she is a much better software engineer than I am–she has worked full time at it for many years and now works for a top tech company. As somebody closer to the Raymond Hacker ethic, I recognize that my own attitude is no substitute for that competence, and hold my sister’s abilities in very high esteem.)

As usual, I appear to have forgotten where I was going with this.

picking a data backend for representing email in #python

I’m at a difficult crossroads with BigBang where I need to pick an appropriate data storage backend for my preprocessed mailing list data.

There are a lot of different aspects to this problem.

The first and most important consideration is speed. If you know anything about computer science, you know that it exists to quickly execute complex tasks that would take too long to do by hand. It’s odd writing that sentence since computational complexity considerations are so fundamental to algorithm design that this can go unspoken in most technical contexts. But since coming to grad school I’ve found myself writing for a more diverse audience, so…

The problem I’m facing is that in doing exploratory data analysis, I do not know all the questions I am going to ask yet. But any particular question will be impractical to ask unless I tune the underlying infrastructure to answer it. This chicken-and-egg problem means that the process of inquiry is necessarily constrained by the engineering options that are available.

This is not new in scientific practice. Notoriously, the field of economics in the 20th century was shaped by what was analytically tractable as formal, mathematical results. The nuance of contemporary modeling of complex systems is due largely to the fact that we now have computers to do this work for us. That means we can still have the intersubjectively verified rigor that comes with mathematization without trying to fit square pegs into round holes. (Side note: something mathematicians acknowledge that others tend to miss is that mathematics is based on dialectic proof and intersubjective agreement. This makes it much closer epistemologically to something like history as a discipline than it is to technical fields dedicated to prediction and control, like chemistry or structural engineering. Computer science is in many ways an extension of mathematics. Obviously, these formalizations are then applied to great effect. Their power comes from their deep intersubjective validity–in other words, their truth. Disciplines that have dispensed with intersubjective validity as a grounds for truth claims in favor of a more nebulous sense of diverse truths in a manifold of interpretation have difficulty understanding this and so are likely to see the institutional gains of computer scientists to be a result of political manipulation, as opposed to something more basic: mastery of nature, or more provocatively, use of force. This disciplinary dysfunction is one reason why these groups see their influence erode.)

For example, I have determined that in order to implement a certain query on the data efficiently, it would be best if another query were constant time. One way to do this is to use a database with an index.

However, setting up a database is something that requires extra work on the part of the programmer and so makes it harder to reproduce results. So far I have been keeping my processed email data “in memory” after it is pulled from files on the file system. This means that I have access to the data within the programming environment I’m most comfortable with, without depending on an external or parallel process. Fewer moving parts means that it is simpler to do my work.
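A toy illustration of the two options, with made-up field names: an in-memory dictionary keyed on Message-ID gives constant-time lookup with no extra moving parts, while an SQLite database with an index gives similar lookup behavior at the cost of more setup for anyone reproducing the results.

    # Toy comparison of the two storage options; field names are illustrative.
    import sqlite3

    messages = [
        {"message_id": "<1@list>", "sender": "a@example.org", "subject": "hello"},
        {"message_id": "<2@list>", "sender": "b@example.org", "subject": "re: hello"},
    ]

    # Option 1: keep everything in memory; nothing to install or configure.
    by_id = {m["message_id"]: m for m in messages}
    print(by_id["<2@list>"]["sender"])

    # Option 2: an on-disk database with an explicit index on the lookup key.
    conn = sqlite3.connect("mail.db")
    conn.execute("CREATE TABLE IF NOT EXISTS mail (message_id TEXT, sender TEXT, subject TEXT)")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_mid ON mail (message_id)")
    conn.executemany(
        "INSERT INTO mail VALUES (?, ?, ?)",
        [(m["message_id"], m["sender"], m["subject"]) for m in messages],
    )
    conn.commit()
    print(conn.execute(
        "SELECT sender FROM mail WHERE message_id = ?", ("<2@list>",)
    ).fetchone())
    conn.close()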

So there is a tradeoff between the computational time of the software as it executes and the time and attention it takes me (and others who want to reproduce my results) to set up the environment in which the software runs. Since I am running this as an open source project and hope others will build on my work, I have every reason to be lazy, in a certain sense. Every inconvenience I suffer is one that will be suffered by everyone that follows me. There is a Kantian categorical imperative to keep things as simple as possible for people, to take any complex procedure and replace it with a script, so that others can do original creative thinking, solve the next problem. This is the imperative that those of us embedded in this culture have internalized. (G. Coleman notes that there are many cultures of hacking; I don’t know how prevalent these norms are, to be honest; I’m speaking from my experience.) It is what makes this process of developing our software infrastructure a social one with a modernist sense of progress. We are part of something that is being built out.

There are also social and political considerations. I am building this project intentionally in a way that is embedded within the Scientific Python ecosystem, as they are also my object of study. Certain projects are trendy right now, and for good reason. At the Python Worker’s Party at Berkeley last Friday, I saw a great presentation of Blaze. Blaze is a project that allows programmers experienced with older idioms of scientific Python programming to transfer their skills to systems that can handle more data, like Spark. This is exciting for the Python community. In such a fast moving field with multiple interoperating ecosystems, there is always the anxiety that one’s skills are no longer the best skills to have. Has your expertise been made obsolete? So there is a huge demand for tools that adapt one way of thinking to a new system. As more data has become available, people have engineered new sophisticated processing backends. Often these are not done in Python, which has a reputation for being very usable and accessible but slow to run in operation. Getting the usable programming interface to interoperate with the carefully engineered data backends is hard work, work that Matt Rocklin is doing while being paid by Continuum Analytics. That is sweet.

I’m eager to try out Blaze. But as I think through the questions I am trying to ask about open source projects, I’m realizing that they don’t fit easily into the kind of data processing that Blaze currently supports. Perhaps this is dense on my part. If I knew better what I was asking, I could maybe figure out how to make it fit. But probably, what I’m looking at is data that is not “big”, that does not need the kind of power that these new tools provide. Currently my data fits on my laptop. It even fits in memory! Shouldn’t I build something that works well for what I need it for, and not worry about scaling at this point?

But I’m also trying to think long-term. What happens if and when it does scale up? What if I want to analyze ALL the mailing list data? Is that “big” data?

“Premature optimization is the root of all evil.” – Donald Knuth

An Interview with the Executive Director of the Singularity Institute

Like many people, I first learned about the idea of the technological Singularity while randomly surfing the internet. It was around the year 2000. I googled “What is the meaning of life?” and found an article explaining that at the rate that artificial intelligence was progressing, we would reach a kind of computational apotheosis within fifty years. I guess at the time I thought that Google hadn’t done a bad job at answering that one, all things considered.

Since then, the Singularity’s been in the back of my mind as one of many interesting but perhaps crackpot theories of how things are going to go for us as a human race. People in my circles would dismiss it as “eschatology for nerds,” and then get back to playing Minecraft.

Since then I’ve moved to Berkeley, California, which turns out to be a hub of Singularity research. I’ve met many very smart people who are invested in reasoning about and predicting the Singularity. Though I don’t agree with all of that thinking, this exposure has given me more respect for it.

I have also learned from academic colleagues a new way to dismiss Singularitarians as “the Tea Party of the Information Society.” This piece by Evgeny Morozov is in line with this view of Singularitarianism as a kind of folk ideology used by dot-com elites to reinforce political power. (My thoughts on that piece are here.)

From where I’m standing, Singularitarianism is a controversial and politically important worldview that deserves honest intellectual scrutiny. In October, I asked Luke Muehlhauser, the Executive Director of the Singularity Institute, if I could interview him. I wanted to get a better sense of what the Singularity Institute was about and get material that could demystify Singularitarianism for others. He graciously accepted. Below is the transcript. I’ve added links where I’ve thought appropriate.


SB: Can you briefly describe the Singularity Institute?

LM: The Singularity Institute is a 501(c)(3) charity founded in the year 2000 by Eliezer Yudkowsky and some Internet entrepreneurs who supported his work for a couple years. The mission of the institute is to ensure that the creation of smarter-than-human intelligence benefits society. The central problem, we think, has to do with the fact that very advanced AIs, number one, by default will do things that humans don’t really like, because humans have very complicated goals, so almost all possible goals you can give an AI would mean restructuring the world according to goals that are different than human goals; and number two, the transition from human control of the planet to machine control of the planet may be very rapid, because once you get an AI that is better than humans are at designing AIs and doing AI research, then it will be able to improve its own intelligence in a loop of recursive self-improvement, and very quickly go from roughly human levels of intelligence to vastly superhuman levels of intelligence, with lots of power to restructure the world according to its preferences.

SB: How did you personally get involved and what’s your role in it?

LM: I personally became involved because I was interested in the cognitive science of rationality, of changing one’s mind successfully in response to evidence, and of choosing actions that are actually aimed towards achieving one’s goals. Because of my interest in the subject matter I was reading the website LessWrong.com, which has many articles about those subjects, and there I also encountered related material on intelligence explosion, which is this idea of a recursively self-improving artificial intelligence. And from there I read more on the subject, read a bunch of papers and articles and so on, and decided to apply to be a visiting fellow in April of 2011, or rather that’s when my visiting fellowship began, and then in September of 2011 I was hired as a researcher at the Singularity Institute, and then in November of 2011 I was made its Executive Director.

SB: So, just to clarify, is that Singularity the moment when there’s smarter than human intelligence that’s artificial?

LM: The word Singularity unfortunately has been used to mean many different things, so it is important to always clarify which meaning you are using. For our purposes you could call it the technological creation of greater-than-human intelligence. Other people use it to mean something much broader and more vague, like the acceleration of technology beyond our ability to predict what will happen beyond the Singularity, or something vague like that.

SB: So what is the relationship between the artificial intelligence related question and the personal rationality related questions?

LM: Right, well the reason why the Singularity Institute has long had an interest in both rationality and safety mechanisms for artificial intelligence is that the stakes are very, very high when we start thinking about artificial intelligence risks or catastrophic risks in general, and so we want our researchers not to make the kinds of cognitive mistakes that all researchers and all humans tend to make very often, which are these cognitive biases that are so well documented in psychology and behavioral economics. And so we think it’s very important for our researchers to be really world class in changing their minds in response to evidence, in thinking through what the probability of different scenarios is rather than going with which ones feel intuitive to us, and in thinking clearly about which actions now will actually influence the future in positive ways rather than which actions will accrue status or prestige to ourselves, that sort of thing.

SB: You mentioned that some Internet entrepreneurs were involved in the starting of the organization. Who funds your organization and why do they do that?

LM: The largest single funder of the Singularity Institute is Peter Thiel, who cofounded PayPal and has been involved in several other ventures. His motivations are some concern for existential risk, some enthusiasm for the work of our cofounder and senior researcher Eliezer Yudkowsky, and probably other reasons. Another large funder is Jaan Tallinn, the co-creator of Skype and Kazaa. He’s also concerned with existential risk and the rationality-related work that we do. There are many other funders of the Singularity Institute as well.

SB: Are there other organizations that do similar work?

LM: Yeah, the closest organization to what we do is the Future of Humanity Institute at Oxford University, in the United Kingdom. We collaborate with them very frequently. We go to each other’s conferences, we write papers together, and so on. The Future of Humanity Institute has a broader concern with cognitive enhancement and emerging technologies and existential risks in general, but for the past few years they have been focusing on machine superintelligence, so they’ve been working on the same issues that the Singularity Institute is devoted to. Another related organization is a new one called the Global Catastrophic Risks Institute. We collaborate with them as well. And again, they are not solely focused on AI risks like the Singularity Institute but on global catastrophic risks, and AI is one of them.

SB: You mentioned super-human intelligence quite a bit. Would you say that Google is a super-human intelligence?

LM: Well, yeah, so we have to be very careful about all the words that we are using, of course. What I mean by intelligence is this notion of what is sometimes called optimization power, which is the ability to achieve one’s goals in a wide range of environments and under a wide range of constraints. And so for example, humans have a lot more optimization power than chimpanzees. That’s why even though we are slower than many animals and not as strong as many animals, we have this thing called intelligence that allows us to commence farming and science and build cities and put footprints on the moon. And so it is humans that are steering the future of the globe and not chimpanzees or stronger things like blue whales. So that’s kind of the intuitive notion. There are lots of technical papers that would be more precise. So when I am talking about super-human intelligence, I specifically mean an agent that is as good as or better than humans at just about every skill set that humans possess for achieving their goals. So that would include things like not just mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain, not the kidneys.

SB: To clarify, you mentioned that humans are better than chimpanzees at achieving their goals. Do you mean humans collectively or individually? And likewise for chimpanzees.

LM: Maybe the median chimpanzee versus the median human. There are lots of different ways that you could cash that out. I’m sure there are some humans in a vegetative state that are less effective at achieving their goals than some of the best chimpanzees.

SB: So, whatever this intelligence is, it must have goals?

LM: Yeah, well there are two ways of thinking about this. You can talk about it having a goal architecture that is explicitly written into its code that motivates its behavior. Or, that isn’t even necessary. As long as you can model its behavior as fulfilling some sort of utility function, you can describe its goals that way. In fact, that’s what we do with humans in fields like economics, where you have a revealed-preferences architecture. You measure a human’s preferences over a set of lotteries and from that you can extract a utility function that describes their goals. We haven’t done enough neuroscience to directly represent what humans’ goals are, if they even have such a thing explicitly encoded in their brains.

SB: It’s interesting that you mentioned economics. So is like a corporation a kind of super-human intelligence?

LM: Um, you could model a corporation that way, except that it’s not clear that corporations are better than all humans at all different things. It would be a kind of weird corporation that was better than the best human or even the median human at all the things that humans do. Corporations aren’t usually the best in music and AI research and theorem proving and stock markets and composing novels. And so there certainly are corporations that are better than median humans at certain things, like digging oil wells, but I don’t think there are corporations as good as or better than humans at all things. More to the point, there is an interesting difference here because corporations are made of lots of humans and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of skulls, whereas you can have an AI that is the size of a warehouse. Those kinds of things.

SB: There’s a lot of industry buzz now around the term ‘big data’. I was wondering if there’s any connection between rationality or the Singularity and big data.

LM: Certainly. Big data is just another step. It provides opportunity for a lot of progress in artificial intelligence because very often it is easier to solve a problem by throwing some machine learning algorithms at a ton of data rather than trying to use your human skills for modeling a problem and coming up with an algorithm to solve it. So, big data is one of many things that, along with increased computational power, allows us to solve problems that we weren’t solving before, like machine translation or continuous speech synthesis and so on. If you give Google a trillion examples of translations from English to Chinese, then it can translate pretty well from English to Chinese without any of the programmers actually knowing Chinese.

SB: Does a super-intelligence need big data to be so super?

LM: Um, well… we don’t know because we haven’t built a super-human intelligence yet, but I suspect that big data will in fact be used by the first super-human intelligences. Just because big data came before super-human intelligences, it would make little sense for super-human intelligences not to avail themselves of the available techniques and resources, such as big data. But also, such as more algorithmic insights like Bayes Nets. It would be sort of weird for a super-intelligence to not make use of the past century’s progress in probability theory.

SB: You mentioned before the transition from human control of the world to machine control of the world. How does the disproportionality of access to technology affect that if at all? For example, does the Singularity happen differently in rural India than it does in New York City?

LM: It depends a lot on what is sometimes called the ‘speed of takeoff’–whether we have a hard takeoff or a soft takeoff, or somewhere in between. To explain that, a soft takeoff would be a scenario in which you get human-level intelligence, that is, an AI that is about as good as the median human at doing the things that humans do, including composing music, doing AI research, etc. And then this breakthrough spreads quickly but still at a human time-scale, as corporations replace their human workers with these human-level AIs that are cheaper and more reliable and so on, and there is great economic and social upheaval, and the AIs have some ability to improve their own intelligence but don’t get very far because of the limits of their own intelligence or the available computational resources, and so there is a very slow transition from human control of the world to machines steering the future, where slow is on the order of years to decades.

Another possible scenario, though, is hard takeoff, which is: once you have an AI that is better than humans at finding new insights in intelligence, it is able to improve its own intelligence roughly overnight, to find new algorithms that make it more intelligent, just as we are doing now–humans are finding algorithms that make AIs more intelligent. So now the AI is doing this, and now it has even more intelligence at its disposal to discover breakthroughs in intelligence, and then it has EVEN MORE intelligence with which to discover new breakthroughs in intelligence, and because it’s not being limited by having slow humans in the development loop, it sort of goes from roughly human levels of intelligence to vastly superhuman levels of intelligence in a matter of hours or weeks or months. And then you’ve got a machine that can engage in a global coordinated campaign to achieve its goals and neutralize the human threat to its goals in a way that happens very quickly instead of over years or decades. I don’t know which scenario will play out, so it’s hard to predict how that will go.

SB: It seems like there may be other factors besides the nature of intelligence in play. It seems like to wage a war against all humans, a hard takeoff intelligence, if I’m using the words correctly, would have to have a lot of resources available to it beyond just its intelligence.

LM: That’s right. So, that contributes to our uncertainty about how things play out. For example, does one of the first self-improving human-level artificial intelligences have access to the Internet? Or have people taken enough safety precautions that they keep it “in a box”, as they say? Then the question would be: how good is a super-human AI at manipulating its prison guards so that it can escape the box and get onto the Internet? Hackers always know the weakest point: the quickest way to get into a system is to hack the humans, because humans are stupid. So, there’s that question.

Then there’s questions like: if it gets onto the Internet, how much computing power is there available? Is there enough cheap computing power available for it to hack through a few firewalls and make a billion copies of itself overnight? Or is the computing power required for a super-human intelligence a significant fraction of the computing power available in the world, so that it can only make a few copies of itself. Another question is: what sort of resources are available for converting digital intelligence into physical actions in the human world. For example, right now you can order chemicals from a variety of labs and maybe use a bunch of emails and phone calls to intimidate a particular scientist into putting those chemicals together into a new supervirus or something, but that’s just one scenario and whenever you describe a detailed scenario like that, that particular scenario is almost certainly false and not going to happen, but there are things like that, lots of ways for digital intelligence to be converted to physical action in the world. But how many opportunities are there for that, decades from now, it’s hard to say.

SB: How do you anticipate this intelligence interacting with the social and political institutions around the Internet, supposing it gets to the Internet?

LM: Um, yeah, that’s the sort of situation where one would be tempted to start telling detailed stories about what would happen, but any detailed story would almost certainly be false. It’s really hard to say. I sort of don’t think that a super-human intelligence…if we got to a vastly smarter-than-human intelligence, it seems like it would probably be an extremely inefficient way for it to achieve its goals by way of causing Congress to pass a new bill somehow…that is extremely slow and uncertain…much easier just to invent new technologies and threaten humans militarily, that sort of thing.

SB: So do you think that machine control of the world is an inevitability?

LM: Close to it. Humans are not even close to the most intelligent kind of creature you can have. They are closer to the dumbest creature you can have while also having technological civilization. If you could have a dumber creature with a technological civilization then we would be having this conversation at that level. So it looks like you can have agents that are vastly more capable of achieving their goals in the world than humans are, and there don’t seem to be any in-principle barriers to doing that in machines. The usual objections that are raised, like “Will machines have intentionality?” or “Will machines have consciousness?”, don’t actually matter for the question of whether they will have intelligent behavior. You don’t need intentionality or consciousness to be as good as humans at playing chess or driving cars, and there’s no reason for thinking we need those things for any of the other things that we like to do. So the main factor motivating this progress is the extreme economic and military advantages to having an artificial intelligence, which will push people to develop incrementally improved systems on the way to full-blown AI. So it looks like we will get there eventually. And then it would be a pretty weird situation in which you had agents that were vastly smarter than humans but somehow humans were keeping them in cages or keeping them controlled. If we had chimpanzees running the world and humans in cages, humans would be smart enough to figure out how to break out of cages designed by chimpanzees and take over the world themselves.

SB: We are close to running out of time. There are a couple more questions on my mind. One is: I think I understand that intelligence is being understood in terms of optimization power, but also that for this intelligence to count it has to be better at all things than humans….

LM: Or some large fraction of them. I’m still happy to define super-human intelligence with regard to all things that humans do, but of course for taking over the world it’s not clear that you need to be able to write novels well.

SB: Ok, so the primary sorts of goals that you are concerned about are the kinds of goals that are involved in taking over the world or are instrumental to it?

LM: Well, that’s right. And unfortunately, taking over the world is a very good idea for just about any goal that you have. Even if your goal is to maximize Exxon Mobil profits or manufacture the maximal number of paper clips or travel to a distant star, it’s a very good idea to take over the world first if you can, because then you can use all available resources towards achieving your goal to the max. And also, any intelligent AI would correctly recognize that humans are the greatest threat to its achieving its goals, because we will get skittish and worried about what it’s doing and try to shut it off. An AI will of course recognize that that’s true and, if it is at all intelligent, will first seek to neutralize the human threat to its achieving its goals.

SB: What about intelligences that sort of use humans effectively? I’m thinking of an intelligence that was on the Internet. The Internet requires all these human actions for it to be what it is. So why would it make sense for an intelligence whose base of power was the Internet to kill all humans?

LM: Is the scenario you are imagining a kind of scenario where the AI can achieve its goals better with humans rather than neutralizing humans first? Is that what you’re asking?

SB: Yeah, I suppose.

LM: The issue is that unless you define the goals very precisely in terms of keeping humans around or benefiting humans–remember that an AI is capable of doing just about anything that humans can do–there aren’t really things that it would need humans for, unless the goal structure were specifically defined in terms of benefitting biological humans. And that’s extremely difficult to do. For example, if you found a precise way to specify “maximize human pleasure” or welfare or something, it might just mean that the AI plugs us all into heroin drips and we never do anything cool. So it’s extremely difficult to specify in math–because AIs are made of math–what it is that humans want. That gets back to the point I was making at the beginning about the complexity and fragility of human values. It turns out we don’t just value pleasure; we have this large complex of values, and indeed different humans have different values from each other. So the problem of AI sort of makes an honest problem of longstanding issues in moral philosophy and value theory and so on.

SB: Ok, one last question, which is: suppose AI is taking off, and we notice that it’s taking off, and the collective intelligence of humanity working together is pitted against this artificial intelligence. Say this happens tomorrow. Who wins?

LM: Well, I mean it depends on so many unknown factors. It may be that if the intelligence is sufficiently constrained and can only improve its intelligence at a slow rate, we might actually notice that one of them is taking off and be able to pull the plug and shut it down soon enough. But that puts us in a very vulnerable state, because if one group has an AI that is capable of taking off, it probably means that other groups are only weeks or months or years or possibly decades behind. And will the correct safety precautions be taken the second, third, and twenty-fifth time?

I thank Luke Muehlhauser for making the time for this interview. I hope to post my reflections on it at a later date.

Another rant about academia and open source

A few weeks ago I went to a great talk by Victoria Stodden about how there’s a crisis of confidence in scientific research that depends on heavy computing. Long story short, because the data and code aren’t openly available, the results aren’t reproducible. That means there’s no check on prior research, and bad results can slip through and be the foundation for future work. This is bad.

Stodden’s solution was to push forward within the scientific community and possibly in legislation (i.e., as a requirement on state-funded research) for open data and code in research. Right on!

Then, something intriguing: somebody in the audience asked how this relates to open source development. Stodden, who just couldn’t stop saying amazing things that needed to be said that day, answered by saying that scientists have a lot to learn from the “open source world”, because they know how to build strong communities around their (open) work.

Looking around the room at this point, I saw several scientists toying with their laptops. I don’t think they were listening.

It’s a difficult thing coming from an open source background and entering academia, because the norms are close, but off.

The other day I wrote in an informal departmental mailing list a criticism and questions about a theorist with a lot of influence in the department, Bruno Latour. There were a lot of reactions to that thread that ranged pretty much all across the board, but one of the surprising reactions I got was along the lines of “I’m not going to do your work for you by answering your question about Latour.” In other words, RTFM. Except, in this case, “the manual” was a book or two of dense academic literature in a field that I was just beginning to dip into.

I don’t want to make too much of this response, since there were a lot of extenuating circumstances, but it did strike me as an indication of one of the cultural divides between open source development and academic scholarship. In the former, you want as many people as possible to understand and use your cool new thing, because that enriches your community and makes you feel better about your contribution to the world. For some kinds of scholars, being the only one who understands a thing is a kind of distinction that gives you pride and job opportunities, so you don’t really want other people to know as much as you do about it.

Similarly for computationally heavy sciences: if you think your job is to get grants to fund your research, you don’t really want anybody picking through it and telling you your methodology was busted. In an Internet Security course this semester, I’ve had the pleasure of reading John McHugh’s Testing Intrusion Detection Systems: A Critique of the 1998 and 1999 DARPA Off-line Intrusion Detection System Evaluation as Performed by Lincoln Laboratory. In this incredible paper, McHugh explains why a particular DARPA-funded Lincoln Labs Intrusion Detection research paper is BS, scientifically speaking.

In open source development, we would call McHugh’s paper a bug report. We would say, “McHugh is a great user of our research because he went through and tested for all these bugs, and even has recommendations about how to fix them. This is fantastic! The next release is going to be great.”

In the world of security research, Lincoln Labs complained to the publisher and got the article pulled.

Ok, so security research is a new field with a lot of tough phenomena to deal with and not a ton of time to read up on 300 years of epistemology, philosophy of science, statistical learning theory, or each others’ methodological critiques. I’m not faulting the research community at all. However, it does show some of the trouble that happens in a field that is born out of industry and military funding concerns without the pretensions or emphasis on reproducible truth-discovery that you get in, say, physics.

All of this, it so happens, is what Lyotard describes in his monograph, The Postmodern Condition (1979). Lyotard argues that because of cybernetics and information technologies, because of Wittgenstein, because of the “collapse of metanarratives” that would make anybody believe in anything silly like “truth”, there’s nothing left to legitimize knowledge except Winning.

You can win in two ways: you can research something that helps somebody beat somebody else up or consume more, so that they give you funding. Or you can win by not losing, by pulling some wild theoretical stunt that puts you out of range of everybody else so that they can’t come after you. You become good at critiquing things in ways that sound smart, and tell people who disagree with you that they haven’t read your canon. You hope that if they call your bluff and read it, they will be so converted by the experience that they will leave you alone.

Some, but certainly not all, of academia seems like this. You can still find people around who believe in epistemic standards: rational deduction, dialectical critique resolving to a consensus, sound statistical induction. Often people will see these as just a kind of meta-methodology in service to a purely pragmatic ideal of something that works well or looks pretty or makes you think in a new way, but that in itself isn’t so bad. Not everybody should be anal about methodology.

But these standards are in tension with the day to day of things, because almost nobody really believes that they are after true ideas any more. It’s so easy to be cynical or territorial.

What seems to be missing is a sense of common purpose in academic work. Maybe it’s the publication incentive structure, maybe it’s because academia is an ideological proxy for class or sex warfare, maybe it’s because of a lot of big egos, maybe it’s the collapse of meta-narratives.

In FOSS development, there’s a secret ethic that’s not particularly well articulated by either the Free Software Movement or the Open Source Initiative, but which I believe is shared by a lot of developers. It goes something like this:

I’m going to try to build a totally great new thing. It’s going to be a lot of work, but it will be worth it because it’s going to be so useful and cool. Gosh, it would be helpful if other people worked on it with me, because this is a lonely pursuit and having others work with me will help me know I’m not chasing after a windmill. If somebody wants to work on it with me, I’m going to try hard to give them what they need to work on it. But hell, even if somebody tells me they used it and found six problems in it, that’s motivating; that gives me something to strive for. It means I have (or had) a user. Users are awesome; they make my heart swell with pride. Also, bonus, having lots of users means people want to pay me for services or hire me or let me give talks. But it’s not like I’m trying to keep others out of this game, because there is just so much that I wish we could build and not enough time! Come on! Let’s build the future together!

I think this is the sort of ethic that leads to the kind of community building that Stodden was talking about. It requires a leap of faith: that your generosity will pay off and that the world won’t run out of problems to be solved. It requires self-confidence because you have to believe that you have something (even something small) to offer that will make you a respected part of an open community without walls to shelter you from criticism. But this ethic is the relentlessly spreading meme of the 21st century and it’s probably going to be victorious by the start of the 22nd. So if we want our academic work to have staying power we better get on this wagon early so we can benefit from the centrality effects in the growing openly collaborative academic network.

I heard David Weinberger give a talk last year on his new book Too Big to Know, in which he argued that “the next Darwin” is going to be actively involved in social media as a research methodology. Tracing their research notes will involve an examination of their inbox and Facebook feed to see what conversations were happening, because so much knowledge transfer is happening socially and digitally, and it’s faster and more contextual than somebody spending a weekend alone reading books in a library.

He’s right, except maybe for one thing, which is that this digital dialectic (or pluralectic) implies that “the next Darwin” isn’t just one dude, Darwin, with his own ‘-ism’ and pernicious Social Darwinist adherents. Rather, it means that the next great theory of the origin of species is going to be built by a massive collaborative effort in which lots of people will take an active part. The historical record will show their contributions not just with the clumsy granularity of conference publications and citations, but with the minute granularity of thousands of traced conversations. The theory itself will probably be too complicated for any one person to understand, but that’s OK, because it will be well architected and there will be plenty of domain experts to go to if anyone has problems with any particular part of it. And it will be growing all the time and maybe competing with a few other theories. For a while people might have to dual boot their brains until somebody figures out how to virtualize Foucauldean Quantum Mechanics on an Organic Data Splicing ideological platform, but one day some crazy scholar-hacker will find a way.

“Cool!” they will say, throwing a few bucks towards the Kickstarter project for a musical instrument that plays to the tune of the uncollapsed probabilistic power dynamics playing out between our collated heartbeats.

Does that future sound good? Good. Because it’s already starting. It’s just an evolution of the way things have always been, and I’m pretty sure, based on what I’ve been hearing, that it’s a way of doing things that’s picking up steam. It’s just not “normal” yet. Generation gap, maybe. That’s cool. At the rate things are changing, it will be here before you know it.

The Global South does IT better

A few weeks ago I visited the offices of Peru’s Comité Coordinador de la Infraestructura de Datos Espaciales del Perú, or IDEP, who are responsible for building that nation’s spatial data infrastructure system.

They have built a very impressive system with comparatively few resources using a largely open source stack of software–MapServer, MapBender, Mapfish, GeoNetwork, Joomla–and are actively looking for ways to innovate further.
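Part of what makes a stack like this composable is that the mapping pieces speak open OGC standards (WMS for map images, CSW for catalog metadata), so any client can talk to any server. As a rough illustration only, here is a minimal Python sketch using the OWSLib library to request a map image from a WMS server such as MapServer; the endpoint URL, layer name, and bounding box below are hypothetical placeholders, not IDEP’s actual service:

    # Minimal sketch: fetch a map image from a WMS endpoint with OWSLib.
    # The URL, layer name, and bounding box are hypothetical placeholders.
    from owslib.wms import WebMapService

    wms = WebMapService("http://example.org/cgi-bin/mapserv?map=peru.map",
                        version="1.1.1")

    # List the layers the server advertises in its capabilities document.
    for name, layer in wms.contents.items():
        print(name, "-", layer.title)

    # Request a PNG of one layer over a rough bounding box around Peru.
    response = wms.getmap(
        layers=["departamentos"],        # hypothetical layer name
        srs="EPSG:4326",
        bbox=(-81.5, -18.5, -68.5, 0.5), # (min lon, min lat, max lon, max lat)
        size=(600, 800),
        format="image/png",
        transparent=True,
    )

    with open("peru_departamentos.png", "wb") as out:
        out.write(response.read())

The same few lines would work against any standards-compliant WMS, which is part of why a mixed stack like theirs can grow piecemeal without anyone asking a vendor for permission.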

In a meeting there, Max Taico, from the National Office of Electronic Government, explained why they had turned to open source software. It wasn’t just the fact that it was free–ESRI gives them free licenses of ArcGIS Server.

Open source software works for them because their government procurement practices are slow and hard to work with. But with free software (‘software libre’, as they call it), they are able to just install things on a server and get it to work. Indeed, while we were there they logged us onto the server and invited us to look around at the system and install new software if we thought it would be helpful.

Compared to the heavy bureaucracies we are used to working with, it would be an understatement to call this “refreshing.” Governments (and international organizations) based in the U.S. maintain strict control over their software inventory and often stipulate what software is or is not allowed on their computers.

This is a crippling policy in a world full of great free software. It’s appalling to guess how much time (and, hence, money) is wasted by, say, the World Bank’s commitment to using outdated browser and office software.

Meanwhile, in Lima, a project with almost no permanent staff was able to develop a system that is truly cutting edge. The government IT culture there is contiguous with the global hacker culture, which is interested in getting things done with as few obstacles as possible.

An inspiring thought is that because this way of doing things is so much more effective, the Global North is learning that it should change its ways. In her keynote address to this week’s Understanding Risk conference, the World Bank’s CIO Shelley Leibowitz announced to an applauding audience that they were going to drop their mandated use of the universally loathed Lotus Notes.

Times are changing. It’s nice to know that part of that change is a long-due change in leadership.

Culture shock

I have the privilege of attending FOSS4G 2008 (Free and Open Source Software for Geospatial) in Cape Town this year as an engineer for OpenGeo. This is my first time attending a technology conference, so I came with few expectations. But what I had gathered from colleagues who have attended in the past was that this conference is primarily for hackers and open source entrepreneurs who are committed to the free software paradigm and to bringing it to the GIS world. The event is put on by OSGeo, which is unguarded about its goal to piss off ESRI, the monopolistic proprietary GIS giant which we believe ill-serves its customers and, indirectly, the general public. (Author’s note: Please see comments below and retraction, here.)

So far, most of the people I have met are coming to the conference from this angle, and it creates an exciting atmosphere.  What I didn’t understand until today was that there are other major groups attending FOSS4G this year.

FOSS4G is being held in South Africa this year because it is co-sponsored by GISSA, the Geo-Information Society of South Africa, which has contributed a humanitarian focus to an otherwise technical conference. The first few talks given today were sober ones about the crises of developing nations, beginning with the health and crime problems in Cape Town itself. The theme of the conference is oddly cautious: “Open Source Geospatial: An Option for Developing Nations.” GIS professionals from government and NGOs have been invited from developing countries around the world, with a couple hundred from South Africa itself.

The result is a strange cultural mix. The FOSS crowd is lively, reliably laughing and applauding when a speaker makes a dig at proprietary software (PowerPoint, Internet Explorer, Apple). Their speeches are deliberately humorous and irreverent. After Ed Parsons gave a rather cluelessly untargeted talk about how Google’s (proprietary) products are awesome and how easy it is for people to use them to make (proprietary) data, the crowd raked him over the coals during the Q&A.

The government and GIS groups must find this strange.  Their tone was consistently more serious, more cautious, and less confrontational.  The pace of their presentations was slower.  They presented their tragic facts and their strategies to overcome them without the exuberance and confidence that this was their time to rally.

The point of bringing these two groups together is so that groups like GISSA can evaluate the appropriateness of geospatial FOSS for their very serious needs. In many ways it’s great that they can see the FOSS developers in their element, since the transparency of the open source process and the enthusiasm of its participants are among the software’s selling points. But on the other hand, I worry that the two groups are speaking different languages. I’ll be interested to see whether there’s any convergence by the end of the week.

Web class campaign finance

Sean Tevis, journalist-turned-information-architect, is running for Kansas State Representative for District 15. Brilliantly, he posted this webcomic about his campaign in the style of XKCD, asking for donations to reach his goal of raising $26,000. Last Wednesday, it hit Boing Boing. Shortly thereafter, the web site went down under the traffic. Within two days, donations had far exceeded his target, and people across the country are following his progress.

Guys like Paul Newell should learn from this guy about how to run an internet campaign! So what’s his secret?

A simplistic answer would just be that Tevis “understands the internet.”  He understands the power of an honest, witty, conversational blog.  He knows that people on the internet will self-organize around a good cause if it appeals to them.  This explanation totally ignores the mechanism of his success though.

Tevis’ campaign funding is ‘grassroots,’ but grassroots campaign financing works by harnessing class or identity interests.  Obama’s grassroots funding comes largely from the disposable income of his wine-track supporters.  Tevis’ funding comes from a narrower base.  It comes from readers of Boing Boing.  It comes from people who are turned on by an homage to XKCD.

Sociologist Manuel Castells has argued that as governments lose the ability to provide for the needs of their citizens, people will organize around other, non-national identities that give their lives meaning. Sometimes these identities are tied to a particular region, like the Basque ethnic identity. But other identities, like the global feminist movement and radical Islam, are indifferent to regional and state boundaries.

Tevis’ campaign funding illustrates the mobilization of the bearers of a new identity like these others–the identity shared by lots of the people who are active in the most forward-pointing parts of the web. There is a strong culture there, with its own communicative style, aesthetic sensibility, and core politics. I will call the bearers of this culture the ‘web class’ (although I don’t love the term and welcome alternative suggestions).

Don’t believe me?  Perhaps you think that the majority of the donors were rallying around a general progressive agenda, accessible to all?  I think the title of Cory Doctorow’s explosive shout out says it all:

Progressive geek looking for 3,000 people to help him win Kansas election against dinosauric anti-science/pro-surveillance dude

Yes, progressivism gets a mention.  But the clinching trifecta is:

  • Tevis is pro-science.  The web class loves science, because they know the internet owes everything to science and see the improvements science can make in their lives each day.
  • Tevis is anti-surveillance.  The web class is sensitive to issues of surveillance and privacy because their day-to-day life is both highly exposed and at risk of digital attack.  The web class is constantly renegotiating what is public or private, and is loath to lose control over that aspect of their lives.
  • Tevis is a geek.  “Geek” is entirely an identity label that denotes a shared outlook of creative practicality, as well as an independence from/rejection by “the mainstream.”  The web class is largely constituted by geeks, and in this context the label is an honorific:  “He is one of us.”

Like Obama’s supporters, the web class is made up largely of young professionals and students who can spend their parents’ money.  I’m pretty sure a subset of them was what kept Ron Paul’s campaign alive for so long.  In addition, because geography is comparatively irrelevant to the web, it is just as irrelevant to web class politics.  (Several potential donors to Tevis’ campaign–for a Kansas state government position–were legally unable to donate because they weren’t U.S. citizens.)  This makes them an excellent base for remotely financing elections.  And if this sort of thing keeps up, the web class will have some serious political clout across the U.S. for years to come.

Is this a good thing?

I’m ambivalent.  On principle, I object to the heavy role of money in politics, even if that money is ‘grassroots.’  In this case, the fact that most of Tevis’ donors are likely from out of state gives me additional worry.  On the other hand, I appreciate Tevis’ politics, and believe that, for example, the project of science and scientific education is one that transcends and supersedes the project of democratic legitimacy.  Part of me feels strongly that the web class should not hesitate to take politics into its own hands.  I will likely donate to his campaign anyway.  What do you think? Comments are very welcome.