Digifesto

Category: social software

a constitution written in source code

Suppose we put aside any apocalyptic fears of instrumentality run amok, make peace between The Two Cultures of science and the humanities, and suffer gracefully the provocations of the critical without it getting us down.

We are left with some bare facts:

  • The size, and therefore the complexity, of society is increasing all the time.
  • Managing that complexity requires information technology, and specifically technology for computation and its various interfaces.
  • The information processing already being performed by computers in the regulation and control of society dwarfs anything any individual can accomplish.
  • While we maintain the myth of human expertise and human leadership, these are competitive only when assisted to a great degree by a thinking machine.
  • Political decisions, in particular, either are already or should be made with the assistance of data processing tools commensurate with the scale of the decisions being made.

This is a description of the present. To extrapolate into the future, there is only a thin consensus of anthropocentrism between us and the conclusion that we do not so much govern machines as they govern us.

This should not shock us. The infrastructure that provides us so much guidance and potential in our daily lives–railroads, electrical wires, wifi hotspots, satellites–is all of human design and made in service of human interests. While these design processes were never entirely democratic, we have made it thus far with whatever injustices have occurred.

We no longer have the pretense that making governing decisions is the special domain of the human mind. Concerns about the possibly discriminatory power of algorithms concede this point. So public concern now scrutinizes the private companies whose software systems make so many decisions for us in ways that are obscure or unpredictable. The profit motive, it is suspected, will not serve customers of these services well in the long run.

So far policy-makers have taken a passive stance towards the problem of algorithmic control by reacting to violations of human dignity with a call for human regulation.

What is needed is a more active stance.

Suppose we were to start again in founding a new city. Or a new nation. Unlike every founder before us, we would have the option to write its founding Constitution in source code. It would be logically precise and executable without an expensive bureaucratic apparatus. It would be scalable in ways that can be mathematically confirmed. It could be forked and experimented with by diverse societies across the globe. Its procedure for amendment would be written into itself, securing democracy by protocol design.
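To make this concrete, here is a toy Python sketch. Everything in it is hypothetical (the names, the clauses, the threshold), and a real constitution-in-code would need secure identity and voting; it only illustrates the idea of an amendment procedure written into the document itself:

    # A toy, hypothetical "constitution in source code".
    # Its amendment procedure is itself one of its clauses.

    class Constitution:
        def __init__(self, clauses, amendment_threshold=2/3):
            self.clauses = dict(clauses)            # clause name -> rule text
            self.amendment_threshold = amendment_threshold

        def amend(self, name, new_text, votes_for, votes_total):
            """Adopt an amendment only if it clears the supermajority
            threshold that the constitution itself specifies."""
            if votes_total == 0:
                return False
            if votes_for / votes_total >= self.amendment_threshold:
                self.clauses[name] = new_text
                return True
            return False

    constitution = Constitution(
        {"speech": "No rule shall abridge the freedom of expression."}
    )
    # A hypothetical amendment passes with 70 of 100 votes (at least 2/3):
    adopted = constitution.amend(
        "assembly", "Citizens may freely assemble.",
        votes_for=70, votes_total=100)
    print(adopted, constitution.clauses)

The point is not these few lines themselves, but that the rules for changing the rules would be executable and inspectable by anyone.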

responding to @npdoty on ethics in engineering

Nick Doty wrote a thorough and thoughtful response to my earlier post about the Facebook research ethics problem, correcting me on a number of points.

In particular, he highlights how academic ethicists like Floridi and Nissenbaum have an impact on industry regulation. It’s worth reading for sure.

Nick writes from an interesting position. Since he works for the W3C himself, he is closer to the policy decision makers on these issues. I think this, as well as his general erudition, gives him a richer view of how these debates play out. Contrast that with the debate that happens for public consumption, which is naturally less focused.

In trying to understand scholarly work on these ethical and political issues of technology, I’m struck by how differences in where writers and audiences are coming from lead to communication breakdown. The recent blast of popular scholarship about ‘algorithms’, for example, is bewildering to me. I had the privilege of learning what an algorithm was fairly early. I learned about quicksort in an introductory computing class in college. While certainly an intellectual accomplishment, quicksort is politically quite neutral.
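For readers who never took that class, here is the textbook algorithm as a short Python sketch (simplified for readability rather than efficiency; a production sort would partition in place):

    def quicksort(xs):
        """Sort a list by recursively partitioning around a pivot."""
        if len(xs) <= 1:
            return xs                    # zero or one elements: already sorted
        pivot, rest = xs[0], xs[1:]
        smaller = [x for x in rest if x <= pivot]
        larger = [x for x in rest if x > pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]

It sorts a list. That is all it does; nothing about it favors anyone.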

What’s odd is how certain contemporary popular scholarship seeks to introduce an unknowing audience to algorithms not via their basic properties–their pseudocode form, their construction from more fundamental computing components, their running time–but via their application in select and controversial contexts. Is this good for public education? Or is it capitalizing on the vagaries of public attention?

My democratic values are being sorely tested by the quality of public discussion on matters like these. I’m becoming more content with the fact that in reality, these decisions are made by self-selecting experts in inaccessible conversations. To hope otherwise is to downplay the genuine complexity of technical problems and the amount of effort it takes to truly understand them.

But even if I can sit complacently with my own expertise, that does not seem like a political solution. The FCC’s public comment process, which normally does not elicit a mass response, was just tested by Net Neutrality activists. I see from the linked article that other media-related requests for comments were similarly swamped.

The crux, I believe, is the self-referential nature of the problem: the mechanics of information flow among the public are both what’s at stake (in terms of technical outcomes) and, when the process is democratic, what drives it to begin with. This is a recipe for a chaotic process. Perhaps there are no attractors or steady states.

Following Rash’s analysis of Habermas and Luhmann’s disagreement as to the fate of complex social systems, we’ve got at least two possible outcomes for how these debates play out. On the one hand, rationality may prevail. Genuine interlocutors, given enough time and with shared standards of discourse, can arrive at consensus about how to act–about what technical standards to adopt, or what patches to accept into foundational software. On the other hand, the layering of those standards on top of each other, and the reaction of users to them as they build layers of communication on top of the technical edifice, can create further irreducible complexity. With that complexity come further ethical dilemmas and political tensions.

A good desideratum for a communications system that is used to determine the technicalities of its own design is that its algorithms should intelligently manage the complexity of arriving at normative consensus.

This is truly unfortunate

This is truly unfortunate.

In one sense, this indicates that the majority of Facebook users have no idea how computers work. Do these Facebook users also know that their word processor, their web browser, and their Amazon purchases are all mediated by algorithms? Do they understand that what computers do–more or less all they ever do–is mechanically execute algorithms?

I guess not. This is a massive failure of the education system. Perhaps we should start mandating that students read this well-written HowStuffWorks article, “What is a computer algorithm?” That would clear up a lot of confusion, I think.

How to tell the story about why stories don’t matter

I’m thinking of taking this seminar because I’m running into the problem it addresses: how do you pick a theoretical lens for academic writing?

This is related to a conversation I’ve found myself in repeatedly over the past weeks. A friend who studied Rhetoric insists that the narrative and framing of history is more important than the events and facts. A philosopher friend minimizes the historical impact of increased volumes of “raw footage”, because ultimately it’s the framing that will matter.

Yesterday I had the privilege of attending Techraking III, a conference put on by the Center for Investigative Reporting with the generous support and presence of Google. It was a conference about data journalism. The popular sentiment within the conference was that data doesn’t matter unless it’s told with a story, a framing.

I find this troubling because while I pay attention to this world and the way it frames itself, I also read the tech biz press carefully, and it tells a very different narrative. Data is worth billions of dollars. Even data exhaust, the data fumes that come from your information processing factory, can be recycled into valuable insights. Data is there to be mined for value. And if you are particularly genius at it, you can build an expert system that acts on the data without needing interpretation. You build an information processing machine that acts according to mechanical principles that approximate statistical laws, and these machines are powerful.

As social scientists realize they need to be data scientists, and journalists realize they need to be data journalists, there seems to be in practice a tacit admission of the data-driven counter-narrative. This tacit approval is contradicted by the explicit rhetoric that glorifies interpretation and narrative over data.

This is an interesting kind of contradiction, as it takes place as much in the psyche of the data scientist as anywhere else. It’s like the mouth doesn’t know what the hand is doing. This is entirely possible since our minds aren’t actually that coherent to start with. But it does make the process of collaboratively interacting with others in the data science field super complicated.

All this comes to a head when the data we are talking about isn’t something simple like sensor data about the weather but rather something like text, which is both data and narrative simultaneously. We intuitively see the potential of treating narrative mechanically, statistically. We certainly see the effects of this in our daily lives. This is what the most powerful organizations in the world do all the time.
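A minimal illustration of what treating narrative statistically can mean: reduce a text to word frequencies and let the counts speak (a toy sketch, not a description of what any particular organization does):

    from collections import Counter

    text = "the story matters and the data matters and the story is data"
    counts = Counter(text.split())      # the narrative, reduced to quanta
    print(counts.most_common(3))        # [('the', 3), ('story', 2), ('matters', 2)]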

The irony is that the interpretivists, who are so quick to deny technological determinism, are the ones most vulnerable to being blindsided by “what technology wants.” Humanities departments are being slowly phased out, their funding cut. Why? Do they have an explanation for this? If interpretation/framing were as efficacious as they claim, they would be philosopher kings. So their sociopolitical situation contradicts their own rhetoric and ideology. Meanwhile, journalists who would like to believe that it’s the story that matters are, for the sake of job security, being corralled into classes to learn CSS, the stylesheet language that determines, mechanically, the logic of formatting and presentation.

Sadly, neither mechanists nor interpretivists have much of an interest in engaging this contradiction. This is because interpretivists chase funding by reinforcing the narrative that they are critically important, and the work of mechanists speaks for itself in corporate accounting (an uninterpretive field) without explanation. So this contradiction falls mainly into the laps of those coordinating interaction between tribes. Managers who need to communicate between engineering and marketing. University administrators who have to juggle the interests of humanities and sciences. The leadership of investigative reporting non-profits who need to justify themselves to savvy foundations and who are removed enough from particular skillsets to be flexible.

Mechanized information processing is becoming the new epistemic center. (Forgive me:) the Google supercomputer approximating statistics has replaced Kantian transcendental reason as the grounds for bourgeois understanding of the world. This is threatening, of course, to the plurality of perspectives that do not themselves internalize the logic of machine learning. Where machine intelligence has succeeded, then, it has been by juggling this multitude of perspectives (and frames) through automated, data-driven processes. Machine intelligence is not comprehensible to lay interpretivism. Interestingly, lay interpretivism isn’t comprehensible yet to machine intelligence either–natural language processing has not yet advanced so far. It treats our communications like we treat ants in an ant farm: a blooming buzzing confusion of arbitrary quanta, fascinatingly complex for patterns that we cannot see. And when it makes mistakes–and it does, often–we feel its effects as a structural force beyond our control. A change in the user interface of Facebook that suddenly exposes drunken college photos to employers and abusive ex-lovers.

What theoretical frame is adequate to tell this story, the story that’s determining the shape of knowledge today? For Lyotard, the postmodern condition is one in which metanarratives about the organization of knowledge collapse and leave only politics, power, and language games. The postmodern condition has gotten us into our present condition: industrial machine intelligence presiding over interpretivists battling in paralogical language games. When the interpretivists strike back, it looks like hipsters or Weird Twitter–paralogy as a subculture of resistance that can’t even acknowledge its own role as resistance for fear of recuperation.

We need a new metanarrative to get out of this mess. But what kind of theory could possibly satisfy all these constituents?

Aristotelian legislation and the virtual community

I dipped into Aristotle’s Politics today and was intrigued by William Ellis’ introduction.

Ellis claims that in Aristotle’s day, you would call on a legislator as an external consultant when you set about founding a new city or colony. There were a great variety of constitutions available to be studied. You would study them to become an expert in how to design a community’s laws. Classical political philosophy was part of the very real project of starting new communities supporting human flourishing.

We see a similar situation with on-line communities today. If cyberspace was once an electronic frontier, it has been bulldozed and is now a metropolis with suburbs and strip malls. But there is still innovation in on-line social life, as users migrate between social networking services and new social media infrastructure gets built.

If Lessig is right and “code is law”, then the variety of virtual communities and the opportunity to found new ones renews the role of the Aristotelian legislator. We can ask questions like: should an on-line community be self-governing or run by an aristocracy? How can it sustain itself economically, or defend itself in (cyber-)wars? How can it best promote human flourishing? The arts? Justice?

It would be easy to trivialize these possibilities by noting that virtual life is not real life. But that would underestimate the shift that is occurring as economic and political engagement moves on-line. In recognition and anticipation of these changes, philosophy has a practical significance in comprehensive design.

Holy War on Kiva more fun than throwing virtual sheep

Poking around web-enabled microlending organization Kiva’s website, something that stuck out immediately was the “Lending Teams” feature, which prominently shows which teams have been most involved in micro-financing.

There is a holy war going on between Christians and Atheists to prove who are the better people. Atheists are winning.

Kiva president Premal Shah explains the phenomenon. Lending teams make Kiva fun, because (by implication) trash talking your ideological enemies is fun.

This is important, Shah notes, if Kiva is competing primarily for people’s attention. Since a lot of microloans are paid back, the cost of participation (for people with enough liquidity) is negligible. So what prevents people from doing more microlending is that they are too preoccupied throwing virtual sheep at each other, for example.
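A back-of-the-envelope sketch of that point, using an entirely hypothetical repayment rate and loan size (substitute the real figures to taste):

    loan = 25.00              # a loan share in dollars (illustrative)
    repayment_rate = 0.95     # hypothetical; not Kiva's actual statistic
    expected_cost = loan * (1 - repayment_rate)
    print(expected_cost)      # 1.25 dollars of expected cost per loan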

One hopes that no matter whether the Atheists or Christians are right, Farmville burns in the End Times.

Breeding adaptations in the world of facebook

Commenters skb and Matt Cooperrider have asked for an example that justifies my claim in “Social Killer App” that “My generation has done back flips to meet the socialware demands of Facebook.” An example came up in a conversation I overheard on the subway yesterday.

Two women, apparently close friends, were discussing a man whom one had been involved with. This was a complicated relationship; it had been long distance for some time, and now they were closer together, but still he did not seem to have the time for her that she expected from him. Her mother had suggested that perhaps she was not the only woman in the man’s life, but there was no real evidence for that. He had told her that she deserves better, but still expressed interest in her.

“So what do you want?” asked the patient friend.

“Well,” said the other, “I guess what I want is…. Well, when I changed my Relationship Status [on Facebook], I took off that I was single. I didn’t say I was in any relationship or anything, but I’m not single. I’m committed to seeing where this thing goes. And I wish that he would do the same.”

Without being too glib in reading into this example, I think it demonstrates how today even very personal and subtle social relations get reified in social network technology, and how there is an admittedly heterogeneous social expectation that one use those technologies in meaningful ways.

MIT Collaboratorium

Matt Cooperrider pointed me towards this YouTube video on MIT’s Center for Collective Intelligence Collaboratorium project:

In my opinion, their design is too centralized and too top-down; but I nevertheless give these folks a tremendous amount of credit, because I believe that a solution to the collaborative deliberation problem they are trying to solve could save the world. It could provide the technological foundation for a Habermasian ideal speech situation. If done right–and MIT doesn’t seem far off from a great first step–it would be the social killer app.

Filtering feeds

About a week ago Subtraction made a long post complaining about the main problem of feed aggregators:

No matter how much I try to organize it, it’s always in disarray, overflowing with unread posts and encumbered with mothballed feeds. … The whole process frustrates me though, mostly because I feel like I shouldn’t have to do it at all. The software should just do it for me.

These are my reactions to this, roughly in order:

  • I feel the pain of feed bloat myself, and know many others that do. It’s another symptom of internet-enabled information explosion.
  • It’s amazing that we live in an era when a feeling of entitlement about our interactions with web technology isn’t seen as ridiculous outright. It’s true–it does feel surprising that somebody smart hasn’t solved this problem for everybody yet.
  • The reason why it hasn’t been solved yet is probably because it’s a tough problem. It’s not easy to program a computer to know What I Find Interesting…

…or is it? This is, after all, what various web services have fought to do well for us ever since the dawn of the search engine. And the results are pretty good right now. So there must be a good way to solve this problem.

As far as I can tell, there are two successful ways of doing smart filtering-for-people on the internet, both of which are being applied to feeds:

  • Machine filtering: algorithms that learn from features of the content itself what a given person is likely to find interesting.
  • Social filtering: surfacing what other people, through their ratings, votes, and shares, have already judged worthwhile.

The most interesting solutions to these kinds of problems are collaborative filtering algorithms that combine both methods. This is why Gmail’s spam filter is so good: it uses the input of its gillions of users to collaboratively train its algorithmic filter. StumbleUpon is probably my favorite implementation of this for general web content–although its closed-ness spooks me out.
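For the curious, here is a toy user-based collaborative filter in Python: score items a user hasn’t seen by the similarity-weighted votes of other users. This is a generic sketch of the technique, with made-up data, not the actual algorithm behind Gmail or StumbleUpon:

    # Made-up ratings: 1 = liked the item, 0 = did not.
    ratings = {
        "alice": {"item1": 1, "item2": 1, "item3": 0},
        "bob":   {"item1": 1, "item2": 1, "item4": 1},
        "carol": {"item3": 1, "item4": 0},
    }

    def similarity(a, b):
        """Fraction of co-rated items on which two users agree."""
        shared = set(ratings[a]) & set(ratings[b])
        if not shared:
            return 0.0
        agreements = sum(ratings[a][i] == ratings[b][i] for i in shared)
        return agreements / len(shared)

    def recommend(user):
        """Rank unseen items by other users' similarity-weighted votes."""
        scores = {}
        for other in ratings:
            if other == user:
                continue
            w = similarity(user, other)
            for item, liked in ratings[other].items():
                if item not in ratings[user]:
                    scores[item] = scores.get(item, 0.0) + w * liked
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("alice"))   # ['item4']: bob, who agrees with alice, liked it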

We’re working on applying collaborative filtering methods to feeds at The Open Planning Project. Specifically, Luke Tucker has been developing Melkjug, an open source collaborative filtering feed aggregator. It’s currently in version 0.2.1. To get involved in the project, check out the Melkjug Project page on OpenPlans.org.

Social Killer App

The term “killer app” has come to mean any particularly kickass software. But originally, it had a more specific meaning: a killer app was “an application so compelling that someone will buy the hardware or software components necessary to run it.”

Today’s great web apps can no longer be said to run on chips alone. Google’s success as an application depends on the socially built network of links on the internet. Amazon and Ebay rely on user-provided ratings and reviews. Wikipedia’s software is relatively simple; only an enduring community of contributors has made it the institution it is today. In each case, the success of the application is intimately tied to the behavior of its substrate of users. This is all commonplace knowledge now, as these were the Founding Fathers of Web 2.0. What they and the social software that has come after them prove is that today’s software applications run on both hardware and socialware. (Socioware? Soc(k)ware?)

Many people today have embraced the idea of using social software for social change. Normally, what they mean by this is that software can help people perform the traditional activities of reform–e.g. discussion, organization, advocacy, publicity. That idea is true and noble and becoming manifest as we speak.

But there is another way in which software can change society. The dependence of people on new technology, and of social technology on people, makes possible the social killer app–an application so compelling that people will adopt the socialware necessary to use it.

This is already happening, of course. My generation has done back flips to meet the socialware demands of Facebook, for example. But there is no normatively backed agenda here; the revolutions necessary for Facebook’s success were accidental effects of a profit motive.

I dream of a piece of software that is both compelling and engineered such that its deployment demands the radical transformation of society for the better. And I don’t think this dream is far fetched or beyond us. At all.