The Facebook ethics problem is a political problem

by Sebastian Benthall

So much has been said about the Facebook emotion contagion experiment. Perhaps everything has been said.

The problem with everything having been said is that by and large people’s ethical stances seem predetermined by their habitus.

By which I mean: most people don’t really care. People who care about what happens on the Internet care about it in whatever way is determined by their professional orientation on that matter. Obviously, some groups of people benefit from there being fewer socially imposed ethical restrictions on data scientific practice, either in an industrial or academic context. Others benefit from imposing those ethical restrictions, or cultivating public outrage on the matter.

If this is an ethical issue, what system of ethics are we prepared to use to evaluate it?

You could make an argument from, say, a utilitarian perspective, or a deontological perspective, or even a virtue ethics standpoint. Those are classic moves.

But nobody will listen to what a professionalized academic ethicist will say on the matter. If there’s anybody who does rigorous work on this, it’s probably somebody like Luciano Floridi. His work is great, in my opinion. But I haven’t found any other academics who work in, say, policy who embrace his thinking. I’d love to be proven wrong.

But Floridi’s serious work on information ethics is mainly an inconvenience to pundits. So instead we get heat, not light.

If this process resolves into anything like policy change–either governmental or internally at Facebook–it will be because of a process of agonistic politics. “Agonistic” here means fraught with conflicted interests. It may be redundant to modify ‘politics’ with ‘agonistic’, but it makes the point that the moves being made are strategic actions, aimed at gain for one’s person or group, more than they are communicative ones, aimed at consensus.

Because e.g. Facebook keeps public discussion fragmented through its EdgeRank algorithm, which even in its well-documented public version is full of apparent political consequences and flaws, there is no way for conversation within the Facebook platform to result in consensus. It is not, as has been observed by others, a public. In a trivial sense, it’s not a public because the data isn’t public. The data is (sort of) private. That’s not a bad thing. It just means that Facebook shouldn’t be where you go to develop a political consensus that could legitimize power.
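The public description of EdgeRank scores each story as a sum, over the story’s interactions, of affinity × edge weight × time decay. A minimal sketch of that scoring rule follows; the decay function, the field names, and all the numbers are my illustrative assumptions, and the production ranking system is far more complex and not public:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    """One interaction ('edge') connecting a viewer to a story."""
    affinity: float   # u_e: how close the viewer is to the story's creator (assumed 0..1)
    weight: float     # w_e: value of the interaction type (e.g. a comment outweighs a like)
    age_hours: float  # age of the interaction, feeding the time decay d_e

def edgerank_score(edges: list[Edge]) -> float:
    # Public formulation: score = sum of u_e * w_e * d_e over a story's edges.
    # The real decay function is unpublished; 1 / (1 + age) is a placeholder.
    return sum(e.affinity * e.weight * (1.0 / (1.0 + e.age_hours)) for e in edges)

# A fresh comment from a close friend outranks an old like from a stranger,
# so each user's feed is already a filtered, personalized slice of the conversation.
fresh = edgerank_score([Edge(affinity=0.9, weight=2.0, age_hours=1.0)])
stale = edgerank_score([Edge(affinity=0.2, weight=1.0, age_hours=48.0)])
print(fresh > stale)
```

Even this toy version makes the political point: what counts as “the discussion” differs per viewer, which is exactly what blocks platform-wide consensus.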

Twitter is a little better for this, because it’s actually public. Facebook has zero reason to care about the public consensus of people on Twitter though, because those people won’t organize a consumer boycott of Facebook, because they can only reach people that use Twitter.

Facebook is a great–perhaps the greatest–example of what Habermas calls the steering media. “Steering,” because it’s how powerful entities steer public opinion. For Habermas, the steering media control language and therefore culture. When ‘mass’ media control language, citizens no longer use language to form collective will.

For individualized ‘social’ media that is arranged into filter bubbles through relevance algorithms, language is similarly controlled. But rather than having just a single commanding voice, you have the opportunity for every voice to be expressed at once. Through homophily effects in network formation, what you’d expect to see are very intense clusters of extreme cultures that see themselves as ‘normal’ and don’t interact outside of their bubble.
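The clustering claim is easy to see in a toy model: agents who link to like-minded partners with even a moderate bias produce a network where most edges stay inside a cultural bubble. A minimal sketch, where every parameter is an illustrative assumption rather than a measurement of any real platform:

```python
import random

random.seed(42)

N = 200      # agents
E = 1000     # edges to form
BIAS = 0.9   # probability an agent links within its own group (homophily)

# Each agent belongs to one of two 'cultures'
culture = [random.choice([0, 1]) for _ in range(N)]

edges = set()
while len(edges) < E:
    a = random.randrange(N)
    if random.random() < BIAS:
        # Homophilous choice: partner drawn from the same culture
        candidates = [b for b in range(N) if b != a and culture[b] == culture[a]]
    else:
        # Unbiased choice: partner drawn from everyone
        candidates = [b for b in range(N) if b != a]
    b = random.choice(candidates)
    edges.add((min(a, b), max(a, b)))

same = sum(1 for a, b in edges if culture[a] == culture[b])
print(f"share of same-culture edges: {same / len(edges):.2f}")
# Well above the ~0.5 that unbiased random mixing would produce
```

Most ties land inside a bubble, so each cluster mostly hears itself and comes to see its own culture as ‘normal’.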

The irony is that the critical left, who should be making these sorts of observations, is itself a bubble within this system of bubbles. Since critical leftism is enacted in commercialized social media which evolves around it, it becomes recuperated in the Situationist sense. Critical outrage is tapped for advertising revenue, which spurs more critical outrage.

The dependence of contemporary criticality on commercial social media for its own diffusion means that, ironically, its practitioners are unable to just quit Facebook like everyone else who has figured out how much Facebook sucks.

It’s not a secret that decentralized communication systems are the solution to this sort of thing. Stanford’s Liberation Tech group captures this ideology rather well. There’s a lot of good work on censorship-resistant systems, distributed messaging systems, etc. For people who are citizens in the free world, many of these alternative communication platforms where we are spared from algorithmic control are very old. Some people still use IRC for chat. I’m a huge fan of mailing lists, myself. Email is the original on-line social media, and one’s inbox is one’s domain. Everyone who is posting their stuff to Facebook could be posting to a WordPress blog. WordPress, by the way, has a lovely user interface these days and keeps adding “social” features like “liking” and “following”. This goes largely unnoticed, which is too bad, because Automattic, the company that runs WordPress, is really not evil at all.

So there are plenty of solutions to Facebook being manipulative and bad for democracy. Those solutions involve getting people off of Facebook and onto alternative platforms. That’s what a consumer boycott is. That’s how you get companies to stop doing bad stuff, if you don’t have regulatory power.

Obviously the real problem is that we don’t have a less politically problematic technology that does everything we want Facebook to do, only without the bad stuff. There are a lot of unsolved technical challenges in getting that to work. I think I wrote a social media think piece about this once.

I think a really cool project that everybody who cares about this should be working on is designing and building that alternative to Facebook. That’s a huge project. But just think about how great it would be if we could figure out how to fund, design, build, and market it. These are the big questions for political praxis in the 21st century.