Digifesto

Tag: Habermas

late modern social epistemology round up; technical vs. hermeneutical correctness

Consider on the one hand what we might call Habermasian transcendental pragmatism, according to which knowledge can be categorized by how it addresses one of several generalized human interests:

  • The interest of power over nature or other beings, being technical knowledge
  • The interest of agreement with others for the sake of collective action, being hermeneutic knowledge
  • The interest of emancipation from present socially imposed conditions, being critical or reflexive knowledge

Consider in contrast what we might call the Luhmann or Foucault model, in which knowledge is created via system autopoiesis. Luhmann talks about autopoiesis in a social system; Foucault talks about knowledge in a system of power in much the same way.

It is difficult to reconcile these views. This may be what was at the heart of the Habermas-Luhmann debate. Can we parse out the problem in any way that helps reconcile these views?

First, let’s consider the Luhmann view. We might ease the tension in it by naming what we’ve called “knowledge” something like “belief”, removing the implication that the belief is true. Because indeed autopoiesis is a powerful enough process that it seems like it would preserve all kinds of myths and errors should they be important to the survival of the system in which they circulate.

This picture of knowledge, which we might call evolutionary or alternately historicist, is certainly a relativist one. At the intersection of institutions within which different partial perspectives are embedded, we are bound to see political contest.

In light of this, Habermas’s categorization of knowledge as what addresses generalized human interests can be seen as a way of identifying knowledge that transcends particular social systems. There is a normative component of this theory–knowledge should be such a thing. But there is also a descriptive component. One predicts, under Habermas’s hypothesis, that the knowledge that survives political contest at the intersection of social systems is that which addresses generalized interests.

Something I have perhaps overlooked in the past is the importance of the fact that there are multiple and sometimes contradictory general interests. One persistent difficulty in the search for truth is the conflict between what is technically correct and what is hermeneutically correct.

If a statement or theory is technically correct, then it can be reliably used by agents to predict and control the world. The objects of this prediction and control can be objects, or they can be other agents.

If a statement or theory is hermeneutically correct, then it is the reliable consensus of agents involved in a project of mutual understanding and respect. Hermeneutically correct beliefs might stress universal freedom and potential, a narrative of shared history, and a normative goal of progress against inequality. Another word for ‘hermeneutic’ might be ‘political’. Politically correct knowledges are those shared beliefs without which the members of a polity would not be able to stand each other.

In everyday discourse we can identify many examples of statements that are technically correct but hermeneutically (or politically) incorrect, and vice versa. I will not enumerate them here. In these cases, the technically correct view is identified as “offensive” because in a sense it is a defection from a voluntary social contract. Hermeneutic correctness binds together a particular social system by capturing what participants must agree upon in order for all to safely participate. For a member of that social system to assert their own agency over others, to identify ways in which others may be predicted and controlled without their consent or choice in the matter, is disrespectful. Persistent disrespect results in the ejection of the offender from the polity. (cf. Pasquale’s distinction between “California engineers and New York quants” and “citizens”.)

A cruel consequence of these dynamics is social stratification based on the accumulation of politically forbidden technical knowledge.

We can tell this story again and again: A society is bound together by hermeneutically stable knowledge–an ideology, perhaps. Somebody ‘smart’ begins experimentation and identifies a technical truth that is hermeneutically incorrect, meaning that if the idea were to spread it would erode the consensus on which the social system depends. Perhaps the new idea degrades others by revealing that something believed to be an act of free will is, in fact, determined by nature. Perhaps the new idea is inaccessible to others because it depends on some rare capacity. In any case, it cannot be willfully consented to by the others.

The social system begins to have an immune reaction. Society has seen this kind of thing before. Historically, this idea has led to abuse, exploitation, infamy. Those with forbidden knowledge should be shunned, distrusted, perhaps punished. Those with disrespectful technical ideas are discouraged from expressing them.

Technical knowledge thereby becomes socially isolated. Seeking out its own, it becomes concentrated. Already shunned by society, the isolated technologists put their knowledge to use. They gain advantage. Revenge is had by the nerds.

We need more Sittlichkeit: Vallier on Piketty and Rawls; Cyril on Surveillance and Democracy; Taylor on Hegel

Kevin Vallier’s critique of Piketty in Bleeding Heart Libertarians (funny name) is mainly a criticism of the idea that economic inequality leads to political instability.

In the course of his rebuttal of Piketty, he brings in some interesting Rawlsian theory which is more broadly important. He distinguishes between power stability, the stability of a state that maintains itself by forcibly preventing resistance through Hobbesian power, and “inherent stability” or moral stability (Vallier’s term), which is “stability for the right reasons”: the stability that comes from the state’s comportment with our sense of justice.

There are lots of other ways of saying the same thing in the literature. We can ask if justice is de facto or de jure. We can distinguish, as does Hannah Arendt in On Violence, between power (which she maintains is only what’s rooted in collective action) and violence (which is I guess what Vallier would call ‘Hobbesian power’). In a perhaps more subtle move, we can with Habermas ask what legitimizes the power of the state.

The left-wing zeitgeist at the moment is emphasizing inequality as a problem. While Piketty argues that inequality leads to instability, it’s an open question whether this is in fact the case. There’s no particular reason why a Hobbesian sovereign with swarms of killer drones couldn’t maintain its despotic rule through violence. Probably the real cause for complaint is that this is illegitimate power (if you’re Habermas), or violence not power (if you’re Arendt), or moral instability (if you’re Rawls).

That makes sense. Illegitimate power is the kind of power that one would complain about.

Ok, so now cut to Malkia Cyril’s talk at CFP tying technological surveillance to racism. What better illustration of the problems of inequality in the United States than the history of racist policies towards black people? Cyril acknowledges the benefits of Internet technology in providing tools for activists but suspects that now technology will be used by people in power to maintain power for the sake of profit.

The fourth amendment, for us, is not and has never been about privacy, per se. It’s about sovereignty. It’s about power. It’s about democracy. It’s about the historic and present day overreach of governments and corporations into our lives, in order to facilitate discrimination and disadvantage for the purposes of control; for profit. Privacy, per se, is not the fight we are called to. We are called to this question of defending real democracy, not to this distinction between mass surveillance and targeted surveillance

So there’s a clear problem for Cyril which is that ‘real democracy’ is threatened by technical invasions of privacy. A lot of this is tied to the problem of who owns the technical infrastructure. “I believe in the Internet. But I don’t control it. Someone else does. We need a new civil rights act for the era of big data, and we need it now.” And later:

Last year, New York City Police Commissioner Bill Bratton said 2015 would be the year of technology for law enforcement. And indeed, it has been. Predictive policing has taken hold as the big brother of broken windows policing. Total information awareness has become the goal. Across the country, local police departments are working with federal law enforcement agencies to use advanced technological tools and data analysis to “pre-empt crime”. I have never seen anyone able to pre-empt crime, but I appreciate the arrogance that suggests you can tell the future in that way. I wish, instead, technologists would attempt to pre-empt poverty. Instead, algorithms. Instead, automation. In the name of community safety and national security we are now relying on algorithms to mete out sentences, determine city budgets, and automate public decision-making without any public input. That sounds familiar too. It sounds like Black codes. Like Jim Crow. Like 1963.

My head hurts a little as I read this because while the rhetoric is powerful, the logic is loose. Of course you can do better or worse at preempting crime. You can look at past statistics on crime and extrapolate to the future. Maybe that’s hard but you could do it in worse or better ways. A great way to do that would be, as Cyril suggests, by preempting poverty–which some people try to do, and which can be assisted by algorithmic decision-making. There’s nothing strictly speaking racist about relying on algorithms to make decisions.
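To make the point concrete: extrapolating from past statistics is itself a simple algorithm, and one can do it in better or worse ways. A toy least-squares sketch in Python (the incident counts are invented for illustration):

```python
def linear_extrapolate(ys):
    """Fit y = a + b*t by least squares over past observations
    and predict the next value. A crude forecaster: 'better or
    worse' here just means a better or worse fit to the trend."""
    n = len(ys)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) \
        / sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a + b * n  # prediction for the next time step

# invented yearly incident counts, trending downward
prediction = linear_extrapolate([120, 115, 108, 104, 99])
```

A real predictive-policing system is of course far more elaborate, but the point stands: the quality of the extrapolation, and of the data it feeds on, is an empirical question, not a logical impossibility.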

So for all that I want to support Cyril’s call for a ‘civil rights act for the era of big data’, I can’t figure out from the rhetoric what that would involve or what its intellectual foundations would be.

Maybe there are two kinds of problems here:

  1. A problem of outcome legitimacy. Inequality, for example, might be an outcome that leads to a moral case against the power of the state.
  2. A problem of procedural legitimacy. When people are excluded from the decision-making processes that affect their lives, they may find that to be grounds for a moral objection to state power.

It’s worth making a distinction between these two problems even though they are related. If procedures are opaque and outcomes are unequal, there will naturally be resentment of the procedures and the suspicion that they are discriminatory.

We might ask: what would happen if procedures were transparent and outcomes were still unequal? What would happen if procedures were opaque and outcomes were fair?

One last point…I’ve been dipping into Charles Taylor’s analysis of Hegel because…shouldn’t everybody be studying Hegel? Taylor maintains that Hegel’s political philosophy in The Philosophy of Right (which I’ve never read) is still relevant today despite Hegel’s inability to predict the future of liberal democracy, let alone the future of his native Prussia (which is apparently something of a pain point for Hegel scholars).

Hegel, or maybe Taylor in a creative reinterpretation of Hegel, anticipates the problem of liberal democracy of maintaining the loyalty of its citizens. I can’t really do justice to Taylor’s analysis so I will repeat verbatim with my comments in square brackets.

[Hegel] did not think such a society [of free and interchangeable individuals] was viable, that is, it could not command the loyalty, the minimum degree of discipline and acceptance of its ground rules, it could not generate the agreement on fundamentals necessary to carry on. [N.B.: Hegel conflates power stability and moral stability] In this he was not entirely wrong. For in fact the loyal co-operation which modern societies have been able to command of their members has not been mainly a function of the liberty, equality, and popular rule they have incorporated. [N.B. This is a rejection of the idea that outcome and procedural legitimacy are in fact what leads to moral stability.] It has been an underlying belief of the liberal tradition that it was enough to satisfy these principles in order to gain men’s allegiance. But in fact, where they are not partly ‘coasting’ on traditional allegiance, liberal, as all other, modern societies have relied on other forces to keep them together.

The most important of these is, of course, nationalism. Secondly, the ideologies of mobilization have played an important role in some societies, focussing men’s attention and loyalties through the unprecedented future, the building of which is the justification of all present structures (especially that ubiquitous institution, the party).

But thirdly, liberal societies have had their own ‘mythology’, in the sense of a conception of human life and purposes which is expressed in and legitimizes its structures and practices. Contrary to widespread liberal myth, it has not relied on the ‘goods’ it could deliver, be they liberty, equality, or property, to maintain its members loyalty. The belief that this was coming to be so underlay the notion of the ‘end of ideology’ which was fashionable in the fifties.

But in fact what looked like an end of ideology was only a short period of unchallenged reign of a central ideology of liberalism.

This is a lot, but bear with me. What this is leading up to is an analysis of social cohesion in terms of what Hegel called Sittlichkeit, “ethical life” or “ethical order”. I gather that Sittlichkeit is not unlike what we’d call an ideology or worldview in other contexts. But a Sittlichkeit is better than mere ideology, because Sittlichkeit is a view of an ethically ordered society and is therefore somehow incompatible with the liberal atomization of the self, which of course is the root of alienation under liberal capitalism.

A liberal society which is a going concern has a Sittlichkeit of its own, although paradoxically this is grounded on a vision of things which denies the need for Sittlichkeit and portrays the ideal society as created and sustained by the will of its members. Liberal societies, in other words, are lucky when they do not live up, in this respect, to their own specifications.

If these common meanings fail, then the foundations of liberal society are in danger. And this indeed seems a distinct possibility today. The problem of recovering Sittlichkeit, of reforming a set of institutions and practices with which men can identify, is with us in an acute way in the apathy and alienation of modern society. For instance the central institutions of representative government are challenged by a growing sense that the individual’s vote has no significance. [cf. Cyril’s rhetoric of alienation from algorithmic decision-making.]

But then it should not surprise us to find this phenomenon of electoral indifference referred to in [The Philosophy of Right]. For in fact the problem of alienation and the recovery of Sittlichkeit is a central one in Hegel’s theory and any age in which it is on the agenda is one to which Hegel’s thought is bound to be relevant. Not that Hegel’s particular solutions are of any interest today. But rather that his grasp of the relations of man to society–of identity and alienation, of differentiation and partial communities–and their evolution through history, gives us an important part of the language we sorely need to come to grips with this problem in our time.

Charles Taylor wrote all this in 1975. I’d argue that this problem of establishing ethical order to legitimize state power despite alienation from procedure is a perennial one. That the burden of political judgment has been placed most recently on the technology of decision-making is a function of the automation of bureaucratic control (see Beniger) and, it’s awkward to admit, my own disciplinary bias. In particular it seems like what we need is a Sittlichkeit that deals adequately with the causes of inequality in society, which seem poorly understood.

“Conflicting panaceas”; decapitation and dogmatism in cultural studies counterpublics

I’m still reading through Horkheimer’s Eclipse of Reason. It is dense writing and slow going. I’m in the middle of the second chapter, “Conflicting Panaceas”.

This chapter recognizes and then critiques a variety of intellectual stances of his contemporaries. Whereas in the first chapter Horkheimer takes aim at pragmatism, in this he concerns himself with neo-Thomism and positivism.

Neo-Thomism? Yes, that’s right. Apparently in 1947 one of the major intellectual contenders was a school of thought based on adapting the metaphysics of Saint Thomas Aquinas to modern times. This school of thought was apparently notable enough that while Horkheimer is generally happy to call out the proponents of pragmatism and positivism by name and call them business interest lapdogs, he chooses instead to address the neo-Thomists anonymously in a conciliatory footnote:

This important metaphysical school includes some of the most responsible historians and writers of our day. The critical remarks here bear exclusively on the trend by which independent philosophical thought is being superseded by dogmatism.

In a nutshell, Horkheimer’s criticism of neo-Thomism is that, since it tries and fails to repurpose old ontologies for the new world, it can’t fulfill its own ambitions as an intellectual system through rigor without losing the theological ambitions that motivate it: the identification of goodness, power, and eternal law. Since it can’t intellectually culminate, it becomes a “dogmatism” that can be coopted disingenuously by social forces.

This is, as I understand it, the essence of Horkheimer’s criticism of everything: That for any intellectual trend or project, unless the philosophical project is allowed to continue to completion within it, it will have its brains slurped out and become zombified by an instrumentalist capitalism that threatens to devolve into devastating world war. Hence, just as neo-Thomism becomes a dogmatism because it would refute itself if it allowed its logic to proceed to completion, so too does positivism become a dogmatism when it identifies the truth with disciplinarily enforced scientific methods. Since, as Horkheimer points out in 1947, these scientific methods are social processes, this dogmatic positivism is another zombie, prone to fads and politics not tracking truth.

I’ve been struggling over the past year or so with similar anxieties about what from my vantage point are prevailing intellectual trends of 2014. Perversely, in my experience the new intellectual identities that emerged to expose scientific procedures as social processes in the 20th century (STS) and establish rhetorics of resistance (cultural studies) have been similarly decapitated, recuperated, and dogmatic. [see 1 2 3].

Are these the hauntings of straw men? This is possible. Perhaps the intellectual currents I’ve witnessed are informal expressions, not serious intellectual work. But I think there is a deeper undercurrent which has turned up as I’ve worked on a paper resulting from this conversation about publics. It hinges on the interpretation of an influential article by Fraser in which she contests Habermas’s notion of the public sphere.

In my reading, Fraser more or less maintains the ideal of the public sphere as a place of legitimacy and reconciliation. For her it is notably inequitable, it is plural not singular, the boundaries of what is public and private are in constant negotiation, etc. But its function is roughly the same as it is for Habermas.

My growing suspicion is that this is not how Fraser is used by cultural studies today. This suspicion began when Fraser was introduced to me; upon reading her work I did not find the objection implicit in the reference to her. It continued as I worked with the comments of a reviewer on a paper. It was recently confirmed while reading Chris Wisniewski’s “Digital Deliberation?” in Critical Review, vol. 25, no. 2, 2013. He writes well:

The cultural-studies scholars and critical theorists interested in diversifying participation through the Internet have made a turn away from this deliberative ideal. In an essay first published in 1990, the critical theorist Nancy Fraser (1999, 521) rejects the idealized model of bourgeois public sphere as defined by Habermas on the grounds that it is exclusionary by design. Because the bourgeois public sphere brackets hierarchies of gender, race, ethnicity, class, etc., Fraser argues, it benefits the interests of dominant groups by default through its elision of socially significant inequalities. Lacking the ability to participate in the dominant discourse, disadvantaged groups establish alternative “subaltern counterpublics”.

Since the ideal speech situation does not acknowledge the socially significant inequalities that generate these counterpublics, Fraser argues for a different goal: a model of participatory democracy in which intercultural communications across socially stratified groups occur in forums that do not elide differences but instead allow diverse multiple publics the opportunity to determine the concerns or good of the public as a whole through “discursive contestations.” Fraser approaches these subgroups as identity publics and argues that culture and political debate are essentially power struggles among self-interested subgroups. Fraser’s ideas are similar to those prevalent in cultural studies (see Wisniewski 2007 and 2010), a relatively young discipline in which her work has been influential.

Fraser’s theoretical model is inconsistent with studies of democratic voting behavior, which indicate that people tend to vote sociotropically, according to a perceived collective interest, and not in favor of their own perceived self-interest (e.g., Kinder and Kiewiet 1981). The argument that so-called “mass” culture excludes the interests of dominated groups in favor of the interests of the elites loses some of its valence if culture is not a site through which self-interested groups vie for their objective interests, but is rather a forum in which democratic citizens debate what constitutes, and the best way to achieve, the collective good. Diversification of discourse ceases to be an end in itself.

I think Wisniewski hits the nail on the head here, a nail I’d like to drive in farther. If culture is conceived of as consisting of the contests of self-interested identity groups, as this version of cultural studies does, then it will necessarily see itself as one of many self-interested identities. Cultural studies becomes, by its own logic, a counterpublic that exists primarily to advance its own interests.

But just like neo-Thomism, this positioning decapitates cultural studies by preventing it from intellectually confronting its own limitations. No identity can survive rigorous intellectual interrogation, because all identities are based on contingency, finitude, and trauma. Cultural studies adopts and repurposes historical rhetorics of liberation much like neo-Thomists adopted and repurposed historical metaphysics of Christianity. The obsolescence of these rhetorics, like the obsolescence of Thomistic metaphysics, is what makes them dangerous. The rhetoric that maintains its own subordination as a condition of its own identity can never truly liberate, it can only antagonize. Unable to intellectually realize its own purpose, it becomes purposeless and hence coopted and recuperated like other dogmatisms. In particular, it feeds into “the politicization of absolutely everything”, in the language of Ezra Klein’s spot-on analysis of GamerGate. Cultural studies is a powerful ideology because it turns culture into a field of perpetual rivalry with all the distracting drama of reality television. In so doing, it undermines deeper intellectual penetration into the structural conditions of society.

If cultural studies is the neo-Thomism of today, a dogmatist religious revival of the profound theology of the civil rights movement, perhaps it’s the theocratic invocation of ‘algorithms’ that is the new scientism. I would have more to say about it if it weren’t so similar to the old scientism.

Discourse theory of law from Habermas

There has been at least one major gap in my understanding of Habermas’s social theory which I’m just filling now. The position Habermas reaches towards the end of Theory of Communicative Action vol 2 and develops further in later work in Between Facts and Norms (1992) is the discourse theory of law.

What I think went on is that Habermas eventually gave up on deliberative democracy in its purest form. After a career of scholarship on the public sphere, the ideal speech situation, and communicative action, fully developing the lifeworld as the ground for legitimate norms, he eventually had to make a concession to the “steering media” of money and power as necessary for the organization of society at scale. But at the intersection between lifeworld and system is law. Law serves as a transmission belt between the legitimate norms established by civil society and “system”; at its best it is both efficacious and legitimate.

Law is ambiguous: it can serve legitimate citizen interests united in communicative solidarity, and it can also serve strong, powerful interests. But it’s where the action is, because it’s where Habermas sees the ability of the lifeworld to counter-steer the whole political apparatus towards legitimacy, including shifting the balance of power between lifeworld and system.

This is interesting because:

  • Habermas is like the last living heir of the Frankfurt School mission and this is a mature and actionable view nevertheless founded in the Critical Theory tradition.
  • If you pair it with Lessig’s Code is Law thesis, you get a framework for thinking about how technical mediation of civil society can be legitimate but also efficacious. I.e., code can be legitimized discursively through communicative action. Arguably, this is how a lot of open source communities work, as well as standards bodies.
  • Thinking about managerialism as a system of centralized power that provides a framework of freedoms within it, Habermas seems to be presenting an alternative model where law or code evolves with the direct input of civil stakeholders. I’m fascinated by where Nick Doty’s work on multistakeholderism in the W3C is going and think there’s an alternative model in there somewhere. There’s a deep consistency in this, noted a while ago (2003) by Froomkin but largely unacknowledged as far as I can tell in the Data and Society or Berkman worlds.

I don’t see in Habermas anything about funding the state. That would mean acknowledging military force and the power to tax. But this is progress for me.

References

Zurn, Christopher. “Discourse Theory of Law.” In Jürgen Habermas: Key Concepts, edited by Barbara Fultner.

responding to @npdoty on ethics in engineering

Nick Doty wrote a thorough and thoughtful response to my earlier post about the Facebook research ethics problem, correcting me on a number of points.

In particular, he highlights how academic ethicists like Floridi and Nissenbaum have an impact on industry regulation. It’s worth reading for sure.

Nick writes from an interesting position. Since he works for the W3C himself, he is closer to the policy decision makers on these issues. I think this, as well as his general erudition, give him a richer view of how these debates play out. Contrast that with the debate that happens for public consumption, which is naturally less focused.

In trying to understand scholarly work on these ethical and political issues of technology, I’m struck by how differences in where writers and audiences are coming from lead to communication breakdown. The recent blast of popular scholarship about ‘algorithms’, for example, is bewildering to me. I had the privilege of learning what an algorithm was fairly early. I learned about quicksort in an introductory computing class in college. While certainly an intellectual accomplishment, quicksort is politically quite neutral.
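For readers who never took that class, the whole of quicksort fits in a few lines. A minimal Python sketch, close to the pseudocode form (the simple functional version, not the efficient in-place one):

```python
def quicksort(xs):
    """Sort a list by recursively partitioning around a pivot."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    less = [x for x in rest if x < pivot]   # elements below the pivot
    more = [x for x in rest if x >= pivot]  # elements at or above it
    return quicksort(less) + [pivot] + quicksort(more)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```

There is nothing in it to take offense at, which is the point: the politics arrives with the application, not the basic properties.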

What’s odd is how certain contemporary popular scholarship seeks to introduce an unknowing audience to algorithms not via their basic properties–their pseudocode form, their construction from more fundamental computing components, their running time–but via their application in select and controversial contexts. Is this good for public education? Or is this capitalizing on the vagaries of public attention?

My democratic values are being sorely tested by the quality of public discussion on matters like these. I’m becoming more content with the fact that in reality, these decisions are made by self-selecting experts in inaccessible conversations. To hope otherwise is to downplay the genuine complexity of technical problems and the amount of effort it takes to truly understand them.

But if I can sit complacently with my own expertise, this does not seem like a political solution. The FCC’s willingness to accept public comment, which normally does not elicit the response of a mass action, was just tested by Net Neutrality activists. I see from the linked article that other media-related requests for comments were similarly swamped.

The crux, I believe, is the self-referential nature of the problem–that the mechanics of information flow among the public are both what’s at stake (in terms of technical outcomes) and what drives the process to begin with, when it’s democratic. This is a recipe for a chaotic process. Perhaps there are no attractors or steady states.

Following Rash’s analysis of Habermas and Luhmann’s disagreement as to the fate of complex social systems, we’ve got at least two possible outcomes for how these debates play out. On the one hand, rationality may prevail. Genuine interlocutors, given enough time and with shared standards of discourse, can arrive at consensus about how to act–or, what technical standards to adopt, or what patches to accept into foundational software. On the other hand, the layering of those standards on top of each other, and the reaction of users to them as they build layers of communication on top of the technical edifice, can create further irreducible complexity. With that complexity comes further ethical dilemmas and political tensions.

A good desideratum for a communications system that is used to determine the technicalities of its own design is that its algorithms should intelligently manage the complexity of arriving at normative consensus.

The Facebook ethics problem is a political problem

So much has been said about the Facebook emotion contagion experiment. Perhaps everything has been said.

The problem with everything having been said is that by and large people’s ethical stances seem predetermined by their habitus.

By which I mean: most people don’t really care. People who care about what happens on the Internet care about it in whatever way is determined by their professional orientation on that matter. Obviously, some groups of people benefit from there being fewer socially imposed ethical restrictions on data scientific practice, either in an industrial or academic context. Others benefit from imposing those ethical restrictions, or cultivating public outrage on the matter.

If this is an ethical issue, what system of ethics are we prepared to use to evaluate it?

You could make an argument from, say, a utilitarian perspective, or a deontological perspective, or even a virtue ethics standpoint. Those are classic moves.

But nobody will listen to what a professionalized academic ethicist will say on the matter. If there’s anybody who does rigorous work on this, it’s probably somebody like Luciano Floridi. His work is great, in my opinion. But I haven’t found any other academics who work in, say, policy that embrace his thinking. I’d love to be proven wrong.

And since Floridi does serious work on information ethics, his work is mainly an inconvenience to pundits. So instead we get heat, not light.

If this process resolves into anything like policy change–either governmental or internal to Facebook–it will be because of a process of agonistic politics. “Agonistic” here means fraught with conflicting interests. It may be redundant to modify ‘politics’ with ‘agonistic’, but it makes the point that the moves being made are strategic actions, aimed at gain for one’s person or group, more than they are communicative ones, aimed at consensus.

Because e.g. Facebook keeps public discussion fragmented through its EdgeRank algorithm, which even in its well-documented public version is full of apparent political consequences and flaws, there is no way for conversation within the Facebook platform to result in consensus. It is not, as has been observed by others, a public. In a trivial sense, it’s not a public because the data isn’t public. The data is (sort of) private. That’s not a bad thing. It just means that Facebook shouldn’t be where you go to develop a political consensus that could legitimize power.

Twitter is a little better for this, because it’s actually public. Facebook has zero reason to care about the public consensus of people on Twitter though, because those people won’t organize a consumer boycott of Facebook, because they can only reach people that use Twitter.

Facebook is a great–perhaps the greatest–example of what Habermas calls the steering media. “Steering,” because it’s how powerful entities steer public opinion. For Habermas, the steering media control language and therefore culture. When ‘mass’ media control language, citizens no longer use language to form collective will.

For individualized ‘social’ media that is arranged into filter bubbles through relevance algorithms, language is similarly controlled. But rather than having just a single commanding voice, you have the opportunity for every voice to be expressed at once. Through homophily effects in network formation, what you’d expect to see are very intense clusters of extreme cultures that see themselves as ‘normal’ and don’t interact outside of their bubble.
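To make the homophily intuition concrete, here is a minimal sketch in Python of a bounded-confidence opinion model (a Deffuant-style toy, not Facebook’s actual algorithm): agents only move toward peers whose opinions are already close, a crude stand-in for a relevance filter. The parameter names and thresholds are my own assumptions, chosen purely for illustration.

```python
import random

def simulate_bubbles(n_agents=100, epsilon=0.2, steps=20000, seed=42):
    """Toy bounded-confidence opinion model: pairs of agents only
    average their opinions when they are already within epsilon of
    each other -- a crude analogue of homophilous filtering."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        if abs(opinions[i] - opinions[j]) < epsilon:
            shift = 0.5 * (opinions[j] - opinions[i])
            opinions[i] += shift  # both agents meet at the midpoint
            opinions[j] -= shift
    return opinions

def count_clusters(opinions, gap=0.05):
    """Count groups of opinions separated by more than `gap`."""
    xs = sorted(opinions)
    clusters = 1
    for a, b in zip(xs, xs[1:]):
        if b - a > gap:
            clusters += 1
    return clusters
```

With a narrow confidence threshold the population fragments into several internally homogeneous clusters; widen the threshold and it converges to a single consensus. That is roughly the bubble dynamic described above: each cluster experiences itself as ‘normal’ because it never interacts outside its range.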

The irony is that the critical left, who should be making these sorts of observations, is itself a bubble within this system of bubbles. Since critical leftism is enacted in commercialized social media which evolves around it, it becomes recuperated in the Situationist sense. Critical outrage is tapped for advertising revenue, which spurs more critical outrage.

The dependence of contemporary criticality on commercial social media for its own diffusion means that, ironically, its practitioners are unable to just quit Facebook like everyone else who has figured out how much Facebook sucks.

It’s not a secret that decentralized communication systems are the solution to this sort of thing. Stanford’s Liberation Tech group captures this ideology rather well. There’s a lot of good work on censorship-resistant systems, distributed messaging systems, etc. For people who are citizens of the free world, many of the alternative communication platforms that spare us from algorithmic control are very old. Some people still use IRC for chat. I’m a huge fan of mailing lists, myself. Email is the original on-line social media, and one’s inbox is one’s domain. Everyone who is posting their stuff to Facebook could be posting to a WordPress blog. WordPress, by the way, has a lovely user interface these days and keeps adding “social” features like “liking” and “following”. This goes largely unnoticed, which is too bad, because Automattic, the company that runs WordPress, is really not evil at all.

So there are plenty of solutions to Facebook being manipulative and bad for democracy. Those solutions involve getting people off of Facebook and onto alternative platforms. That’s what a consumer boycott is. That’s how you get companies to stop doing bad stuff, if you don’t have regulatory power.

Obviously the real problem is that we don’t have a less politically problematic technology that does everything we want Facebook to do, only without the bad stuff. There are a lot of unsolved technical challenges in getting that to work.

I think a really cool project that everybody who cares about this should be working on is designing and executing on building that alternative to Facebook. That’s a huge project. But just think about how great it would be if we could figure out how to fund, design, build, and market that. These are the big questions for political praxis in the 21st century.

reflective data science: technical, practical and emancipatory interests?

As Cathryn Carson is currently my boss at Berkeley’s D-Lab, it seems like it behooves me to read her papers. Thankfully, we share an interest in Habermasian epistemology. Today I read her “Science as instrumental reason: Heidegger, Habermas, Heisenberg.”

Though I can barely do justice to the paper, I’ll try to summarize: it grapples with the history of how science became constructed as a purely instrumental project (a mode of inquiry that perfects means without specifying particular ends) through the interactions between Heisenberg, the premier theoretical physicist in Germany at the time, and Heidegger, the great philosopher, and then later the response to Heidegger by Habermas.

Heisenberg, most famous perhaps for the Heisenberg Uncertainty Principle, was himself reflective on the role of the scientist within science, and identified the limits of the subject and measurement within physics. But far from surpassing an older metaphysical idea of the subject-object divide, this only entrenched the scientist further, according to Heidegger. This is because the scientist qua scientist never encounters the world in a way that is not tied up in the scientific, technical mode, and so eludes pure being. While that might simply mean that pure being is left to philosophers while scientists go on with their instrumental project, this mode of inquiry proved insufficient when scientists were called on to comment on nuclear proliferation policy.

Such policy decisions are questions of praxis, or practical action in the human (as opposed to natural) world. Habermas was concerned with the hermeneutic epistemology of praxis, as well as the critical epistemology of emancipation, which are more the purview of the social sciences. Habermas tends to segment these modes of inquiry from each other, without (as far as I’ve encountered so far) anticipating a synthesis.

In data science, we see the broadly positivist, statistical, analytic treatment of social data. In its commercial applications to sell ads or conduct high-speed trading, we could say on a first pass that the science serves the technical human interest: prediction and control for some unspecified end. But that would be misleading. The breadth of methodological options available to the data scientist means that the methods are often very closely tailored to the particular ends and conditions of the project. Data science as a method is an instrument. But the results of commercial data science are by and large not nomological (identifying laws of human behavior), but rather an immediately applied idiography. Or, more than an applied idiography, data science provides a probabilistic profile of its diverse subjects–an electron cloud of possibilities that the commercial data scientist uses to steer behavior en masse.

Of course, the uncertainty principle applies here as well: the human subject reacts to being measured, and has the potential to change direction upon seeing that they are being targeted with this ad or that.

Further complicating the picture is that the application of ‘social technology’ of commercially driven data science is praxis, albeit in an apolitical sense. Enmeshed in a thick and complex technological web, nevertheless showing an ad and having it be clicked on is a move in the game of social relations. It is a handshake between cyborgs. And so even commercial data science must engage in hermeneutics, if Habermas is correct. Natural language processing provides the uncomfortable edge case here: can we have a technology that accomplishes hermeneutics for us? Apparently so, if a machine can identify somebody’s interest in a product or service from their linguistic output.
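As a deliberately crude illustration of that edge case, here is a toy lexical scorer in Python that estimates “interest” as the fraction of product-related terms appearing in a text. Real systems use trained classifiers over much richer features; the function name and term list below are hypothetical, invented for this sketch.

```python
def interest_score(text, product_terms):
    """Toy proxy for 'interest': the fraction of product-related
    terms that appear in the text. A stand-in for the statistical
    classifiers that actually do this hermeneutic work at scale."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    hits = sum(1 for term in product_terms if term in tokens)
    return hits / len(product_terms)

# Example: two of the three assumed terms match, giving 2/3.
score = interest_score("I really need new running shoes!",
                       {"running", "shoes", "marathon"})
```

Trivial as this is, it makes the philosophical point concrete: the machine never “understands” the utterance; it maps linguistic output onto a probability of commercial relevance, and that mapping is what stands in for hermeneutics.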

Though jarring, this is easier to cope with intellectually if we see the hermeneutic agent as a socio-technical system, as opposed to a purely technical system. Cyborg praxis will include statistical/technical systems made of wires and silicon, just as meatier praxis includes statistical/technical systems made of proteins and cartilage.

But what of emancipation? This is the least likely human interest to be advanced by commercial interests. If I’ve got my bearings right, the emancipatory interest in the (social) sciences comes from the critical theory tradition, perhaps best exemplified in German thought by the Frankfurt School. One is meant to be emancipated by such inquiry from the power of the capitalist state. What would it mean for there to be an emancipatory data science?

I was recently asked out of the blue in an email whether there were any organizations using machine learning and predictive analytics towards social justice interests. I was ashamed to say I didn’t know of any organizations doing that kind of work. It is hard to imagine what an emancipatory data science would look like. An education or communication about data scientific techniques might be emancipatory (I was trying to accomplish something like this with Why Weird Twitter, for what it’s worth), but that was a qualitative study, not a data scientific one.

Taking our cue from above, an emancipatory data science would have to use data science methods towards the human interest of emancipation. For this, we would need to use the methods to understand the conditions of power and dependency that bind us. Difficult as this would be for an individual, it’s possible that these techniques could be used to greater effect by an emancipatory sociotechnical organization. Such an organization would need to be concerned with its own autonomy as well as the autonomy of others.

The closest thing I can imagine to such a sociotechnical system is what Kelty describes as the recursive public: the loose coalition of open source developers, open access researchers, and others concerned with transforming their social, economic, and technical conditions for emancipatory ends. Happily, the D-Lab’s technical infrastructure team appears to be populated entirely by citizens of the recursive public. Though this is naturally a matter of minor controversy within the lab (it’s hard to convince folks who haven’t directly experienced the emancipatory potential of the movement of its value), I’m glad that it stands on more or less robust historical grounds. While the course I am co-teaching on Open Collaboration and Peer Production will likely not get into critical theory, I expect that exposure to more emancipated communities of praxis will make something click.

What I’m going for, personally, is a synthetic science that is at once technical and engaged in emancipatory praxis.

MIT Collaboratorium

Matt Cooperrider pointed me towards this YouTube video on MIT’s Center for Collective Intelligence Collaboratorium project:

In my opinion, their design is too centralized and too top-down; but I nevertheless give these folks a tremendous amount of credit, because I believe that a solution to the collaborative deliberation problem they are trying to solve could save the world. It could provide the technological foundation for a Habermasian ideal speech situation. If done right–and MIT doesn’t seem far off from a great first step–it would be the social killer app.