
Political theories and AI

Through a few emerging projects and opportunities, I’ve had reason to circle back to the topic of Artificial Intelligence and ethics. I wanted to jot down a few notes, as some recent reading and conversations have been clarifying my ideas here.

In my work with Jake Goldenfein on this topic (published 2021), we framed the ethical problem of AI in terms of its challenge to liberalism, which we characterize in terms of individual rights (namely, property and privacy rights), a theory of why the free public market makes the guarantees of these rights sufficient for many social goods, and a more recent progressive or egalitarian tendency. We then discuss how AI technologies challenge liberalism and require us to think about post-liberal configurations of society and computation.

A natural reaction to this paper, especially given the political climate in the United States, is “aren’t the alternatives to liberalism even worse?” and it’s true that we do not in that paper outline an alternative to liberalism which a world with AI might aspire to.

John Mearsheimer’s The Great Delusion: Liberal Dreams and International Realities (2018) is a clearly written treatise on political theory. Mearsheimer rose to infamy in 2022, after the Russian invasion of Ukraine, because of widely circulated videos of a 2015 lecture in which he argued that the fault for Russia’s 2014 invasion of Crimea lay with U.S. foreign policy. It is because of that infamy that I decided to read The Great Delusion, which was a Financial Times Best Book of 2018. The Financial Times editorials have since turned on Mearsheimer; we’ll see what they say about him in another four years. However politically unpopular he may be, I found his points interesting and decided to look at his more scholarly work. I have not been disappointed: he articulates political philosophy clearly, and I will use those articulations here.

Putting Mearsheimer’s international relations theories entirely aside for now: the book goes to lengths in Chapter 3 to describe liberalism as a political theory (which will be its target). Mearsheimer distinguishes among four different political ideologies, citing many of their key intellectual proponents.

  • Modus vivendi liberalism. (Locke, Smith, Hayek) A theory committed to individual negative rights, such as private property and privacy, against impositions by the state. The state should be minimal, a “night watchman”. This can involve skepticism about the ability of reason to achieve consensus about the nature of the good life; political toleration of differences is implied by the guarantee of negative rights.
  • Progressive liberalism. (Rawls) A theory committed to individual rights, including both negative rights and positive rights, which can be in tension. An example of a positive right is equal opportunity, which the state must intervene to guarantee. So the state must play a stronger role. Progressive liberalism involves more faith in the capacity of reason to achieve consensus about the good life, since progressivism imposes a positive moral view on others.
  • Utilitarianism. (Bentham, Mill) A theory committed to the greatest happiness for the greatest number. It is not committed to individual rights, and therefore not a liberalism per se. Utilitarian analysis can argue for trading off rights to achieve greater happiness, and it is collectivist, not individualist, in the sense that it is concerned with utility in aggregate.
  • Liberal idealism. (Hobson, Dewey) A theory committed to the realization of an ideal society as an organic unity of functioning subsystems. It is not primarily committed to individual rights, so it is not a liberalism either, though individual rights can be justified on idealist grounds. It is influenced by Hegelian views about the unity of the state, and is sometimes connected to a positive view of nationalism.

This is a highly useful breakdown of ideas, which we can bring back to discussions of AI ethics.

Jake Goldenfein and I wrote about ‘liberalism’ in a way that, I’m glad to say, is consistent with Mearsheimer. We too identify right- and left-wing strands of liberalism. I believe our argument about AI’s challenge to liberal assumptions still holds water.

Utilitarianism is the foundation of one of the most prominent versions of AI ethics today: Effective Altruism. Much has been written about Effective Altruism and its relationship to AI Safety research. I have expressed some thoughts. Suffice it to say here that there is a utilitarian argument that ‘ethics’ should be about prioritizing the prevention of existential risk to humanity, because existential catastrophe would prevent the high-utility outcome of humanity-as-joyous-galaxy-colonizers. AI is seen, for various reasons, to be a potential source of catastrophic risk, and so AI ethics is about preventing these outcomes. Not everybody agrees with this view.

For now, it’s worth mentioning that there is a connection between liberalism and utilitarianism through theories of economics. While some liberals are committed to individual rights for their own sake, or because of negative views about the possibility of rational agreement on more positive political claims, others have argued that negative rights and a lack of government intervention lead to better collective outcomes. Neoclassical economics has produced theories and ‘proofs’ to this effect, which rely on mathematical utility theory, itself a successor to philosophical utilitarianism in some respects.

It is also the case that a great deal of AI technology and technical practice is oriented around the vaguely utilitarian goals of ‘utility maximization’, though this is more about the mathematical operationalization of instrumental reason and less about a social commitment to utility as a political goal. AI practice and neoclassical economics are quite aligned in this way. If I were to put the point precisely, I’d say that the reality of AI, by exposing bounded rationality and its role in society, shows that arguments that negative rights are sufficient for utility-maximizing outcomes are naive, and so are a disappointment for liberals.
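
To make this concrete, here is a minimal sketch, in Python with made-up numbers, of what the mathematical operationalization of instrumental reason looks like: an agent choosing the action that maximizes expected utility under its probabilistic beliefs. Nothing in the formalism says whose utility is being maximized or why; that is precisely the gap between the engineering construct and utilitarianism as a political commitment.

    import numpy as np

    # Hypothetical decision problem: three actions, two possible world states.
    # 'beliefs' is the agent's subjective probability distribution over states.
    beliefs = np.array([0.7, 0.3])

    # utility[a, s]: payoff of action a in state s (illustrative numbers only).
    utility = np.array([
        [10, -5],   # action 0: high payoff, high risk
        [ 4,  4],   # action 1: safe
        [ 0,  8],   # action 2: pays off only in the unlikely state
    ])

    # Instrumental reason, operationalized: pick the action with the
    # highest expected utility.
    expected_utility = utility @ beliefs
    best_action = int(np.argmax(expected_utility))
    print(expected_utility)  # [5.5 4.  2.4]
    print(best_action)       # 0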

I was pleased that Mearsheimer brought up what he calls ‘liberal idealism’ in his book, despite its being perhaps a digression from his broader points. I have wondered how to place my own work, which draws heavily on Helen Nissenbaum’s theory of Contextual Integrity (CI), which is in turn influenced by the work of Michael Walzer. CI is based on a view of society as composed of separable spheres, with distinct functions and internally meaningful social goods, which should not be directly exchanged or compared. Walzer has been called a communitarian. I suggest that CI might be best seen as a variation of liberal idealism, in that it orients ethics towards a view of society as an idealized organic unity.

If the present reality of AI is so disappointing, then we must try to imagine a better ideal, and work our way towards it. I’ve found myself reading more and more work, such as that of Felix Adler and Alain Badiou, that advocates for the need for an ideal model of society. What we are currently missing is a good computational model of such a society, one which could do for idealism what neoclassical economics did for liberalism: namely, create a blueprint for a policy and science of its realization. If we were to apply AI to the problem of ethics, it would be good to use it this way.

About ethics and families

Most of the great historical philosophers did not have children.

I can understand why. For much of my life, I’ve been propelled by a desire to understand certain theoretical fundamentals of knowledge, ethics, and the universe. No doubt this has led me to become the scientist I am today. Since becoming a father, I have less time for these questions. I find myself involved in the more mundane details of life, and beginning to envy those in what I had previously considered the most banal professions. Fatherhood involves a practical responsibility that comes front-and-center, displacing youthful ideals and speculations.

I’m quite proud to now be working on what are, for me, rather applied problems. But these problems have deep philosophical roots, and I enjoy the thought that I will one day, as a much older man, be able to write a mature philosophy. For now, I would like to jot down a few notes about how my philosophy has changed.

I write this now because my work is intersecting with research done by folks who I know are profoundly ethically motivated people. My work on what is prosaically called “technology policy” is crossing into theoretical territory currently occupied by AI Safety researchers of the rationalist or Effective Altruist vein. I’ve encountered these folks before and respect their philosophical rigor, though I’ve never quite found myself in agreement with them. I continue to work on problems in legal theory as well, which always involves straddling the gap between consequentialism and deontological ethics. My more critical colleagues may be skeptical of my move towards quantitative economic methods, as the latter are associated with a politics that has been accused of lacking integrity. In short, I have several reasons to want to explain, to myself at least, why I’m working on the problems I’ve chosen, at least as a matter of my own philosophical trajectory.

So first, a point about logic. The principle of non-contradiction imposes a certain consistency and rigor on thought and encourages a form of universalism in theory and ethics. The internal consistency of the Kantian transcendental subject is the first foundation for deontological ethics. However, owing to what are essentially the limitations of bounded rationality, this gives way in later theory to Habermasian discourse ethics. The internal consistency of the mind is replaced with the condition that to be involved in communicative action is to strive for agreement. Norms form from disinterested communications that collect and transcend the perspectival limits of the deliberators. In theory.

In practice, disinterested communication is all but impossible, and communicative competence is hard to find. At the time of this writing, my son does not yet know how to talk. But he communicates, and we do settle on norms, however transitory. The other day we established that he is not allowed to remove dirt from the big pot with the ficus elastica and deposit it in other rooms of the house. This is a small accomplishment, but it highlights how unequal rationality, competence, and authority are not a secondary social aberration. They are a primary condition of life.

So much for deontology. Consequentialist ethics does not fare much better. Utility has always been a weakly theorized construct. In modern theory, it has been mathematized into something substantively meaningless. It serves mainly to describe behavior, rather than to explain it; it provides little except a just-so story for a consumerist society which is, sure enough, best at consuming itself. Attempts to link utility to something like psychological pleasure, as was done in the olden days, lead to bizarre conclusions. Parents, studies say, are not as happy as those without children. So why bother?

Nietzsche was a fierce critic of both Kantian deontological ethics and facile British utilitarianism. He argued that in the face of the absurdity of both systems, the philosopher had to derive new values from the one principle that they could not, logically, deny: life itself. He believed that a new ethics could be derived from the conditions of life, which for him was a process of overcoming resistance in pursuit of other (perhaps arbitrary) goals. Suffering, for Nietzsche, was not a blemish on life; rather, life is sacred enough to justify monstrous amounts of suffering.

Nietzsche went insane and died before he could finish his moral project. He didn’t have kids. If he had, maybe he would have come to some new conclusions about the basis for ethics.

In my humble opinion and limited experience thus far, fatherhood is largely about working to maintain the conditions of life for one’s family. Any attempt at universalism that does not extend to one’s own offspring is a practical contradiction when one considers how one was once a child. The biological chain of being is direct, immediate, and resource intensive in a way too little acknowledged in philosophical theory.

In lieu of individual utility, the reality of family highlights the priority of viability, or the capacity of a complex, living system to maintain itself and its autonomy over time. The theory of viability was developed in the 20th century through the field of cybernetics — for example, by Stafford Beer — though it was never quite successfully formulated or integrated into the now hegemonic STEM disciplines. Nevertheless, viability provides a scientific criterion by which to evaluate social meaning and ethics. I believe that there is still tremendous potential in cybernetics as an answer to longstanding philosophical quandaries, though to truly capture this value certain mathematical claims need to be fleshed out.

However, an admission of the biological connection between human beings cannot eclipse economic realities that, like it or not, have structured human life for thousands of years. And indeed, in these early days of child-rearing, I find myself ill-equipped to address all of my son’s biological needs relative to my wife and instead have a comparative advantage in the economic aspects of his, our, lives. And so my current work, which involves computational macroeconomics and the governance of technology, is in fact profoundly personal and of essential ethical importance. Economics has a reputation today for being a technical and politically compromised discipline. We forget that it was originally, and maybe still is, a branch of moral philosophy deeply engaged with questions of justice precisely because it addresses the conditions of life. This ethical imperative persists despite, or indeed because of, its technical complexity. It may be where STEM can address questions of ethics directly. If only it had the right tools.

In summary, I see promise in the possibility of computational economics, if inspired by some currently marginalized ideas from cybernetics, in satisfactorily addressing some perplexing philosophical questions. My thirsting curiosity, at the very least, is slaked by daily progress along this path. I find in it the mathematical rigor I require. At the same time, there is space in this work for grappling with the troublingly political, including the politics of gender and race, which are both of course inextricably tangled with the reality of families. What does it mean, for the politics of knowledge, if the central philosophical unit and subject of knowledge is not the individual, or the state, or the market, but the family? I have not encountered even the beginning of an answer in all my years of study.

Hildebrandt (2013) on double contingency in Parsons and Luhmann

I’ve tried to piece together double contingency before, and am finding myself re-encountering these ideas in several projects. I just now happened on this very succinct account of double contingency in Hildebrandt (2013), which I wanted to reproduce here.

Parsons was less interested in personal identity than in the construction of social institutions as proxies for the coordination of human interaction. His point is that the uncertainty that is inherent in the double contingency requires the emergence of social structures that develop a certain autonomy and provide a more stable object for the coordination of human interaction. The circularity that comes with the double contingency is thus resolved in the consensus that is consolidated in sociological institutions that are typical for a particular culture. Consensus on the norms and values that regulate human interaction is Parsons’s solution to the problem of double contingency, and thus explains the existence of social institutions. As could be expected, Parsons’s focus on consensus and his urge to resolve the contingency have been criticized for its ‘past-oriented, objectivist and reified concept of culture’, and for its implicitly negative understanding of the double contingency.

This paragraph says a lot: about “the problem” posed by “the double contingency”, about the possibility of a solution through consensus around norms and values, and about the rejection of Parsons. It is striking that in the first pages of this article, Hildebrandt begins by challenging “contextual integrity” as a paradigm for privacy (a nod, if not a direct reference, to Nissenbaum (2009)), astutely pointing out that this paradigm makes privacy a matter of delinking data so that it is not reused across contexts. Nissenbaum’s contextual integrity theory depends rather critically on consensus around norms and values; the appropriateness of information norms is a feature of sociological institutions accountable ultimately to shared values. The aim of Parsons, and to some extent also Nissenbaum, is to remove the contingency by establishing reliable institutions.

The criticism of Parsons as being ‘past-oriented, objectivist and reified’ is striking. It opens the question whether Parsons’s concept of culture is too past-oriented, or whether some cultures, more than others, may be more past-oriented, rigid, or reified. Consider a continuum of sociological institutions ranging from the rigid, formal, bureaucratized, and traditional to the flexible, casual, improvisational, and innovative. One extreme of this continuum is better conceptualized as “past-oriented” than the other. Furthermore, when cultural evolution becomes embedded in infrastructure, no doubt that culture is more “reified”, not just conceptually but actually, via its transformation into durable and material form. That Hildebrandt offers this criticism of Parsons perhaps foreshadows her later work about the problems of smart information communication infrastructure (Hildebrandt, 2015). Smart infrastructure poses, to those with this orientation, a problem: it reduces double contingency by being, in fact, a reification of sociological institutions.

“Reification” is a pejorative word in sociology. It refers to a kind of ideological category error with unfortunate social consequences. The more positive view of this kind of durable, even material, culture would be found in Habermas, who would locate legitimacy precisely in the process of consensus. For Habermas, the ideals of legitimate consensus through discursively rational communicative actions finds its imperfect realization in the sociological institution of deliberative democratic law. This is the intellectual inheritor of Kant’s ideal of “perpetual peace”. It is, like the European Union, supposed to be a good thing.

So what about Brexit, so to speak?

Double contingency returns with a vengeance in Luhmann, who famously “debated” Habermas (a more true follower of Parsons), and probably won that debate. Hildebrandt (2013) discusses:

A more productive understanding of double contingency may come from Luhmann (1995), who takes a broader view of contingency; instead of merely defining it in terms of dependency he points to the different options open to subjects who can never be sure how their actions will be interpreted. The uncertainty presents not merely a problem but also a chance; not merely a constraint but also a measure of freedom. The freedom to act meaningfully is constraint [sic] by earlier interactions, because they indicate how one’s actions have been interpreted in the past and thus may be interpreted in the future. Earlier interactions weave into Luhmann’s (1995) emergent social systems, gaining a measure of autonomy — or resistance — with regard to individual participants. Ultimately, however, social systems are still rooted in double contingency of face-to-face communication. The constraints presented by earlier interactions and their uptake in a social system can be rejected and renegotiated in the process of anticipation. By figuring out how one’s actions are mapped by the other, or by social systems in which one participates, room is created to falsify expectations and to disrupt anticipations. This will not necessarily breed anomy, chaos or anarchy, but may instead provide spaces for contestation, self-definition in defiance of labels provided by the expectations of others, and the beginnings of novel or transformed social institutions. As such, the uncertainty inherent in the double contingency defines human autonomy and human identity as relational and even ephemeral, always requiring vigilance and creative invention in the face of unexpected or unreasonably constraining expectations.

Whereas Nissenbaum’s theory of privacy is “admittedly conservative”, Hildebrandt’s is grounded in a defense of freedom, invention, and transformation. If Nissenbaum and Hildebrandt were more inclined to contest each other directly, this might be privacy scholarship’s equivalent of the Habermas/Luhmann debate. However, this is unlikely to occur because the two scholars operate in different legal systems, reducing the stakes of the debate.

We must assume that Hildebrandt, in 2013, would have approved of Brexit, the ultimate defiance of labels and expectations against a Habermasian bureaucratic consensus. Perhaps she also, as would be consistent with this view, has misgivings about the extraterritorial enforcement of the GDPR. Or maybe she would prefer a global bureaucratic consensus that agreed with Luhmann; but this is a contradiction. This psychologistic speculation is no doubt unproductive.

What is more productive is the pursuit of a synthesis between these poles. As a liberal society, we would like our allocation of autonomy; we often find ourselves in tension with the bureaucratic systems that, according to rough consensus and running code, are designed to deliver to us our measure of autonomy. Those who overstep their allocation of autonomy, such as those who participated in the most recent Capitol insurrection, are put in prison. Freedom coexists with law and even order in sometimes uncomfortable ways. There are contests; they are often ugly at the time, however much they are glorified retrospectively by their winners as a form of past-oriented validation of the status quo.

References

Hildebrandt, M. (2013). Profile transparency by design?: Re-enabling double contingency. Privacy, due process and the computational turn: The philosophy of law meets the philosophy of technology, 221-46.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Double contingency and technology

One of the best ideas to come out of the social sciences is “double contingency”: the fact that two people engaged in communication are in a sense unpredictable to each other. That mutual unpredictability is an element of what it means to be in communication with another.

The most recent articulation of this idea is from Luhmann, who was interested in society as a system of communication. Luhmann is not focused on the phenomenology of the participants in a social system; in a sense, he looks at social systems the way an analyst might look at communications data from a social media site. The social system is the set of messages. Luhmann is an interesting figure in intellectual history in part because he is the one who made the work of Maturana and Varela officially part of the German philosophical canon. That’s a big deal, as Maturana and Varela’s intellectual contributions–around the idea of autopoiesis, for example–were tremendously original, powerful, and good.

“Double contingency” was also discussed, one reads, by Talcott Parsons. This does not come up often because at some point the discipline of Sociology just decided to bury Parsons.

Double contingency comes up in interesting ways in European legal scholarship about technology. Luhmann, a dense German writer, is not read much in the United States, despite his being essentially right about things. Hildebrandt (2019) uses double contingency in her perhaps perplexingly framed argument for the “incomputability” of human personhood. Teubner (2006) makes a somewhat different but related argument about agency, double contingency, and electronic agents.

Hildebrandt and Teubner make for an interesting contrast. Hildebrandt is interested in the sanctity of humanity qua humanity, and in particular in privacy defined as the freedom to be unpredictable. This is an interesting inversion for European phenomenological philosophy. Recall that originally, in European phenomenology, human dignity was tied to autonomy, but autonomy depended on universalized rationality, with the implication that the most important thing about human dignity was that one followed universal moral rules (Kant). Hildebrandt is almost staking out the opposite position: that Arendtian natality, the unpredictability of being an original being from birth, is the source of one’s dignity. Paradoxically, Hildebrandt argues both that humanity has this natality essentially, so that claims that predictive technology might truly know the data subject are hubris, and that the use of these predictive technologies is a threat to natality unless their use is limited by data protection laws that ensure the contestability of automated decisions.

Teubner (2006) takes a somewhat broader and, in my view, more self-consistent view. Grounding his argument firmly in Luhmann and Latour, Teubner is interested in the grounds of legally recognized (as opposed to ontologically or philosophically sanctified) personhood. And, he finds, the conditions of personhood can apply to many things besides humans! “Black box, double contingency, and addressability”, the three fictions on which the idea of personhood depends, can apply to corporations and electronic agents as well as to individual humans. This provides a kind of consistency and rationale for why we allow these kinds of entities to engage in legal contracts with each other. The contract, it is theorized, is a way of managing uncertainty, reducing the amount of contingency in the inherently “double contingency”-laden relationship.

Something of the old Kantian position comes through in Teubner, in that contracts and the law are regulatory. However, Teubner, like Nissenbaum, is ultimately a pluralist. Teubner writes about the multiple “ecologies” in which the subject is engaged, and to which they are accountable in different modalities. So the person, qua economic agent, is addressed in terms of their preferences. But the person, qua legal subject, is addressed in terms of their embodiment of norms. The “whole person” does not appear in any singular ecology.

I’m sympathetic with the Teubnerian view here, perhaps in contrast with Hildebrandt’s view, in the following sense: while there may indeed be some intrinsic indeterminacy to an individual, this indeterminacy is meaningless unless it is also situated in (some) social ecology. However, what makes a person contingent vis-à-vis one ecology is precisely that only a fragment of them is available to that ecology. The contingency with respect to the first ecology is a consequence of their simultaneous presence within other ecologies. The person is autonomous, and hence also unpredictable, because of this multiplied, fragmented identity. Teubner, I think correctly, concludes that non-human agents have a limited form of personhood, but as these agents are even more fragmented than humans, they are persons only in an attenuated sense.

I’d argue that Teubner helpfully backfills how personhood is socially constructed and accomplished, as opposed to guaranteed from birth, in a way that complements Hildebrandt nicely. In the 2019 article cited here, Hildebrandt argues for the contestability of automated decisions as a means of preserving privacy. Teubner’s theory suggests that personhood–as participant in double contingency, as a black box–is threatened rather by context collapse, or the subverting of the various distinct social ecologies into a single platform in which data is shared ubiquitously between services. This provides a normative, universalist defense of keeping contexts separate (which in a different article Hildebrandt connects to purpose binding in the GDPR), something which is never quite accomplished in, for example, Nissenbaum’s contextual integrity.

References

Hildebrandt, Mireille. “Privacy as protection of the incomputable self: From agnostic to agonistic machine learning.” Theoretical Inquiries in Law 20.1 (2019): 83-121.

Teubner, Gunther. “Rights of non‐humans? Electronic agents and animals as new actors in politics and law.” Journal of Law and Society 33.4 (2006): 497-521.

System 2 hegemony and its discontents

Recent conversations have brought me back to the third rail of different modalities of knowledge and their implications for academic disciplines. God help me. The chain leading up to this is: a reminder of how frustrating it was trying to work with social scientists who methodologically reject the explanatory power of statistics; an intellectual encounter with a 20th century “complex systems” theorist who also didn’t seem to understand statistics; and the slow realization, bubbling up for me over the years, that I probably need to write an article or book about the phenomenology of probability, because I can’t find anything satisfying written about it.

The hypothesis I am now entertaining is that probabilistic or statistical reasoning is the intellectual crux, disciplinarily. What we now call “STEM” is all happy to embrace statistics as its main mode of empirical verification. This includes the use of mathematical proof for “exact” or a priori verification of methods. Sometimes the use of statistics is delayed or implicit; there is qualitative research that is totally consistent with statistical methods. But the key to this whole approach is that the fields, in combination, are striving for consistency.

But not everybody is on board with statistics! Why is that?

One reason may be that statistics is difficult to learn and execute. Doing probabilistic reasoning correctly is at times counter-intuitive. That means that, quite literally, it can make your head hurt to think about it.
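
A standard illustration of this counter-intuitiveness is the base rate fallacy. Below is a minimal worked example in Python; the test accuracy and prevalence numbers are invented for illustration.

    # Base rate fallacy: a test that is 99% sensitive with a 1% false
    # positive rate, for a condition affecting 1 in 1000 people.
    # Intuition says a positive result means you probably have the
    # condition; Bayes' rule says otherwise.
    prior = 0.001            # P(condition)
    sensitivity = 0.99       # P(positive | condition)
    false_positive = 0.01    # P(positive | no condition)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive
    print(round(posterior, 3))  # ~0.09: under a 10% chance, despite the test

Getting this right takes deliberate effort; the intuitive answer is off by an order of magnitude.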

There is a lot of very famous empirical cognitive psychology that has explored this topic in depth. The heuristics and biases research program of Kahneman and Tversky was critical for showing that human behavior rarely accords with decision-theoretic models of mathematical, probabilistic rationality. An intuitive, “fast”, prereflective form of thinking, (“System 1”) is capable of making snap judgments but is prone to biases such as the availability heuristic and the representativeness heuristic.

A couple of general comments can be made about System 1. (These are taken from Tetlock’s review of this material in Superforecasting.) First, a hallmark of System 1 is that it takes whatever evidence it is working with as given; it never second-guesses it or questions its validity. Second, System 1 is fantastic at providing verbal rationalizations and justifications of anything that it encounters, even when these can be shown to be disconnected from reality. Many colorful studies of split-brain cases, but also many other lab experiments, show the willingness people have to make up stories to explain anything, and their unwillingness to say, “this could be due to one of a hundred different reasons, or a mix of them, and so I don’t know.”

The cognitive psychologists also describe a System 2 cognitive process that is more deliberate and reflective. Presumably, this is the system that is sometimes capable of statistical or otherwise logical reasoning. And a big part of statistical reasoning is questioning the source of your evidence. A robust application of System 2 reasoning is capable of overcoming System 1’s biases. At the level of institutional knowledge creation, the statistical sciences consist mainly of the formalized, shared results of System 2 reasoning.

Tetlock’s work, from Expert Political Judgment and on, is remarkable for showing that deference to one or the other cognitive system is to some extent a robust personality trait. Famously, those of the “hedgehog” cognitive style, who apply System 1 and a simplistic theory of the world to interpret everything they experience, are especially bad at predicting the outcomes of political events (what are certainly the results of ‘complex systems’), whereas the “fox” cognitive style, which is more cautious about considering evidence and coming to judgments, outperforms them. It seems that Tetlock’s analysis weighs in favor of System 2 as a way of navigating complex systems.

I would argue that there are academic disciplines, especially those grounded in Heideggerian phenomenology, that see the “dominance” of institutions (such as academic disciplines) that are based around accumulations of System 2 knowledge as a problem or threat.

This reaction has several different guises:

  • A simple rejection of cognitive psychology, which has exposed the System 1/System 2 distinction, as “behaviorism”. (This obscures the way cognitive psychology was a major break away from behaviorism in the 50’s.)
  • A call for more “authentic experience”, couched in language suggesting ownership or the true subject of one’s experience, contrasting this with the more alienated forms of knowing that rely on scientific consensus.
  • An appeal to originality: System 2 tends to converge; my System 1 methods can come up with an exciting new idea!
  • The interpretivist methodological mandate for anthropological sensitivity to “emic”, or directly “lived experience”, of research subjects. This mandate sometimes blurs several individually valid motivations, such as: when emic experience is the subject matter in its own right, but (crucially) with the caveat that the results are not generalizable; when emic sensitivity is identified via the researcher’s reflexivity as a condition for research access; or when the purpose of the work is to surface or represent otherwise underrepresented views.

There are ways to qualify or limit these kinds of methodologies or commitments that make them entirely above reproach. However, under these limits, their conclusions are always fragile. According to the hegemonic logic of System 2 institutions, a consensus of those thoroughly considering the statistical evidence can always supersede the “lived experience” of some group or individual. This is, at the methodological level, simply the idea that while we may make theory-laden observations, when those theories are disproved, those observations are invalidated as being influenced by erroneous theory. Indeed, mainstream scientific institutions take this kind of procedural objectivity as their duty. There is no such thing as science unless a lot of people are often being proven wrong.

This provokes a great deal of grievance. “Who made scientists, an unrepresentative class of people and machines disconnected from authentic experience, the arbiter of the real? Who are they to tell me I am wrong, or my experiences invalid?” And this is where we start to find trouble.

Perhaps most troubling is how this plays out at the level of psychodynamic politics. To have one’s lived experiences rejected, especially those lived experiences of trauma, and especially when those experiences are rejected wrongly, is deeply disturbing. One of the more mighty political tendencies of recent years has been the idea that whole classes of people are systematically subject to this treatment. This is one reason, among others, for influential calls for recalibrating the weight given to the experiences of otherwise marginalized people. This is what Furedi calls the therapeutic ethos of the Left. This is slightly different from, though often conflated with, the idea that recalibration is necessary to allow in more relevant data that was being otherwise excluded from consideration. This latter consideration comes up in a more managerialist discussion of creating technology that satisfies diverse stakeholders (…customers) through “participatory” design methods. The ambiguity of the term “bias”–does it mean a statistical error, or does it mean any tendency of an inferential system at all?–is sometimes leveraged to accomplish this conflation.

It is in practice very difficult to disentangle the different psychological motivations here. This is partly because they are deeply personal and mixed even at the level of the individual. (Highlighting this is why I have framed this in terms of the cognitive science literature). It is also partly because these issues are highly political as well. Being proven right, or wrong, has material consequences–sometimes. I’d argue: perhaps not as often as it should. But sometimes. And so there’s always a political interest, especially among those disinclined towards System 2 thinking, in maintaining a right to be wrong.

So it is hypothesized (perhaps going back to Lyotard) that at an institutional level there’s a persistent heterodox movement that rejects the ideal of communal intellectual integrity. Rather, it maintains that the field of authoritative knowledge must contain contradictions and disturbances of statistical scientific consensus. In Lyotard’s formulation, this heterodoxy seeks “legitimation by paralogy”, which suggests that its telos is at best a kind of creative intellectual emancipation from restrictive logics, generative of new ideas, but perhaps at worst a heterodoxy for its own sake.

This tendency has an uneasy relationship with the sociopolitical motive of a more integrated and representative society, which is often associated with the goal of social justice. If I understand these arguments correctly, the idea is that, in practice, legitimized paralogy is a way of giving the underrepresented a platform. This has the benefit of visibly increasing representation. Here, paralogy is legitimized as a means of affirmative action, but not as a means of objectively improving system performance.

This is a source of persistent difficulty and unease, as the paralogical tendency is never capable of truly emancipating itself, but rather, in its recuperated form, is always-already embedded in a hierarchy that it must deny to its initiates. Authenticity is subsumed, via agonism, to a procedural objectivity that proves it wrong.

Looking for references: phenomenology of probability

A number of lines of inquiry have all been pointing in the same direction for me. I now have a question and I’m on the lookout for scholarly references on it. I haven’t been able to find anything useful through my ordinary means.

I’m looking for a phenomenology of probability.

Hopefully the following paragraphs will make it clearer what I mean.

By phenomenology, I mean a systematic account (-ology) of lived experience (phenomen-). I’m looking for references especially in the “cone” of influences on Merleau-Ponty, and the “cone” of those influenced by Merleau-Ponty.

By probability, I mean the whole gestalt of uncertainty, expectation, and realization that is normally covered by the mathematical subject. The simplest example is the experience of tossing a coin. But there are countless others; this is a ubiquitous mode of experience.

There is at least some indication that this phenomenon is difficult to provide a systematic account for. Probabilistic reasoning is not a very common skill. Perhaps the best account of this that I can think of is in Philip Tetlock’s Superforecasting, in which he reports that a large proportion of people are able to intuit only two kinds of uncertainty (“probably will happen” or “probably won’t happen”), while another portion can reason with three (“probably will”, “probably won’t”, and “I don’t know”). For some people, asking for graded expectations (“I think there’s a 30% chance it will happen”) is more or less meaningless.

Nevertheless, all the major quantitative institutions–finance, telecom, digital services, insurance, the hard sciences, etc.–thrive on probabilistic calculations. Perhaps probabilistic competence is concentrated in these institutions.

The other consideration leading towards the question of a phenomenology of probability is the interpretation of mathematical probability theory. As is well known, the same mathematics can be interpreted in multiple ways. There is an ‘objective’, frequentist interpretation, according to which probability is the frequency of events in the world. But with the rise of machine learning, ‘subjectivist’ or Bayesian interpretations became much more popular. Bayesian probability is a calculus of rational subjective expectations, and of the transformation of those expectations according to new evidence.
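
To make the subjectivist interpretation concrete, here is a minimal sketch of Bayesian updating for the coin toss mentioned above, using the standard Beta-Bernoulli conjugate model; the toss sequence is invented for illustration.

    # Bayesian updating of a belief about a coin's bias (Beta-Bernoulli).
    # Probability here is not a long-run frequency in the world but a
    # rational expectation, revised with each new observation.
    alpha, beta = 1.0, 1.0              # uniform prior over the bias

    tosses = [1, 1, 0, 1, 1, 1, 0, 1]   # 1 = heads, 0 = tails (made up)
    for t in tosses:
        alpha += t
        beta += 1 - t

    # Posterior expectation that the next toss lands heads.
    print(alpha / (alpha + beta))       # 0.7

The frequentist reads the 0.7 as a claim about long-run frequencies of tosses; the subjectivist reads it as a rational expectation about the next toss.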

So far in my studies and research, I’ve never encountered a synthesis of Merleau-Pontean phenomenology with the subjectivist interpretation of probability. This is somewhat troubling.

Is there a treatment of this anywhere?

Instrumental realism — a few key points

Continuing my reading of Ihde (1991), I’m getting to the meat of his argument, where he compares and contrasts his instrumental realist position with that of two contemporaries: Heelan (1989), who Ihde points out holds doctorates in both physics and philosophy and so might be especially capable of philosophizing about physics praxis, and Hacking (1983), who is from my perspective the most famous of the three.

Ihde argues that he, Hacking, and Heelan are all more or less instrumental realists, but that Ihde and Heelan draw more from the phenomenological tradition, which emphasizes embodied perception and action, whereas Hacking is more in the Anglo-American ‘analytic’ tradition of starting from analysis of language. Ihde’s broader argument in the book is one of convergence: he uses the fact that many different schools of thought have arrived at similar conclusions to support the idea that those conclusions are true. That makes perfect sense to me.

Broadly speaking, instrumental realism is a position that unites philosophy of science with philosophy of technology to argue that:

  • That science is able to grasp, understand, and theorize the real
  • That this reality is based on embodied perception and praxis. Or, in the more analytic framing, on observation and experiment.
  • That scientific perception and praxis is able to go “beyond” normal, every-day perception and praxis because of its use of scientific instruments, of which the microscope is a canonical example.
  • This position counters many simple relativistic threats to scientific objectivity and integrity, but does so by placing emphasis on scientific tooling. Science advances, mainly, by means of the technologies and infrastructures that it employs.
  • This position is explicitly embodied and materialist, counter to many claims that scientific realism depends on its being disembodied or transcendental.

This is all very promising though there are nuances to work out. Ihde’s study of his contemporaries is telling.

Ihde paints Heelan as a compelling thinker on this topic, though a bit blinkered by his emphasis on physics as the true or first science. Heelan’s view of scientific perception is that it is always both perception and measurement. Because Heelan is what Ihde calls a “Euro-American” (which I think is quite funny), Ihde can describe him as saying that scientific observation is both a matter of perception-praxis and a matter of hermeneutics–by which I mean the study of a text in community with others or, to use the more Foucauldean term, “discourse”. Measurement, somewhat implicitly here, is a kind of standardized way of “reading”. Ihde makes a big deal out of the subtle differences between “seeing” and “reading”.

To the extent that “discourse”, “hermeneutics”, “reading”, etc. imply a weakness of the scientific standpoint, they weigh against the ‘realism’ of instrumental realism. However, the term “measurement” is telling: differences between units of measurement of length, mass, time, etc. do not challenge the veracity of the claim “there are 24 hours in a day”, because translating between units is trivial.

Ihde characterizes Hacking as a fellow traveler, converging on instrumental realism when he breaks from his own analytic tradition to point out that experiment is one of the most important features of science, and that experiment depends on and is advanced by instrumentation. Ihde writes that Hacking is quite concerned about “(a) how an instrument is made, particularly with respect to theory-driven design, and (b) the physical processes entailed in the “how” or conditions of use.” Which makes perfect sense to me–that’s exactly what you’d want to scrutinize if you’re taking the ‘realism’ in instrumental realism seriously.

Ihde’s positions here, like those of his contemporaries, seem perfectly reasonable to me. I’m quite happy to adopt this view; it corresponds to conclusions I’ve reached in my own reading and practice, and it’s nice to have a solid reference and term for it.

The questions that come up next are how instrumental realism applies to today’s controversies about science and technology. Just a handful of notes here:

  • I work quite a bit with scientific software. It’s quite clear to me that scientific software development is a major field of scientific instrumentation today. Scientists “see” and “do” via computers and software controls. This has made “data science” a core aspect of 21st century science in general, as it’s the part of science that is closest to the instrumentation. This confirms my long-held view that scientific software communities are the groups to study if you’re trying to understand the sociology of science today.
  • On the other hand, it’s becoming increasingly clear in scientific practice that you can’t do software-driven science without the Internet and digital services, and these are now controlled by an oligopoly of digital services conglomerates. The hardware infrastructure–data centers, caching services, telecom broadly speaking, cloud computing hubs–goes far beyond the scientific libraries. Scientific instrumentation depends critically now on mass corporate IT.
  • These issues are compounded by how Internet infrastructure–now privately owned and controlled for all intents and purposes–is also the instrument of so much social science research. Don’t get me started on social media platforms as research tools. For me, the best resource on this is Tufekci, 2014.
  • The most hot-button, politically charged critique in the philosophy of science space is that science and/or data science and/or AI as it is currently constituted is biased because of who is represented in these research communities. The position being contested is the idea that AI/data science/computational social science etc. is objective because it is designed in a way that aligns with mathematical theory.
    • I would be very interested to read something connecting postcolonial, critical race, and feminist AI/data science practices to instrumental realism directly. I think these groups ought to be able to speak to each other easily, since the instrumental realists from the start are interested in the situated embodiment of the observer.
    • On the other hand, I think it would be difficult for the critical scholars to find fault in the “hard core” of data science/computing/AI technologies/instruments because, truly, they are designed according to mathematical theory that is totally general. This is what I think people mean when they say AI is objective because it’s “just math”. AI/data science praxis makes you sensitive to which aspects of the tooling are part of the core (libraries of algorithms, based on vetted mathematical theorems) and which are more incidental (training data sets, for example, or particular parameterizations of the general algorithms); a sketch of this distinction follows this list. If critical scholars focused on these incidental parts of the scientific “stack”, and didn’t make sweeping comments that sound like they implicate the “core”, which we have every reason to believe is quite solid, they would probably get less resistance.
    • Then again, if science is both a matter of perception-praxis and hermeneutics, then maybe the representational concerns are best left on the hermeneutic side of the equation.
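
As referenced in the list above, here is a minimal, hypothetical sketch of the core/incidental distinction in a typical data science workflow: the fitting algorithm implements general, vetted mathematics, while the data and parameterization are contingent inputs.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # "Core": LogisticRegression implements a general mathematical
    # procedure (regularized maximum likelihood estimation); the math
    # is the same regardless of who uses it or on what.
    model = LogisticRegression(C=1.0)   # C is an incidental parameterization

    # "Incidental": the training data encode contingent, situated choices
    # about what to measure and whom to include (invented here).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                          # made-up features
    y = (X[:, 0] + rng.normal(size=100) > 0).astype(int)   # made-up labels

    model.fit(X, y)
    print(model.coef_)   # what is learned reflects the data, not the math

Critique aimed at the incidental layer lands; critique that sounds like it is aimed at the estimation math itself invites resistance.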

References

Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge University Press.

Heelan, P. A. (1989). Space-perception and the philosophy of science. University of California Press.

Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology (Vol. 626). Indiana University Press.

Tufekci, Z. (2014, May). Big questions for social media big data: Representativeness, validity and other methodological pitfalls. In Eighth International AAAI Conference on Weblogs and Social Media.

Considering Agre: More Merleau-Ponty, less Heidegger, in technology design, please

I’ve had some wonderful interlocutors lately. One (in private, and therefore anonymously) has recommended Don Ihde’s postphenomenology of science. I’ve been reading and enjoying Ihde’s Instrumental Realism (1991) and finding it very fruitful. Ihde is influential in some contemporary European theories of the interaction between law and technology. Tapan Parikh has (on Twitter) asked me why I haven’t been engaging more with Agre (e.g., 1997). I’ve been reminded by him and others of work in “critical HCI”, a field I encountered a lot in graduate school, which has its roots in, perhaps, Suchman (1987).

I don’t like and have never liked critical HCI and have resented its pretensions of being “more ethical” than other fields of technological design and practice for many years. I state this as a psychological fact, not as an objective judgment of the field. This morning I’m taking a moment to meditate on why I feel this way, and what that means for my work.

Agre (1997) has some telling anecdotes about being an AI researcher at MIT and becoming disillusioned upon encountering phenomenology and ethnomethodological work. His problem began with a search for originality.

My college did not require me to take many humanities courses, or learn to write in a professional register, and so I arrived in graduate school at MIT with little genuine knowledge beyond math and computers. …

My lack of a liberal education, it turns out, was only half of my problem. Only much later did I understand the other half, which I attribute to the historical constitution of AI as a field. A graduate student is responsible for finding a thesis topic, and this means doing something new. Yet I spent much of my first year, and indeed the next couple of years after my time away, trying very hard in vain to do anything original. Every topic I investigated seemed driven by its own powerful internal logic into a small number of technical solutions, each of which had already been investigated in the literature. …

Often when I describe my dislike for e.g. Latour, people assume that I’m on a similar educational path to Agre’s: that I am a “technical person”, perhaps with a “mathematical mind”, that I’ve never encountered any material that would challenge what has now solidified as the STEM paradigm.

That’s a stereotype that does not apply to me. For better or for worse, I had a liberal arts undergraduate education with exposure to technical subjects, social sciences, and the humanities. My graduate school education was similarly interdisciplinary.

There are people today who advocate for critical HCI and design practices in the tradition of Suchman, Agre, and so on who have a healthy exposure to STEM education. There are also many who do not, and who employ this material as a kind of rear-guard action to treat any less “critical” work as intrinsically tainted with the same hubris that the AI field displayed in, say, the 80’s. This is ahistorical and deeply frustrating. These conversations tend to end when the “critical” scholar insists on the phenomenological frame–arguing either implicitly or explicitly that (post-)positivism is unethical in and of itself.

It’s worth tracing the roots of this line of reasoning. Often, variations of it are deployed rhetorically in service of the cause of bringing greater representation of marginalized people into the field of technical design. It’s somewhat ironic that, as Duguid (2012) helpfully points out, this field of “critical” technology studies, drawing variously on Suchman, Dreyfus, Agre, and ultimately Latour and Woolgar, is rooted ultimately in Heidegger. Heidegger’s affiliation with Nazism is well-known, boring, and in no way a direct refutation of the progressive deployments of critical design.

But back to Agre, who goes on to discuss his conversion to phenomenology. Agre’s essay is largely an account of his rejection of the project of technical creation as a goal.

… I was unable to turn to other, nontechnical fields for inspiration. … The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial — except that it reproduced the same technical schemata as the AI literature. …

… I was also continually noticing the many small transformations that my daily life underwent as a result of noticing these things. As my intuitive understanding of the workings of everyday life evolved, I would formulate new concepts and intermediate on them, whereupon the resulting spontaneous observations would push my understanding of everyday life even further away from the concepts that I had been taught. … It is hard to convey the powerful effect that this experience had upon me; my dissertation (Agre 1988), once I finally wrote it, was motivated largely by a passion to explain to my fellow AI people how our AI concepts had cut us off from an authentic experience of our own lives. I still believe this.

Agre here is connecting the hegemony of cognitive psychology and AI at the time he was writing to his realization that “authentic experience” had been “cut off”. This is so Heideggerean. Agre is basically telling us that he independently came to Heidegger’s conclusions because of his focus on “everyday life”.

This binary between “everyday life” or “lived experience” on the one hand and the practice of AI design on the other is repeated often by critical scholars today. Critical scholars with no practical experience in contemporary data science often assume that the AI of the 80’s is the same as machine learning practice today. This is an unsupported assumption directly contradicted by the lived experience of those who work in technical fields. Unfortunately, the success of the Heideggerean binary allows those whose lived experience is “not technical” to claim that their experience has a kind of epistemic or ethical priority, due to its “authenticity”, over more technical experience.

This is devastating for the discourse around the now ubiquitous and politically vital topic of the politics of technology. If people have to choose between either doing technical work or doing critical Heideggerean reflection on that work, then by definition all technical work is uncritical and therefore lacking in the je ne sais quoi that gives criticality its “ethical” allure. In my view, this binary is counterproductive. If “criticality” never actually meets technical practice, then it can never be a way to address problems caused by poor technical design. Rather, it can only be a form of institutional sublimation of problematic technical practices. The critical field is sustained by, parasitic on, bad technical design: if the technology were better, then the critical field would not be able to feed so successfully on the many frustrations and anxieties of those who encounter it.

Agre ultimately gives up on AI to go critical full time.

… My purpose here, though, is to describe how this experience led me into full-blown dissidence within the field of AI. … In order to find words for my newfound intuitions, I began studying several nontechnical fields. Most importantly, I sought out those people who claimed to be able to explain what is wrong with AI, including Hubert Dreyfus and Lucy Suchman. They, in turn, got me started reading Heidegger’s Being and Time (1961 [1927]) and Garfinkel’s Studies in Ethnomethodology (1984 [1967]). At first I found these texts impenetrable, not only because of their irreducible difficulty but also because I was still tacitly attempting to read everything as a specification for a technical mechanism. That was the only protocol of reading that I knew, and it was hard even to conceptualize the possibility of alternatives. (Many technical people have observed that phenomenological texts, when read as specifications for technical mechanisms, sound like mysticism. This is because Western mysticism, since the great spiritual forgetting of the later Renaissance, is precisely a variety of mechanism that posits impossible mechanisms.) My first intellectual breakthrough came when, for reasons I do not recall, it finally occurred to me to stop translating these strange disciplinary languages into technical schemata, and instead simply to learn them on their own terms.

What’s quite frustrating for somebody approaching this problem from a slightly broader liberal arts background than Agre’s is that he writes about encounters with only one of several different phenomenological traditions–the Heideggerean one–that have made it so successfully into American academic HCI.

This is where Don Ihde’s work is great: he explicitly engages a much wider swathe of the Continental canon. In doing so, he goes to the root of phenomenology, Husserl, and, I believe most significantly, Merleau-Ponty.

Merleau-Ponty’s Phenomenology of Perception is the kind of serious, monumental work that nobody in the U.S. bothers to read because it is difficult. When humanities education is a form of consumerism, it’s much more fun to read, I don’t know, Haraway. But as a theoretical work that combines the phenomenological tradition with empirical psychology in a way that is absolutely and always about embodiment–all the particularities of being a body and what that means for our experiences of the world–you can’t beat it.

Because Merleau-Ponty is engaged mainly with perception and praxis, rather than hermeneutics (the preoccupation of Heidegger), he is able to give a much more muscular account of lived experience with machines without having to dress it up in terminology about ‘cyborgs’. This excerpt from Merleau-Ponty, quoted by Ihde, is illustrative:

The blind man’s tool has ceased to be an object for him, and is no longer perceived for itself; its point has become an area of sensitivity, extending the scope and active radius of touch, and providing a parallel to sight. In the exploration of things, the length of the stick does not enter expressly as a middle term: The blind man is rather aware of it through the position of objects than the position of objects through it.

In my view, it’s Merleau-Ponty’s influence that most sets up Ihde to present a productive view of instrumental realism, based on the role of instruments in the perception and praxis of science. This is what we should be building on when we discuss the “philosophy of data science” and other software-driven research.

Dreyfus’s (1976) famous critique of AI drew heavily on Merleau-Ponty. Dreyfus is not brought up much in the critical literature any more because (a) many of his critiques were internalized by the AI community and led to new developments that do not fall prey to the same criticisms, (b) people are now building all kinds of embodied robots, and (c) the “Strong AI” program of building an AI that resembles a human mind is not what has been driving AI recently: industrial applications that scale far beyond the human mind are.

So it may be that Merleau-Ponty is not used as a phenomenological basis for studying AI and technology now because, while it succeeds as an account of lived experience, it does not imply that the literature of some more purely hermeneutic field of inquiry is separately able to underwrite the risks of technical practice. If instruments are an extension of the body, then the one who uses those instruments is responsible for them. That would imply, for example, that Zuckerberg is not an uncritical technologist who has built an autonomous system that is poorly designed because of the blind spots of engineering practice, but rather that he is the responsible actor leading the assemblage that is Facebook as an extension of himself.

Meanwhile, technical practice (I repeat myself) has changed. Agre laments that “[f]ormal reason has an unforgiving binary quality — one gap in the logic and the whole thing collapses — but this phenomenological language was more a matter of degree”. Indeed, when AI was developing along the lines of “formal reason” in the sense of axiomatic logic, this constraint would be frustrating. But in the decades since Agre was working, AI practice has become much more a “matter of degree”: it is highly statistical and probabilistic, depending on very broadly conceived spaces of representation that tune themselves based on many minute data points. Given the differences between “good old fashioned AI” based on logical representation and contemporary machine learning, it’s just bewildering when people raise these old critiques as if they were still meaningful and relevant to today’s practice. And yet the themes resurface again and again in the pitched battles of interdisciplinary warfare. The Heideggereans continue to renounce mathematics, formalism, technology, etc. as a practice in itself in favor of a vague humanism. There’s a new articulation of this agenda every year, under different political guises.
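
To make the contrast concrete, here is a minimal illustrative sketch in Python (my own, not Agre’s; the toy rule and the weights are purely hypothetical): a logic-based inference is all-or-nothing, one failed premise and it collapses, while a statistical model returns a graded degree of belief whose parameters would ordinarily be tuned by data.

```python
import math

# A GOFAI-style rule: binary. One failed premise and the inference collapses.
# Encodes: bird(x) AND NOT penguin(x) -> flies(x)
def rule_based_judgment(facts: set) -> bool:
    return "bird" in facts and "penguin" not in facts

# A statistical model instead outputs a degree of belief, tuned continuously
# by weights that in practice are fit to many minute data points.
def logistic_judgment(features, weights):
    score = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))  # a probability strictly between 0 and 1

print(rule_based_judgment({"bird"}))                         # True: all-or-nothing
print(round(logistic_judgment([1.0, 0.3], [2.0, -1.5]), 3))  # 0.825: a matter of degree
```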

It is telling that Agre, who began his journey trying to understand how to make a contribution to a technical field, winds up convincing himself that there are a lot of great academic papers to be written with no technical originality or relevance.

When I tried to explain these intuitions to other AI people, though, I quickly discovered that it is useless to speak nontechnical languages to people who are trying to translate these languages into specifications for technical mechanisms. This problem puzzled me for years, and I surely caused much bad will as I tried to force Heideggerian philosophy down the throats of people who did not want to hear it. Their stance was: if your alternative is so good then you will use it to write programs that solve problems better than anybody else’s, and then everybody will believe you. Even though I believe that building things is an important way of learning about the world, nonetheless I knew that this stance was wrong, even if I did not understand how.

I now believe that it is wrong for several reasons. One reason is simply that AI, like any other field, ought to have a space for critical reflection on its methods and concepts. Critical analysis of others’ work, if done responsibly, provides the field with a way to deepen its means of evaluating its research. It also legitimizes moral and ethical discussion and encourages connections with methods and concepts from other fields. Even if the value of critical reflection is proven only in its contribution to improved technical systems, many valuable criticisms will go unpublished if all research papers are required to present new working systems as their final result.

This point is echoed almost ten years later by another importer of ethnomethodological methods into technical academia, Dourish (2006). Today, there are academic footholds for critical work about technology, and some people write a lot of papers about it. More power to them, I guess. There is now a rarefied field of humanities scholarship in this tradition.

But when social relations truly are mediated by technology in myriad ways, it is perhaps not wrong to pursue lines of work that have more practical relevance. Doing this requires, in my view, a commitment to mathematical rigor and getting one’s hands “dirty” with the technology itself, when appropriate. I’m quite glad that there are venues to pursue these lines now. I am somewhat disappointed and annoyed that I have to share these spaces with Heideggereans, whom I just don’t see as adding much beyond the recycling of outdated tropes.

I’d be very excited to read more works that engage with Merleau-Ponty and work that builds on him.

References

Agre, P. E. (1997). Lessons learned in trying to reform AI. In Social science, technical systems, and cooperative work: Beyond the Great Divide (pp. 131-157).

Dourish, P. (2006, April). Implications for design. In Proceedings of the SIGCHI conference on Human Factors in computing systems (pp. 541-550).

Duguid, P. (2012). On Rereading Suchman and Situated Action. Le Libellio d’AEGIS, 8(2), 3-11.

Dreyfus, H. (1976). What computers can’t do.

Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology (Vol. 626). Indiana University Press.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge University Press.

Winograd, T. & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Intellect Books.

A brief revisit of the Habermas/Luhmann debate

I’ve gotten into some arguments with friends recently about the philosophy of science. I’m also finding myself working these days, yet again, on a disciplinary problem. By which I mean: the primary difficulty of the research questions I’m asking at the moment is that there is no discipline that, in its primary self-understanding, asks those questions.

This and the coronavirus emergency have got me thinking, “Whatever happened to the Habermas/Luhmann debate?” It is a good time to consider this problem because it’s one that’s likely to minimize my interactions with other people, at a time when that is one’s civic duty.

I refer to Rasch (1991) for an account of it. Here is a good paragraph summarizing some of the substance of the debate.

It is perhaps in this way that Luhmann can best be distinguished from Habermas. The whole movement of Habermas’s thought tends to some final resting place, prescriptively in the form of consensus as the legitimate basis for social order, and methodologically in the form of a normative underlying simple structure which is said to dictate the proper shape of surface complexity. But for Luhmann, complexity does not register the limits of human knowledge as if those limits could be overcome or compensated for by the reconstruction of some universal rule-making process. Rather, complexity, defined as the paradoxical task of solving a solved problem that cannot be solved, or only provisionally solved, or only solved by creating new problems, is the necessary ingredient for human intellectual endeavors. Complexity always remains complex and serves as a self-replenishing reservoir of possibilities (1981, 203-4). Simply put, complexity is limited understanding. It is the missing information which makes it impossible to comprehend a system fully (1985, 50-51; 1990, 81), but the absence of that information is absolutely unavoidable and paradoxically essential for the further evolution of complexity.

Rasch, 1991

In other words, Habermas believes that it’s possible, in principle, to reach a consensus around social order that is self-legitimizing and has at its core a simple, even empty, observer’s stance. This is accomplished through rational communicative action. Luhmann, on the other hand, sees a fun-house of perspectivally warped mirrors, with no such fixed point or epistemological attractor state.

But there’s another side to this debate which is not discussed so much in the same context. Habermas, by positing a communicative rationality capable of legitimization, is able to identify the obstacles to it: the “steering media”, money and power (Habermas, 1987). Luhmann, by contrast, understands a “social system” to be constituted by the communication within it. A social system is defined as the sum total of its speech, writing, and so on.

This has political implications. Rasch concludes:

With that in mind, one final paradox needs to be mentioned. Although Habermas is the self-identified leftist and social critic, and although Habermas sees in Luhmann and in systems theory a form of functionalist conservatism, it may very well be to Luhmann that future radical theorists will have to turn. Social and political theorists who are socially and politically committed need not continue to take theoretical concern with complexity as a sign of apathy, resignation, or conformism. As Harlan Wilson notes, the “invocation of ‘complexity’ for the purpose of devaluing general political and social theory and of creating suspicion of all varieties of general political theory in contemporary political studies is to be resisted.” It is true that the increased consciousness of complexity brings along with it the realization that “total comprehension” and “absence of distortion” are unattainable, but, Wilson continues, “when that has been admitted, it remains that only general theoretical reflection, together with a sense of history, enables us to think through the meaning of our complex social world in a systematic way” (1975, 331). The only caveat is that such “thinking through” will have to be done on the level of complexity itself and will have to recognize that theories of social complexity are part of the social complexity they investigate. It is in this way that the ability to respond to social complexity in a complex manner will continue to evolve along with the social complexity that theory tries to understand.

Rasch, 1991

One reason that Habermas is able to make a left-wing critique, whereas Luhmann can correctly be accused of being a functionalist conservative, is that Habermas’s normative stance has an irrational materialist order (perhaps what is “right wing” today) as its counterpoint. Luhmann, by contrast, in asserting that social systems exist only as functional stability, does not seem to have money, power, or ultimately the violence they depend on in his ontology. His is a conservative view not because his theory lacks normativity, but because his descriptive stance is, at the end of the day, incomplete. Luhmann has no way of reckoning with the ways infrastructural power (Mann, 2008) exerts a passive external force on social systems. In other words, social systems evolve, but in an environment created by the material consequences of prior social systems, which reveal themselves as distributions of capital. This is what it means to be in the Anthropocene.

During an infrastructural crisis, such as a global pandemic in which the violence of nature threatens objectified human labor and the material supply chains that depend on it, society, which in times of “peace” is often happy to defer to “cultural” experts whose responsibility is the maintenance of ideology, defers to different experts: the epidemiologists, the operations research experts, the financial analysts. These are the occupational “social scientists” who have no need of the defensiveness of the historian, the sociologist, the anthropologist, or the political scientist. They are deployed, sometimes in the public interest, to act on their operationally valid scientific consensus. And precisely because the systems that concern them are invisible to the naked eye (microbes, social structure, probabilities), the uncompromising, atheoretical empiricism that has come to be the proud last stand of the social sciences cannot suffice. Here, theory–an accomplishment of rationality, its response to materialist power–must shine.

The question, as always, is not whether there can be progress based on a rational simplification, but to what extent an economy supports the institutions that create and sustain such a perspective, expertise, and enterprise.

References

Habermas, Jürgen. “The theory of communicative action, Volume 2: Lifeworld and system.” Polity, Cambridge (1987).

Mann, Michael. “Infrastructural power revisited.” Studies in comparative international development 43.3-4 (2008): 355.

Rasch, William. “Theories of complexity, complexities of theory: Habermas, Luhmann, and the study of social systems.” German Studies Review 14.1 (1991): 65-83.

The diverging philosophical roots of U.S. and E.U. privacy regimes

For those in the privacy scholarship community, there is an awkward truth: European data protection law is going in a different direction from U.S. Federal privacy law. A thorough realpolitical analysis of how the current U.S. regime regarding personal data has been constructed over time to advantage large technology companies can be found in Cohen’s Between Truth and Power (2019). There is, to be sure, a corresponding story to be told about EU data protection law.

Adjacent, somehow, to the operations of political power are the normative arguments leveraged both in the U.S. and in Europe for their respective regimes. Legal scholarship, however remote from actual policy change, remains a form of moral inquiry. It is possible, still, that through the professional training of lawyers and policy-makers, some form of ethical imperative can take root. Democratic interventions into the operations of power, while unlikely, are still in principle possible: but only if education stays true to principle and does not succumb to mere ideology.

This is not easy for educational institutions to accomplish. Higher education certainly is vulnerable to politics. A stark example of this was the purging of Marxist intellectuals from American academic institutions under McCarthyism. Intellectual diversity in the United States has suffered ever since. However, this was only possible because Marxism as a philosophical movement is extraneous to the legal structure of the United States. It was never embedded at a legal level in U.S. institutions.

There is a simple historical reason for this. The U.S. legal system was founded under a different set of philosophical principles, and that philosophical lineage still impacts us today. The Founding Fathers were primarily influenced by John Locke. Locke rose to prominence in Britain when the Whigs, a new bourgeois class of Parliamentarian merchant leaders, rose to power, contesting the earlier monarchy. Locke’s political contributions were a first treatise pointing out the absurdity of the Divine Right of Kings, the prevailing political ideology of the time, and a second treatise arguing for a natural right to property based on the appropriation of nature. This latter political philosophy was very well aligned with Britain’s new national project of colonialist expansion. With the founding of the United States, it was enshrined in the Constitution. The liberal system of rights that we enjoy in the U.S. is founded in the Lockean tradition.

Intellectual progress in Europe did not halt with Locke. Locke’s ideas were taken up by David Hume, who introduced arguments so agitating that they famously woke Immanuel Kant, in Germany, from his “dogmatic slumber”, leading him to develop a new, highly systematic account of morality and epistemology. Among the innovations in this work was the idea that human freedom is grounded in the dignity of being an autonomous person. The source of dignity is not based in a natural process such as the tilling of land. It is based, rather, on transcendental facts about what it means to be human. The key to morality is treating people as ends, not means; in other words, not using people as tools toward other aims, but as aims in themselves.

If this sounds overly lofty to an American audience, it’s because this philosophical tradition has never taken hold in American education. In both the United States and the United Kingdom, Kantian philosophy has always been outside the mainstream. The tradition of Locke, through Hume, has continued on in what philosophers call “analytic philosophy”. This philosophy has taken on the empiricist view that the only source of knowledge is individual experience. It has transformed over the centuries but continues to orbit around the individual and their rights, grounded in pragmatic considerations, and learning normative rules through the case-by-case approach of Common Law.

From Kant, a different “continental philosophy” tradition produced Hegel, who produced Marx. We can trace a line from Kant’s original arguments about how morality is based on the transcendental dignity of the individual to the moralistic critique that Marx made of capitalism. Capitalism, Marx argued, impugns the dignity of labor because it treats it as a means, not an end. No such argument could take root in a Lockean system, because Lockean ethics has no prescription against treating others instrumentally.

Germany lost its way at the start of the 20th century. But the post-war regime, funded by the Marshall Plan and directed by U.S. constitutional scholars as well as repatriating German intellectuals, had the opportunity to rewrite Germany’s system of governance. It did so along Kantian lines: with statutory law, reflecting a priori rational inquiry, instead of empiricist Common Law. It was able to enshrine in the new system the Kantian basis of ethics, with its focus on autonomy.

Many of the intellectuals influencing the creation of the new German state were “Marxist” in the loose sense that they were educated in the German continental intellectual tradition, which at that time included Marx as one of its key figures. By the mid-20th century they had naturally surpassed this ideological view. As a consequence, however, the McCarthyist attack on Marxism had the effect of also purging some of the philosophical connection between German and U.S. legal education. Kantian notions of autonomy are still quite foreign to American jurisprudence. Legal arguments in the United States draw instead on a vast collection of other tools based on a much older and more piecemeal way of establishing rights. But are any of these tools up to the task of protecting human dignity?

The EU is very much influenced by Germany and the German legal system, and it has the Kantian autonomy ethic at the heart of its conception of human rights. This philosophical commitment has recently expressed itself in the EU’s assertion of data protection law through the GDPR, whose transnational enforcement clauses have brought this centuries-old philosophical fight into contemporary debate in jurisdictions that predate the neo-Kantian legal innovations of the Continental states.

The puzzle facing American legal scholars is this: while industry advocates and representatives tend to object to the strength of the GDPR, arguing that it is unworkable and/or based on poorly defined principles, the data protections that it offers seem so far to be compelling to users, and the shifting expectations around privacy it has in part induced are having effects on democratic outcomes (such as the CCPA). American legal scholars now have to try to make sense of the GDPR’s rules and find a normative basis for them. How can these expansive ideas of data protection, which some have had the audacity to argue constitute a new right (Hildebrandt, 2015), be grafted onto the Common Law, empiricist legal system in a way that gives them the legitimacy of being an authentically American project? Is there a way to explain data protection law that does not require the transcendental philosophical apparatus which, if adopted, would force the American mind to reconsider in a fundamental way the relationship between individuals and the collective, labor and capital, and other cornerstones of American ideology?

There may or may not be. Time will tell. My own view is that the corporate powers, which flourished under the Lockean judicial system because of the weaknesses in that philosophical model of the individual and her rights, will instinctively fight what is in fact a threatening conception of the person as autonomous by virtue of their transcendental similarity with other people. American corporate power will not bother to make a philosophical case at all; it will operate in the domain of realpolitik so well documented by Cohen. Even if this is so, it is notable that so much intellectual and economic energy is now being exerted in the friction around so powerful an idea.

References

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, USA.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.