Digifesto

Tag: double contingency

Hildebrandt (2013) on double contingency in Parsons and Luhmann

I’ve tried to piece together double contingency before, and am finding myself re-encountering these ideas in several projects. I just now happened on this very succinct account of double contingency in Hildebrandt (2013), which I wanted to reproduce here.

Parsons was less interested in personal identity than in the construction of social institutions as proxies for the coordination of human interaction. His point is that the uncertainty that is inherent in the double contingency requires the emergence of social structures that develop a certain autonomy and provide a more stable object for the coordination of human interaction. The circularity that comes with the double contingency is thus resolved in the consensus that is consolidated in sociological institutions that are typical for a particular culture. Consensus on the norms and values that regulate human interaction is Parsons’s solution to the problem of double contingency, and thus explains the existence of social institutions. As could be expected, Parsons’s focus on consensus and his urge to resolve the contingency have been criticized for its ‘past-oriented, objectivist and reified concept of culture’, and for its implicitly negative understanding of the double contingency.

This paragraph says a lot: about “the problem” posed by “the double contingency”, about the possibility of a solution through consensus around norms and values, and about the rejection of Parsons. It is striking that in the first pages of this article, Hildebrandt begins by challenging “contextual integrity” as a paradigm for privacy (a nod, if not a direct reference, to Nissenbaum (2009)), astutely pointing out that this paradigm makes privacy a matter of delinking data so that it is not reused across contexts. Nissenbaum’s contextual integrity theory depends rather critically on consensus around norms and values; the appropriateness of information norms is a feature of sociological institutions accountable ultimately to shared values. The aim of Parsons, and to some extent also Nissenbaum, is to remove the contingency by establishing reliable institutions.

The criticism of Parsons as being ‘past-oriented, objectivist and reified’ is striking. It raises the question of whether Parsons’s concept of culture is too past-oriented, or whether some cultures, more than others, may be more past-oriented, rigid, or reified. Consider a continuum of sociological institutions ranging from the rigid, formal, bureaucratized, and traditional to the flexible, casual, improvisational, and innovative. One extreme of this continuum is better conceptualized as “past-oriented” than the other. Furthermore, when cultural evolution becomes embedded in infrastructure, that culture no doubt becomes more “reified”, not just conceptually but actually, via its transformation into durable and material form. That Hildebrandt offers this criticism of Parsons perhaps foreshadows her later work on the problems of smart information communication infrastructure (Hildebrandt, 2015). Smart infrastructure poses a problem, for those with this orientation, in that it reduces double contingency by being, in fact, a reification of sociological institutions.

“Reification” is a pejorative word in sociology. It refers to a kind of ideological category error with unfortunate social consequences. The more positive view of this kind of durable, even material, culture would be found in Habermas, who would locate legitimacy precisely in the process of consensus. For Habermas, the ideal of legitimate consensus through discursively rational communicative action finds its imperfect realization in the sociological institution of deliberative democratic law. This is the intellectual inheritor of Kant’s ideal of “perpetual peace”. It is, like the European Union, supposed to be a good thing.

So what about Brexit, so to speak?

Double contingency returns with a vengeance in Luhmann, who famously “debated” Habermas (a truer follower of Parsons), and probably won that debate. Hildebrandt (2013) writes:

A more productive understanding of double contingency may come from Luhmann (1995), who takes a broader view of contingency; instead of merely defining it in terms of dependency he points to the different options open to subjects who can never be sure how their actions will be interpreted. The uncertainty presents not merely a problem but also a chance; not merely a constraint but also a measure of freedom. The freedom to act meaningfully is constraint [sic] by earlier interactions, because they indicate how one’s actions have been interpreted in the past and thus may be interpreted in the future. Earlier interactions weave into Luhmann’s (1995) emergent social systems, gaining a measure of autonomy — or resistance — with regard to individual participants. Ultimately, however, social systems are still rooted in double contingency of face-to-face communication. The constraints presented by earlier interactions and their uptake in a social system can be rejected and renegotiated in the process of anticipation. By figuring out how one’s actions are mapped by the other, or by social systems in which one participates, room is created to falsify expectations and to disrupt anticipations. This will not necessarily breed anomy, chaos or anarchy, but may instead provide spaces for contestation, self-definition in defiance of labels provided by the expectations of others, and the beginnings of novel or transformed social institutions. As such, the uncertainty inherent in the double contingency defines human autonomy and human identity as relational and even ephemeral, always requiring vigilance and creative invention in the face of unexpected or unreasonably constraining expectations.

Whereas Nissenbaum’s theory of privacy is “admittedly conservative”, Hildebrandt’s is grounded in a defense of freedom, invention, and transformation. If either Nissenbaum or Hildebrandt were more inclined to contest the other directly, this could be privacy scholarship’s equivalent of the Habermas/Luhmann debate. However, this is unlikely to occur because the two scholars operate in different legal systems, reducing the stakes of the debate.

We must assume that Hildebrandt, in 2013, would have approved of Brexit, the ultimate defiance of labels and expectations against a Habermasian bureaucratic consensus. Perhaps she also, as would be consistent with this view, has misgivings about the extraterritorial enforcement of the GDPR. Or maybe she would prefer a global bureaucratic consensus that agreed with Luhmann; but this is a contradiction. This psychologistic speculation is no doubt unproductive.

What is more productive is the pursuit of a synthesis between these poles. As a liberal society, we would like our allocation of autonomy; we often find ourselves in tension with the bureaucratic systems that, according to rough consensus and running code, are designed to deliver to us our measure of autonomy. Those who overstep their allocation of autonomy, such as those who participated in the most recent Capitol insurrection, are put in prison. Freedom coexists with law and even order in sometimes uncomfortable ways. There are contests; they are often ugly at the time, however much they are glorified retrospectively by their winners as a form of past-oriented validation of the status quo.

References

Hildebrandt, M. (2013). Profile transparency by design?: Re-enabling double contingency. Privacy, due process and the computational turn: The philosophy of law meets the philosophy of technology, 221-46.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

double contingency and technology

One of the best ideas to come out of the social sciences is “double contingency”: the fact that two people engaged in communication are in a sense unpredictable to each other. That mutual unpredictability is an element of what it means to be in communication with another.

The most recent articulation of this idea is from Luhmann, who was interested in society as a system of communication. Luhmann is not focused on the phenomenology of the participants in a social system; in a sense, he looks at social systems the way an analyst might look at communications data from a social media site. The social system is the set of messages. Luhmann is an interesting figure in intellectual history in part because he is the one who made the work of Maturana and Varela officially part of the German philosophical canon. That’s a big deal, as Maturana and Varela’s intellectual contributions (around the idea of autopoiesis, for example) were tremendously original, powerful, and good.

“Double contingency” was also discussed, one reads, by Talcott Parsons. This does not come up often because at some point the discipline of Sociology just decided to bury Parsons.

Double contingency comes up in interesting ways in European legal scholarship about technology. Luhmann, a dense German writer, is not read much in the United States, despite his being essentially right about things. Hildebrandt (2019) uses double contingency in her perhaps perplexingly framed argument for the “incomputability” of human personhood. Teubner (2006) makes a somewhat different but related argument about agency, double contingency, and electronic agents.

Hildebrandt and Teubner make for an interesting contrast. Hildebrandt is interested in the sanctity of humanity qua humanity, and in particular of privacy defined as the freedom to be unpredictable. This is an interesting inversion for European phenomenological philosophy. Recall that originally in European phenomenology human dignity was tied to autonomy, but autonomy depended on universalized rationality, with the implication that the most important thing about human dignity was that one followed universal moral rules (Kant). Hildebrandt is almost staking out an opposite position: that Arendtian natality, the unpredictability of being an original being at birth, is the source of one’s dignity. Paradoxically, Hildebrandt argues that because humanity has this natality essentially, claims that predictive technology might truly know the data subject are hubris; but also that the use of these predictive technologies is a threat to natality unless it is limited by data protection laws that ensure the contestability of automated decisions.

Teubner (2006) takes a somewhat broader and, in my view, more self-consistent position. Grounding his argument firmly in Luhmann and Latour, Teubner is interested in the grounds of legally recognized (as opposed to ontologically, philosophically sanctified) personhood. And, he finds, the conditions of personhood can apply to many things besides humans! “Black box, double contingency, and addressability”, three fictions on which the idea of personhood depends, can apply to corporations and electronic agents as well as to individual humans. This provides a kind of consistency and rationale for why we allow these kinds of entities to engage in legal contracts with each other. The contract, it is theorized, is a way of managing uncertainty, reducing the amount of contingency in a relationship that is inherently laden with double contingency.

Something of the old Kantian position comes through in Teubner, in that contracts and the law are regulatory. However, Teubner, like Nissenbaum, is ultimately a pluralist. Teubner writes about multiple “ecologies” in which the subject is engaged, and to which they are accountable in different modalities. So the person, qua economic agent, is addressed in terms of their preferences. But the person, qua subject of legal institutions, is addressed in terms of their embodiment of norms. The “whole person” does not appear in any single ecology.

I’m sympathetic with the Teubnerian view here, perhaps in contrast with Hildebrandt’s view, in the following sense: while there may indeed be some intrinsic indeterminacy to an individual, this indeterminacy is meaningless unless it is also situated in (some) social ecology. However, what makes a person contingent vis-à-vis one ecology is precisely that only a fragment of them is available to that ecology. The contingency with respect to the first ecology is a consequence of their simultaneous presence within other ecologies. The person is autonomous, and hence also unpredictable, because of this multiplied, fragmented identity. Teubner, I think correctly, concludes that there is a limited form of personhood for non-human agents, but as these agents will be even more fragmented than humans, they are only persons in an attenuated sense.

I’d argue that Teubner helpfully backfills how personhood is socially constructed and accomplished, as opposed to guaranteed from birth, in a way that complements Hildebrandt nicely. In the 2019 article cited here, Hildebrandt argues for contestability of automated decisions as a means of preserving privacy. Teubner’s theory suggests that personhood (as participant in double contingency, as a black box) is threatened rather by context collapse, or the subverting of the various distinct social ecologies into a single platform in which data is shared ubiquitously between services. This provides a normative and universalist defense of keeping contexts separate (which in a different article Hildebrandt connects to purpose binding in the GDPR), one that is never quite accomplished in, for example, Nissenbaum’s contextual integrity.

References

Hildebrandt, M. (2019). Privacy as protection of the incomputable self: From agnostic to agonistic machine learning. Theoretical Inquiries in Law, 20(1), 83-121.

Teubner, G. (2006). Rights of non-humans? Electronic agents and animals as new actors in politics and law. Journal of Law and Society, 33(4), 497-521.