Digifesto


Luhmann, Social Systems, Ch. 1, §I

Niklas Luhmann (1927-1998) was a German sociologist who aimed to understand society in terms of systems theory.

I am reading Luhmann’s Social Systems (1995) because I have a hunch that this theory is relevant to my research. This post contains notes about Chapter 1, sections I-II.

Often, scientists need to sacrifice intelligibility for accuracy. Luhmann is a scientist. He is unapologetic about this. He opens his “Instead of a Preface to the English Edition” (actual title) with:

“This is not an easy book. It does not accommodate those who prefer a quick and easy read, yet do not want to die without a taste of systems theory. This holds for the German text, too. If one seriously undertakes to work out a comprehensive theory of the social and strives for sufficient conceptual precision, abstraction and complexity in the conceptual infrastructure are unavoidable.”

Why bother reading such a difficult book? Why be a scientist and study social systems?

One reason to study society scientifically is to design and build better smart digital infrastructure.

Most people designing and building smart digital infrastructure today are not studying Luhmann. They are studying computer science. That makes sense: computer science is a science of smart digital artifacts. What has become increasingly apparent in recent years is that smart digital infrastructure is having an impact on society, and that the infrastructure is often mismatched to its social context. These mismatches are often considered to be a problem. Hence, a science of society might inform better technical designs.

§I

Chapter 1 opens with:

The following considerations assume that there are systems. Thus they do not begin with epistemological doubt. They also do not advocate a “purely analytical relevance” for systems theory. The most narrow interpretation of systems theory as a mere method of analyzing reality is deliberately avoided. Of course, one must never confuse statements with their objects; one must realize that statements are only statements and that scientific statements are only scientific statements. But, at least in systems theory, they refer to the real world. Thus the concept of system refers to something that is in reality a system and thereby incurs the responsibility of testing its statements against reality.

This is a great opening. It is highly uncommon for work in the social sciences to begin this way. Today, social science is almost always taught in a theoretically pluralistic way. The student is taught several different theories of the same phenomenon. As they specialize into a social scientific discipline, they are taught to reproduce that discipline by citing its canonical thinkers and applying its analytical tools to whatever new phenomenon presents itself.

Not so with Luhmann. Luhmann is trying to start from a general scientific theory — systems theory — that in principle applies to physical, biological, and other systems, and to apply it to social systems. He cites Talcott Parsons, but also Herbert Simon, Ludwig von Bertalanffy, and Humberto Maturana. Luhmann is not interested in reproducing a social scientific field; he is interested in reproducing the scientific field of systems theory in the domain of social science.

So the book is going to:

  • Be about systems theory in general
  • Address how social systems are a kind of system
  • Address how social systems relate to other kinds of system

There is a major challenge to studying this book in 2021: “systems theory” is not a mainstream scientific field today, and the people who do talk about “systems” normally do so in the context of “systems engineering”, for example to study and design industrial processes. They have their own quantitative discipline and methodologies that have little to do with sociology. Computer scientists, meanwhile, will talk about software systems and information systems, but normally in a way that has nothing to do with “systems theory” or systems engineering in a mechanical sense. Hazarding a guess, I would say that this has something to do with the cybernetics/AI split in the second half of the 20th century.

There is now a great deal of convergence in mathematical notation and concepts between different STEM fields, in part because much of the computational tooling has become ubiquitous. Computational social science has made great strides in recent years as a result. But many computational social science studies apply machine learning techniques to data generated by a social process, despite the fact that nobody believes the model spaces used in machine learning contain a veridical model of society.

This has led to many of the ethical and social problems with “AI”. To take one brief example, it is well known that estimating fitness for employment or parole via regression on personal information is, even when sensitive categories are excluded, likely to reproduce societal biases extant in the data through proxy variables in the feature set. A more subtle causal analysis can perhaps do better, but the way causality works at a societal level is not straightforward; see Lily Hu’s discussion of this topic for some deeper analysis. Understanding the possible causal structures of society, including the possibility of “bottom-up” emergent effects and “downward causation” effects from social structures, would potentially improve the process of infrastructure design, whether manual or automated (via machine learning).
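The proxy-variable mechanism can be made concrete with a toy simulation (entirely my own illustration; the variables and numbers are synthetic, not drawn from any real dataset or study). A sensitive attribute is excluded from the regression, but a correlated proxy remains in the feature set, and the fitted model reproduces the bias encoded in the historical labels:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic setup: `a` is a sensitive attribute excluded from the model;
# `z` is an innocuous-looking feature correlated with it (a proxy).
a = rng.integers(0, 2, n)                    # group membership (0 or 1)
z = a + rng.normal(0, 0.5, n)                # proxy leaks information about `a`
skill = rng.normal(0, 1, n)                  # true fitness, independent of `a`
y = skill - 0.8 * a + rng.normal(0, 0.1, n)  # historically biased labels

# Ordinary least squares on [1, z, skill] only -- `a` is "excluded".
X = np.column_stack([np.ones(n), z, skill])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta

# The model still scores group 1 lower: `z` carries `a`, and the labels
# encode the historical bias, so exclusion alone does not remove it.
gap = pred[a == 0].mean() - pred[a == 1].mean()
print(f"mean score gap between groups: {gap:.2f}")  # positive gap
```

The point of the sketch is only that removing the sensitive column is not sufficient: the regression assigns the proxy a negative coefficient precisely because it predicts the biased component of the labels.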

With this motive in mind, we will continue to slowly analyze and distill Luhmann in search of relevant insights.

For Luhmann, “systems theory … claims universal validity for everything that is a system.” Implicitly, systems theory has perfect internal validity. Luhmann originally expressed this theory in German. But it really feels like there should be a mathematization of this work. He does not cite one yet, but the spoiler is that he’s eventually going to use George Spencer-Brown’s Laws of Form. For reasons I may get into later if I continue with this project, I believe that’s an unfortunate choice. I may have to find a different way to do the mathematization.

Rather than mathematize, Luhmann follows through on his commitment to the existence of real systems by inferring some necessary consequences of that first principle. He is not content with a mathematical representation: systems theory must have “a real reference to the world”; “it is forced to treat itself as one of its objects in order to compare itself with others among those objects”. The crux is that systems theory, being a system itself, has to be able to take itself into account from the start. Hence, the commitment to real systems entails the realness of self-referential systems. “This means … there are systems that have the ability to establish relations with themselves and to differentiate these relations from relations with their environment.”

We are still in §I, which is itself a sort of preamble situating systems theory as a scientific theory, but already Luhmann is exposing the substance of the theory; in doing so, he demonstrates how truly self-referential — and consistently so — systems theory is. As he’ll say more definitively later, one essential feature of a system is that it is different from its environment. A system has, in effect, an “inside” and an “outside”. Outside the system is the environment. The part of the system that separates the inside of the system from its environment is the boundary. This binary aspect of the system (the system, and the not-the-system, i.e. the environment) clarifies the logic of ‘self-reference’. Self-referential systems differentiate between themselves and not-themselves.
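To fix ideas, here is a toy sketch in code (entirely my own illustration, not Luhmann’s formalism or anyone else’s): a system is identified with the set of operations it counts as its own, the boundary is the membership test distinguishing system from environment, and self-reference is the system applying that test to itself.

```python
class System:
    """Toy model: a system is whatever falls inside its own distinction."""

    def __init__(self, elements):
        self.elements = set(elements)  # the "inside" of the system

    def is_inside(self, x):
        # The boundary: the distinction between system and environment.
        return x in self.elements

    def observe_self(self):
        # Self-reference: the system draws its own distinction on itself,
        # differentiating self-relations from relations to the environment.
        self.elements.add(self)
        return self.is_inside(self)


s = System(["communication_1", "communication_2"])
assert s.is_inside("communication_1")  # inside the system
assert not s.is_inside("weather")      # environment: not-the-system
assert s.observe_self()                # the system takes itself into account
```

The sketch is crude, but it captures the two-sidedness Luhmann insists on: every query against the boundary sorts the world into system and not-system, and a self-referential system can run that query on itself.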

So far, you have perhaps noted that Luhmann is a terribly literal writer. It is no surprise that the focus of his book, Social Systems, is that subset of systems that are “social”. What are these systems like? What makes them different from organisms (also systems), or systems of machines? Luhmann eschews metaphor — a bold choice. “[W]e do not choose the shortcut of analogy, but rather the longer path of generalization and respecification.” We don’t want to be misled by analogies.

“Above all, we will have to emphasize the nonpsychic character of social systems.”

That’s something Luhmann says right after saying he doesn’t want to use metaphors when talking about social systems. What can this possibly mean? It means, among other things, that Luhmann is not interested in anybody’s subjective experience of a society as an account of what a social system is. A “psychic system”, like my lived experience, or yours, is not the same thing as the social system — though, as we will later read, psychic systems are “structurally coupled” with the social system in important ways. Rather, the social system is constituted, objectively, by the communications between people. This makes it a more ready object of science.

It is striking to me that Luhmann is not more popular among analysts of social media data, because at least superficially he seems to be arguing, in effect, that the social system of Twitter is not the system of Twitter’s users. Rather, it’s the system of the tweets. That’s one way of looking at things, for sure. Somewhat abashedly, I will say that Luhmann is an interesting lens through which to view Weird Twitter, which you may recall as a joke-telling subculture of Twitter that was popular before Former President Trump made Twitter much, much weirder. I think there are some interesting comparisons to be drawn between Anthony Cohen’s theory of the symbolic construction of community, complete with symbolic boundary, and Luhmann’s notion of the boundary of a social system. But I digress.

Luhmann hasn’t actually used the word “communication” yet. He instead says “social contact”. “Every social contact is understood as a system, up to and including society as the inclusion of all possible contacts.” Possible contacts. Meaning that the system is defined in part by its unrealized but potential states. It can be stochastic; it can change its internal states to adapt to the external environment. “In other words, the general theory of social systems claims to encompass all sociology’s potential topics and, in this sense, to be a universal sociological theory.” Universal sociological theories are terribly unpopular these days. But Luhmann attempted it. Did he succeed?

“Yet, a claim to universality is not a claim to exclusive correctness, to the exclusive validity, and thus necessity (noncontingency), of one’s own account.” Nobody claiming to have a universal theory does this. Indeed, a theory learns about its own contingency through self-reference. So, social systems theory discovers its European origins, for example, as soon as it considers itself. What then? At that point, one “distinguish[es] between claims of universality and claims to exclusivity”, which makes utter sense, or “by recognizing that structural contingencies must be employed as an operative necessity, with the consequence that there is a constant contingency absorption through the successes, practices, and commitments in the scientific system.”

Contingency absorption is a nice idea. It is perhaps associated with the idea of abstraction: as one accumulates contingent experiences and abstracts from them, one discovers necessary generalities which are true for all contingent experiences. This has been the core German philosophical method for centuries, and it is quite powerful. We seem to have completely forgotten it in the American academic system. That is why the computer scientists have taken over everything. They have a better universalizing science than the sociologists do. Precisely for that reason, we are seeing computational systems in constant and irksome friction with society. American sociologists need to stop insisting on theoretical pluralism and start developing a universal sociology that is competitive, in terms of its universality, with computer science, or else we will never get smart infrastructure and AI ethics right.

References

Luhmann, N. (1995). Social systems. Stanford University Press.

Hildebrandt (2013) on double contingency in Parsons and Luhmann

I’ve tried to piece together double contingency before, and am finding myself re-encountering these ideas in several projects. I just now happened on this very succinct account of double contingency in Hildebrandt (2013), which I wanted to reproduce here.

Parsons was less interested in personal identity than in the construction of social institutions as proxies for the coordination of human interaction. His point is that the uncertainty that is inherent in the double contingency requires the emergence of social structures that develop a certain autonomy and provide a more stable object for the coordination of human interaction. The circularity that comes with the double contingency is thus resolved in the consensus that is consolidated in sociological institutions that are typical for a particular culture. Consensus on the norms and values that regulate human interaction is Parsons’s solution to the problem of double contingency, and thus explains the existence of social institutions. As could be expected, Parsons’s focus on consensus and his urge to resolve the contingency have been criticized for its ‘past-oriented, objectivist and reified concept of culture’, and for its implicitly negative understanding of the double contingency.

This paragraph says a lot about “the problem” posed by “the double contingency”, about the possibility of a solution through consensus around norms and values, and about the rejection of Parsons. It is striking that in the first pages of this article, Hildebrandt begins by challenging “contextual integrity” as a paradigm for privacy (a nod, if not a direct reference, to Nissenbaum (2009)), astutely pointing out that this paradigm makes privacy a matter of delinking data so that it is not reused across contexts. Nissenbaum’s contextual integrity theory depends rather critically on consensus around norms and values; the appropriateness of information norms is a feature of sociological institutions accountable ultimately to shared values. The aim of Parsons, and to some extent also Nissenbaum, is to remove the contingency by establishing reliable institutions.

The criticism of Parsons as being ‘past-oriented, objectivist and reified’ is striking. It opens the question of whether Parsons’s concept of culture is too past-oriented, or whether some cultures, more than others, may be more past-oriented, rigid, or reified. Consider a continuum of sociological institutions ranging from the rigid, formal, bureaucratized, and traditional to the flexible, casual, improvisational, and innovative. One extreme of this continuum is better conceptualized as “past-oriented” than the other. Furthermore, when cultural evolution becomes embedded in infrastructure, no doubt that culture is more “reified” not just conceptually, but actually, via its transformation into durable and material form. That Hildebrandt offers this criticism of Parsons perhaps foreshadows her later work about the problems of smart information communication infrastructure (Hildebrandt, 2015). Smart infrastructure poses, to those with this orientation, a problem in that it reduces double contingency by being, in fact, a reification of sociological institutions.

“Reification” is a pejorative word in sociology. It refers to a kind of ideological category error with unfortunate social consequences. The more positive view of this kind of durable, even material, culture would be found in Habermas, who would locate legitimacy precisely in the process of consensus. For Habermas, the ideals of legitimate consensus through discursively rational communicative action find their imperfect realization in the sociological institution of deliberative democratic law. This is the intellectual inheritor of Kant’s ideal of “perpetual peace”. It is, like the European Union, supposed to be a good thing.

So what about Brexit, so to speak?

Double contingency returns with a vengeance in Luhmann, who famously “debated” Habermas (a more true follower of Parsons), and probably won that debate. Hildebrandt (2013) discusses:

A more productive understanding of double contingency may come from Luhmann (1995), who takes a broader view of contingency; instead of merely defining it in terms of dependency he points to the different options open to subjects who can never be sure how their actions will be interpreted. The uncertainty presents not merely a problem but also a chance; not merely a constraint but also a measure of freedom. The freedom to act meaningfully is constraint [sic] by earlier interactions, because they indicate how one’s actions have been interpreted in the past and thus may be interpreted in the future. Earlier interactions weave into Luhmann’s (1995) emergent social systems, gaining a measure of autonomy — or resistance — with regard to individual participants. Ultimately, however, social systems are still rooted in double contingency of face-to-face communication. The constraints presented by earlier interactions and their uptake in a social system can be rejected and renegotiated in the process of anticipation. By figuring out how one’s actions are mapped by the other, or by social systems in which one participates, room is created to falsify expectations and to disrupt anticipations. This will not necessarily breed anomy, chaos or anarchy, but may instead provide spaces for contestation, self-definition in defiance of labels provided by the expectations of others, and the beginnings of novel or transformed social institutions. As such, the uncertainty inherent in the double contingency defines human autonomy and human identity as relational and even ephemeral, always requiring vigilance and creative invention in the face of unexpected or unreasonably constraining expectations.

Whereas Nissenbaum’s theory of privacy is admittedly conservative, Hildebrandt’s is grounded in a defense of freedom, invention, and transformation. If either Nissenbaum or Hildebrandt were more inclined to contest the other directly, this might be privacy scholarship’s equivalent of the Habermas/Luhmann debate. However, this is unlikely to occur because the two scholars operate in different legal systems, reducing the stakes of the debate.

We must assume that Hildebrandt, in 2013, would have approved of Brexit, the ultimate defiance of labels and expectations against a Habermasian bureaucratic consensus. Perhaps she also, as would be consistent with this view, has misgivings about the extraterritorial enforcement of the GDPR. Or maybe she would prefer a global bureaucratic consensus that agreed with Luhmann; but this is a contradiction. This psychologistic speculation is no doubt unproductive.

What is more productive is the pursuit of a synthesis between these poles. As a liberal society, we would like our allocation of autonomy; we often find ourselves in tension with the bureaucratic systems that, according to rough consensus and running code, are designed to deliver to us our measure of autonomy. Those who overstep their allocation of autonomy, such as those who participated in the most recent Capitol insurrection, are put in prison. Freedom coexists with law and even order in sometimes uncomfortable ways. There are contests; they are often ugly at the time, however much they are glorified retrospectively by their winners as a form of past-oriented validation of the status quo.

References

Hildebrandt, M. (2013). Profile transparency by design?: Re-enabling double contingency. Privacy, due process and the computational turn: The philosophy of law meets the philosophy of technology, 221-46.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

A brief revisit of the Habermas/Luhmann debate

I’ve gotten into some arguments with friends recently about the philosophy of science. I’m also finding myself working these days, yet again, at a disciplinary problem. By which I mean, the primary difficulty of the research questions I’m asking at the moment is that there is no discipline that in its primary self-understanding asks those questions.

This and the coronavirus emergency have got me thinking, “What ever happened to the Habermas/Luhmann debate?” It is a good time to consider this problem because it’s one that’s likely to minimize my interactions with other people at a time when that is one’s civic duty.

I refer to Rasch (1991) for an account of it. Here is a good paragraph summarizing some of the substance of the debate.

It is perhaps in this way that Luhmann can best be distinguished from Habermas. The whole movement of Habermas’s thought tends to some final resting place, prescriptively in the form of consensus as the legitimate basis for social order, and methodologically in the form of a normative underlying simple structure which is said to dictate the proper shape of surface complexity. But for Luhmann, complexity does not register the limits of human knowledge as if those limits could be overcome or compensated for by the reconstruction of some universal rule-making process. Rather, complexity, defined as the paradoxical task of solving a solved problem that cannot be solved, or only provisionally solved, or only solved by creating new problems, is the necessary ingredient for human intellectual endeavors. Complexity always remains complex and serves as a self-replenishing reservoir of possibilities (1981, 203-4). Simply put, complexity is limited understanding. It is the missing information which makes it impossible to comprehend a system fully (1985, 50-51; 1990, 81), but the absence of that information is absolutely unavoidable and paradoxically essential for the further evolution of complexity.

Rasch, 1991

In other words, Habermas believes that it’s possible, in principle, to reach a consensus around social order that is self-legitimizing and has at its core a simple, even empty, observer’s stance. This is accomplished through rational communicative action. Luhmann, on the other hand, sees the fun-house of perspectivalist warped mirrors and no such fixed point or epistemological attractor state.

But there’s another side to this debate which is not discussed so much in the same context. Habermas, by positing a communicative rationality capable of legitimization, is able to identify the obstacles to it: the “steering media”, money and power (Habermas, 1987). Luhmann, by contrast, understands a “social system” to be constituted by the communication within it. A social system is defined as the sum total of its speech, writing, and so on.

This has political implications. Rasch concludes:

With that in mind, one final paradox needs to be mentioned. Although Habermas is the self-identified leftist and social critic, and although Habermas sees in Luhmann and in systems theory a form of functionalist conservatism, it may very well be to Luhmann that future radical theorists will have to turn. Social and political theorists who are socially and politically committed need not continue to take theoretical concern with complexity as a sign of apathy, resignation, or conformism. As Harlan Wilson notes, the “invocation of ‘complexity’ for the purpose of devaluing general political and social theory and of creating suspicion of all varieties of general political theory in contemporary political studies is to be resisted.” It is true that the increased consciousness of complexity brings along with it the realization that “total comprehension” and “absence of distortion” are unattainable, but, Wilson continues, “when that has been admitted, it remains that only general theoretical reflection, together with a sense of history, enables us to think through the meaning of our complex social world in a systematic way” (1975, 331). The only caveat is that such “thinking through” will have to be done on the level of complexity itself and will have to recognize that theories of social complexity are part of the social complexity they investigate. It is in this way that the ability to respond to social complexity in a complex manner will continue to evolve along with the social complexity that theory tries to understand.

Rasch, 1991

One reason that Habermas is able to make a left-wing critique, whereas Luhmann can correctly be accused of being a functionalist conservative, is that Habermas’s normative stance has an irrational materialist order (perhaps what is “right wing” today) as its counterpoint. Luhmann, by contrast, in asserting that social systems exist only as functional stability, does not seem to have money, power, or ultimately the violence they depend on in his ontology. It is a conservative view not because his theory lacks normativity, but because his descriptive stance is, at the end of the day, incomplete. Luhmann has no way of reckoning with the ways infrastructural power (Mann, 2008) exerts a passive external force on social systems. In other words, social systems evolve, but in an environment created by the material consequences of prior social systems, which reveal themselves as distributions of capital. This is what it means to be in the Anthropocene.

During an infrastructural crisis, such as a global pandemic in which the violence of nature threatens objectified human labor and the material supply chains that depend on it, society, which in times of “peace” is often happy to defer to “cultural” experts whose responsibility is the maintenance of ideology, defers to different experts: the epidemiologists, the operations research experts, the financial analysts. These are the occupational “social scientists” who have no need of the defensiveness of the historian, the sociologist, the anthropologist, or the political scientist. They are deployed, sometimes in the public interest, to act on their operationally valid scientific consensus. And precisely because the systems that concern them are invisible to the naked eye (microbes, social structure, probabilities), the uncompromising, atheoretical empiricism that has come to be the proud last stand of the social sciences cannot suffice. Here, theory–an accomplishment of rationality, its response to materialist power–must shine.

The question, as always, is not whether there can be progress based on a rational simplification, but to what extent an economy supports the institutions that create and sustain such a perspective, expertise, and enterprise.

References

Habermas, J. (1987). The theory of communicative action, Volume 2: Lifeworld and system. Polity.

Mann, M. (2008). Infrastructural power revisited. Studies in Comparative International Development, 43(3-4), 355.

Rasch, W. (1991). Theories of complexity, complexities of theory: Habermas, Luhmann, and the study of social systems. German Studies Review, 14(1), 65-83.

late modern social epistemology round up; technical vs. hermeneutical correctness

Consider on the one hand what we might call Habermasian transcendental pragmatism, according to which knowledge can be categorized by how it addresses one of several generalized human interests:

  • The interest of power over nature or other beings, being technical knowledge
  • The interest of agreement with others for the sake of collective action, being hermeneutic knowledge
  • The interest of emancipation from present socially imposed conditions, being critical or reflexive knowledge

Consider in contrast what we might call the Luhmann or Foucault model, in which knowledge is created via system autopoiesis. Luhmann talks about autopoiesis in a social system; Foucault talks about knowledge in a system of power in much the same way.

It is difficult to reconcile these views. This may be what was at the heart of the Habermas-Luhmann debate. Can we parse out the problem in any way that helps reconcile these views?

First, let’s consider the Luhmann view. We might ease the tension in it by naming what we’ve called “knowledge” something like “belief”, removing the implication that the belief is true. Because indeed autopoiesis is a powerful enough process that it seems like it would preserve all kinds of myths and errors should they be important to the survival of the system in which they circulate.

This picture of knowledge, which we might call evolutionary or alternately historicist, is certainly a relativist one. At the intersection of institutions within which different partial perspectives are embedded, we are bound to see political contest.

In light of this, Habermas’s categorization of knowledge as what addresses generalized human interests can be seen as a way of identifying knowledge that transcends particular social systems. There is a normative component of this theory–knowledge should be such a thing. But there is also a descriptive component. One predicts, under Habermas’s hypothesis, that the knowledge that survives political contest at the intersection of social systems is that which addresses generalized interests.

Something I have perhaps overlooked in the past is the importance of the fact that there are multiple and sometimes contradictory general interests. One persistent difficulty in the search for truth is the conflict between what is technically correct and what is hermeneutically correct.

If a statement or theory is technically correct, then it can be reliably used by agents to predict and control the world. The objects of this prediction and control can be objects, or they can be other agents.

If a statement or theory is hermeneutically correct, then it is the reliable consensus of agents involved in a project of mutual understanding and respect. Hermeneutically correct beliefs might stress universal freedom and potential, a narrative of shared history, and a normative goal of progress against inequality. Another word for ‘hermeneutic’ might be ‘political’. Politically correct knowledges are those shared beliefs without which the members of a polity would not be able to stand each other.

In everyday discourse we can identify many examples of statements that are technically correct but hermeneutically (or politically) incorrect, and vice versa. I will not enumerate them here. In these cases, the technically correct view is identified as “offensive” because in a sense it is a defection from a voluntary social contract. Hermeneutic correctness binds together a particular social system by capturing what participants must agree upon in order for all to safely participate. For a member of that social system to assert their own agency over others, to identify ways in which others may be predicted and controlled without their consent or choice in the matter, is disrespectful. Persistent disrespect results in the ejection of the offender from the polity. (cf. Pasquale’s distinction between “California engineers and New York quants” and “citizens”.)

A cruel consequence of these dynamics is social stratification based on the accumulation of politically forbidden technical knowledge.

We can tell this story again and again: A society is bound together by hermeneutically stable knowledge–an ideology, perhaps. Somebody ‘smart’ begins experimentation and identifies a technical truth that is hermeneutically incorrect, meaning that if the idea were to spread it would erode the consensus on which the social system depends. Perhaps the new idea degrades others by revealing that something believed to be an act of free will is, in fact, determined by nature. Perhaps the new idea is inaccessible to others because it depends on some rare capacity. In any case, it cannot be willfully consented to by the others.

The social system begins to have an immune reaction. Society has seen this kind of thing before. Historically, this idea has led to abuse, exploitation, infamy. Those with forbidden knowledge should be shunned, distrusted, perhaps punished. Those with disrespectful technical ideas are discouraged from expressing them.

Technical knowledge thereby becomes socially isolated. Seeking out its own, it becomes concentrated. Already shunned by society, the isolated technologists put their knowledge to use. They gain advantage. Revenge is had by the nerds.

formalizing the cultural observer

I’m taking a brief break from Horkheimer because he is so depressing and because I believe the second half of Eclipse of Reason may include new ideas that will take energy to internalize.

In the meantime, I’ve rediscovered Soren Brier’s Cybersemiotics: Why Information Is Not Enough! (2008), which has remained faithfully on my desk for months.

Brier is concerned with the possibility of meaning generally, and attempts to synthesize the positions of Peirce (recall: philosophically disliked by Horkheimer as a pragmatist), Wittgenstein (who was first an advocate of the formalization of reason and language in his Tractatus, then turned dramatically against it in his Philosophical Investigations), second-order cyberneticists like Varela and Maturana, and the social theorist Niklas Luhmann.

Brier does not make any concessions to simplicity. Rather, his approach is to begin with the simplest theories of communication (Shannon) and show where each fails to account for a more complex form of interaction between more completely defined organisms. In this way, he reveals how each simpler form of communication is the core around which a more elaborate form of meaning-making is formed. He finally arrives at a picture of meaning-making that encompasses all of reality, including that which can be scientifically understood, but one that is necessarily incomplete and an open system. Meaning is all-pervading but never all-encompassing.

One element that makes meaning more complex than simple Shannon-esque communication is the role of the observer, who is maintained semiotically through an accomplishment of self-reference through time. This observer is a product of her own contingency. The language she uses is the result of nature, AND history, AND her own lived life. There is a specificity to her words and meanings that radiates outward as she communicates, meanings that interact in cybernetic exchange with the specific meanings of other speakers/observers. Language evolves in an ecology of meaning that can only poorly be reflected back upon the speaker.

What then can be said of the cultural observer, who carefully gathers meanings, distills them, and expresses new ones conclusively? She is a cybernetic captain, steering the world in one way or another, but only the world she perceives and conceives. Perhaps this is Haraway’s cyborg, existing in time and space through a self-referential loop, reinforced by stories told again and again: “I am this, I am this, I am this.” It is by clinging to this identity that the cyborg achieves the partiality glorified by Haraway. It is also this identity that positions her as an antagonist as she must daily fight the forces of entropy that would dissolve her personality.

Built on cybernetic foundations, does anything in principle prevent the formalization and implementation of Brier’s semiotic logic? What would a cultural observer look like that stands betwixt all cultures, looming like a spider on the webs of communication that wrap the earth at inconceivable scale? Without the constraints of partiality that bind a single human observer belonging to one culture, what could such a robot scientist see? What meaning would they make for themselves or intend?

This is not simply an issue of the interpretability of the algorithms used by such a machine. More deeply, it is the problem that these machines do not speak for themselves. They have no self-reference or identity, and so do not participate in meaning-making except instrumentally as infrastructure. This cultural observer that is in the position to observe culture in the making without the limits of human partiality for now only serves to amplify signal or dampen noise. The design is incomplete.

Privacy, trust, context, and legitimate peripheral participation

Privacy is important. For Nissenbaum, what’s essential to privacy is control over context. But what is context?

Using Luhmann’s framework of social systems–ignoring for a moment e.g. Habermas’ criticism and accepting the naturalized, systems theoretic understanding of society–we would have to see a context as a subsystem of the total social system. In so far as the social system is constituted by many acts of communication–let’s visualize this as a network of agents, whose edges are acts of communication–then a context is something preserved by configurations of agents and the way they interact.
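To make that visualization concrete, here is a minimal sketch (my own, not Luhmann’s or Nissenbaum’s) of a social system as a graph whose edges are acts of communication, with a “context” crudely approximated as a connected cluster of communicating agents. All names and data are invented.

```python
from collections import defaultdict

# Invented communication acts: each pair is one act of communication.
communications = [
    ("anne", "betsy"), ("betsy", "carol"),  # one cluster of agents
    ("dmitri", "elena"),                    # a separate cluster
]

def contexts(edges):
    """Return connected components of the communication graph,
    a crude stand-in for 'contexts' as subsystems."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

print(contexts(communications))  # two clusters, i.e. two crude 'contexts'
```

This is obviously too coarse (a context is more than a connected component), but it makes the network framing tangible.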

Some of the forces that shape a social system will be exogenous. A river dividing two cities or, more abstractly, distance. In the digital domain, the barriers of interoperability between one virtual community infrastructure and another.

But others will be endogenous, formed from the social interactions themselves. An example is the gradual deepening of trust between agents based on a history of communication. Perhaps early conversations are formal, stilted. Later, an agent takes a risk, sharing something more personal–more private? It is reciprocated. Slowly, a trust bond, an evinced sharing of interests and mutual investment, becomes the foundation of cooperation. The Prisoner’s Dilemma is solved the old-fashioned way.
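The trust dynamic gestured at above can be illustrated with the textbook iterated Prisoner’s Dilemma: two agents playing tit-for-tat, each simply reciprocating the other’s previous move, settle into stable cooperation. A toy simulation with the standard payoff matrix (this is an illustration of mine, not anything from Luhmann or Carey):

```python
PAYOFF = {  # (my move, their move) -> my payoff, standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(partner_history):
    """Cooperate first, then copy the partner's previous move."""
    return partner_history[-1] if partner_history else "C"

def play(rounds=10):
    a_hist, b_hist = [], []
    a_score = b_score = 0
    for _ in range(rounds):
        a_move = tit_for_tat(b_hist)  # A reacts to B's history
        b_move = tit_for_tat(a_hist)  # and vice versa
        a_score += PAYOFF[(a_move, b_move)]
        b_score += PAYOFF[(b_move, a_move)]
        a_hist.append(a_move)
        b_hist.append(b_move)
    return a_score, b_score

print(play())  # (30, 30): mutual cooperation in every round
```

Reciprocity over repeated interaction, rather than any one-shot calculation, is what sustains the cooperative outcome.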

Following Carey’s logic that communication as mere transmission when sustained over time becomes communication as ritual and the foundation of community, we can look at this slow process of trust formation as one of the ways that a context, in Nissenbaum’s sense, perhaps, forms. If Anne and Betsy have mutually internalized each other’s interests, then information flow between them will by and large support the interests of the pair, and Betsy will have low incentives to reveal private information in a way that would be detrimental to Anne.

Of course this is a huge oversimplification in lots of ways. One way is that it does not take into account the way the same agent may participate in many social roles or contexts. Communication is not a single edge from one agent to another in many circumstances. Perhaps the situation is better represented as a hypergraph. One reason why this whole domain may be so difficult to reason about is the sheer representational complexity of modeling the situation. It may require the kind of mathematical sophistication used by quantum physicists. Why not?
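As a toy illustration of the hypergraph idea (all names invented), each communication act can be represented as a set of participating agents rather than a single edge, so one agent naturally shows up in several settings at once:

```python
# Hypothetical communication acts as hyperedges: each act is a set
# of participants, so an act can involve many agents at once.
hyperedges = [
    {"anne", "betsy"},                     # a private exchange
    {"anne", "carol", "dmitri"},           # a mailing-list thread
    {"betsy", "carol", "elena", "fred"},   # a public forum post
]

def settings(agent, edges):
    """The distinct communication settings an agent participates in."""
    return [e for e in edges if agent in e]

print(len(settings("anne", hyperedges)))  # anne appears in 2 settings
print(len(settings("fred", hyperedges)))  # fred in only 1
```

Even this tiny example shows why a flat graph loses information: the private exchange and the forum post both involve Betsy, but they are very different contexts for information flow.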

Not having that kind of insight into the problem yet, I will continue to sling what the social scientists call ‘theory’. Let’s talk about an existing community of practice, where the practice is a certain kind of communication. A community of scholars. A community of software developers. Weird Twitter. A backchannel mailing list coordinating a political campaign. A church.

According to Lave and Wenger, the way newcomers gradually become members and oldtimers of a community of practice is legitimate peripheral participation. This is consistent with the model described above characterizing the growth of trust through gradually deepening communication. Peripheral participation is low-risk. In an open source context, this might be as simple as writing a question to the mailing list or filing a bug report. Over time, the agent displays good faith and competence. (I’m disappointed to read just now that Wenger ultimately abandoned this model in favor of a theory of dualities. Is that a Hail Mary for empirical content for the theory? Also interested to follow links on this topic to a citation of von Krogh 1998, whose later work found its way onto my Open Collaboration and Peer Production syllabus. It’s a small world.

I’ve begun reading, as I write this, a fascinating paper by Hildreth and Kimble 2002 and have now lost my thread. Can I recover?)

Some questions:

  • Can this process of context-formation be characterized empirically through an analysis of e.g. the timing dynamics of communication (cf. Thomas Maillart’s work)? If so, what does that tell us about the design of information systems for privacy?
  • What about illegitimate peripheral participation? Arguably, this blog is that kind of participation–it participates in a form of informal, unendorsed quasi-scholarship. It is a tool of context and disciplinary collapse. Is that a kind of violation of privacy? Why not?
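On the first question, a minimal sketch of what such a timing analysis might look like: compute inter-event intervals from communication timestamps and summarize them with the burstiness coefficient B = (σ − μ)/(σ + μ), a standard measure that is 0 for Poisson-like activity, −1 for a perfectly regular stream, and approaches 1 for highly bursty activity. The timestamps are invented, and this is only a gesture at the empirical method, not Maillart’s actual approach.

```python
import statistics

# Invented timestamps of communication events (e.g. mailing-list posts).
timestamps = [0, 1, 2, 3, 40, 41, 42, 80]

# Inter-event intervals between consecutive events.
intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]

mu = statistics.mean(intervals)
sigma = statistics.pstdev(intervals)

# Burstiness coefficient: (sigma - mu) / (sigma + mu).
burstiness = (sigma - mu) / (sigma + mu)

print(round(burstiness, 3))  # positive: burstier than a regular stream
```

Whether a signature like this tracks the deepening of trust within a forming context is exactly the kind of empirical question the bullet point raises.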

responding to @npdoty on ethics in engineering

Nick Doty wrote a thorough and thoughtful response to my earlier post about the Facebook research ethics problem, correcting me on a number of points.

In particular, he highlights how academic ethicists like Floridi and Nissenbaum have an impact on industry regulation. It’s worth reading for sure.

Nick writes from an interesting position. Since he works for the W3C himself, he is closer to the policy decision makers on these issues. I think this, as well as his general erudition, give him a richer view of how these debates play out. Contrast that with the debate that happens for public consumption, which is naturally less focused.

In trying to understand scholarly work on these ethical and political issues of technology, I’m struck by how differences in where writers and audiences are coming from lead to communication breakdown. The recent blast of popular scholarship about ‘algorithms’, for example, is bewildering to me. I had the privilege of learning what an algorithm was fairly early. I learned about quicksort in an introductory computing class in college. While certainly an intellectual accomplishment, quicksort is politically quite neutral.
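For readers who skipped that introductory class, the algorithm fits in a few lines. A standard (if not maximally efficient) Python rendering, to show just how politically inert the object itself is:

```python
def quicksort(xs):
    """Recursive quicksort: pick a pivot, partition, recurse.
    O(n log n) on average, O(n^2) in the worst case."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```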

What’s odd is how certain contemporary popular scholarship seeks to introduce an unknowing audience to algorithms not via their basic properties–their pseudocode form, their construction from more fundamental computing components, their running time–but via their application in select and controversial contexts. Is this good for public education? Or is this capitalizing on the vagaries of public attention?

My democratic values are being sorely tested by the quality of public discussion on matters like these. I’m becoming more content with the fact that in reality, these decisions are made by self-selecting experts in inaccessible conversations. To hope otherwise is to downplay the genuine complexity of technical problems and the amount of effort it takes to truly understand them.

But if I can sit complacently with my own expertise, this does not seem like a political solution. The FCC’s willingness to accept public comment, which normally does not elicit the response of a mass action, was just tested by Net Neutrality activists. I see from the linked article that other media-related requests for comments were similarly swamped.

The crux, I believe, is the self-referential nature of the problem–that the mechanics of information flow among the public are both what’s at stake (in terms of technical outcomes) and what drives the process to begin with, when it’s democratic. This is a recipe for a chaotic process. Perhaps there are no attractors or steady states.

Following Rasch’s analysis of Habermas and Luhmann’s disagreement as to the fate of complex social systems, we’ve got at least two possible outcomes for how these debates play out. On the one hand, rationality may prevail. Genuine interlocutors, given enough time and with shared standards of discourse, can arrive at consensus about how to act–or, what technical standards to adopt, or what patches to accept into foundational software. On the other hand, the layering of those standards on top of each other, and the reaction of users to them as they build layers of communication on top of the technical edifice, can create further irreducible complexity. With that complexity comes further ethical dilemmas and political tensions.

A good desideratum for a communications system that is used to determine the technicalities of its own design is that its algorithms should intelligently manage the complexity of arriving at normative consensus.