Digifesto


late modern social epistemology round up; technical vs. hermeneutical correctness

Consider on the one hand what we might call Habermasian transcendental pragmatism, according to which knowledge can be categorized by how it addresses one of several generalized human interests:

  • The interest of power over nature or other beings, being technical knowledge
  • The interest of agreement with others for the sake of collective action, being hermeneutic knowledge
  • The interest of emancipation from present socially imposed conditions, being critical or reflexive knowledge

Consider in contrast what we might call the Luhmann or Foucault model, in which knowledge is created via system autopoiesis. Luhmann talks about autopoiesis in a social system; Foucault talks about knowledge in a system of power in much the same way.

It is difficult to reconcile these views. This may be what was at the heart of the Habermas-Luhmann debate. Can we parse out the problem in any way that helps reconcile these views?

First, let’s consider the Luhmann view. We might ease the tension in it by naming what we’ve called “knowledge” something like “belief”, removing the implication that the belief is true. Autopoiesis, after all, is a powerful enough process that it would seem to preserve all kinds of myths and errors, should they be important to the survival of the system in which they circulate.

This picture of knowledge, which we might call evolutionary or, alternatively, historicist, is certainly a relativist one. At the intersection of institutions within which different partial perspectives are embedded, we are bound to see political contest.

In light of this, Habermas’s categorization of knowledge as what addresses generalized human interests can be seen as a way of identifying knowledge that transcends particular social systems. There is a normative component of this theory–knowledge should be such a thing. But there is also a descriptive component. One predicts, under Habermas’s hypothesis, that the knowledge that survives political contest at the intersection of social systems is that which addresses generalized interests.

Something I have perhaps overlooked in the past is the importance of the fact that there are multiple and sometimes contradictory general interests. One persistent difficulty in the search for truth is the conflict between what is technically correct and what is hermeneutically correct.

If a statement or theory is technically correct, then it can be reliably used by agents to predict and control the world. The objects of this prediction and control can be objects, or they can be other agents.

If a statement or theory is hermeneutically correct, then it is the reliable consensus of agents involved in a project of mutual understanding and respect. Hermeneutically correct beliefs might stress universal freedom and potential, a narrative of shared history, and a normative goal of progress against inequality. Another word for ‘hermeneutic’ might be ‘political’. Politically correct knowledges are those shared beliefs without which the members of a polity would not be able to stand each other.

In everyday discourse we can identify many examples of statements that are technically correct but hermeneutically (or politically) incorrect, and vice versa. I will not enumerate them here. In these cases, the technically correct view is identified as “offensive” because in a sense it is a defection from a voluntary social contract. Hermeneutic correctness binds together a particular social system by capturing what participants must agree upon in order for all to safely participate. For a member of that social system to assert their own agency over others, to identify ways in which others may be predicted and controlled without their consent or choice in the matter, is disrespectful. Persistent disrespect results in the ejection of the offender from the polity. (cf. Pasquale’s distinction between “California engineers and New York quants” and “citizens”.)

A cruel consequence of these dynamics is social stratification based on the accumulation of politically forbidden technical knowledge.

We can tell this story again and again: A society is bound together by hermeneutically stable knowledge–an ideology, perhaps. Somebody ‘smart’ begins experimentation and identifies a technical truth that is hermeneutically incorrect, meaning that if the idea were to spread it would erode the consensus on which the social system depends. Perhaps the new idea degrades others by revealing that something believed to be an act of free will is, in fact, determined by nature. Perhaps the new idea is inaccessible to others because it depends on some rare capacity. In any case, it cannot be willfully consented to by the others.

The social system begins to have an immune reaction. Society has seen this kind of thing before. Historically, this idea has led to abuse, exploitation, infamy. Those with forbidden knowledge should be shunned, distrusted, perhaps punished. Those with disrespectful technical ideas are discouraged from expressing them.

Technical knowledge thereby becomes socially isolated. Seeking out its own, it becomes concentrated. Already shunned by society, the isolated technologists put their knowledge to use. They gain advantage. Revenge is had by the nerds.


formalizing the cultural observer

I’m taking a brief break from Horkheimer because he is so depressing and because I believe the second half of Eclipse of Reason may include new ideas that will take energy to internalize.

In the meantime, I’ve rediscovered Søren Brier’s Cybersemiotics: Why Information Is Not Enough! (2008), which has remained faithfully on my desk for months.

Brier is concerned with the possibility of meaning generally, and attempts to synthesize the positions of Peirce (recall: philosophically disliked by Horkheimer as a pragmatist), Wittgenstein (who first was an advocate of the formalization of reason and language in his Tractatus, then turned dramatically against it in his Philosophical Investigations), second-order cyberneticists like Varela and Maturana, and the social theorist Niklas Luhmann.

Brier does not make any concessions to simplicity. Rather, his approach is to begin with the simplest theories of communication (Shannon) and show where each fails to account for a more complex form of interaction between more completely defined organisms. In this way, he reveals how each simpler form of communication is the core around which a more elaborate form of meaning-making is formed. He finally arrives at a picture of meaning-making that encompasses all of reality, including that which can be scientifically understood, but one that is necessarily incomplete and an open system. Meaning is all-pervading but never all-encompassing.
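Brier’s starting point, Shannon’s theory of communication, can be made concrete in a few lines. A minimal sketch (the symbol probabilities here are invented for illustration): Shannon measures only the average information content of a source, saying nothing about meaning — which is exactly where Brier begins his critique.

```python
import math

def shannon_entropy(probs):
    """Average information content, in bits per symbol, of a
    source emitting symbols with the given probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries one bit per toss; a biased coin carries less,
# because its output is more predictable.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.9, 0.1]))  # about 0.469
```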

One element that makes meaning more complex than simple Shannon-esque communication is the role of the observer, who is maintained semiotically through an accomplishment of self-reference through time. This observer is a product of her own contingency. The language she uses is the result of nature, AND history, AND her own lived life. There is a specificity to her words and meanings that radiates outward as she communicates, meanings that interact in cybernetic exchange with the specific meanings of other speakers/observers. Language evolves in an ecology of meaning that can only poorly be reflected back upon the speaker.

What then can be said of the cultural observer, who carefully gathers meanings, distills them, and expresses new ones conclusively? She is a cybernetic captain, steering the world in one way or another, but only the world she perceives and conceives. Perhaps this is Haraway’s cyborg, existing in time and space through a self-referential loop, reinforced by stories told again and again: “I am this, I am this, I am this.” It is by clinging to this identity that the cyborg achieves the partiality glorified by Haraway. It is also this identity that positions her as an antagonist as she must daily fight the forces of entropy that would dissolve her personality.

Built on cybernetic foundations, does anything in principle prevent the formalization and implementation of Brier’s semiotic logic? What would a cultural observer be that stands betwixt all cultures, looming like a spider on the webs of communication that wrap the earth at inconceivable scale? Without the constraints of partiality that bind one human observer, belonging to one culture, what could such a robot scientist see? What meaning would it make for itself, or intend?

This is not simply an issue of the interpretability of the algorithms used by such a machine. More deeply, it is the problem that these machines do not speak for themselves. They have no self-reference or identity, and so do not participate in meaning-making except instrumentally as infrastructure. This cultural observer that is in the position to observe culture in the making without the limits of human partiality for now only serves to amplify signal or dampen noise. The design is incomplete.

Privacy, trust, context, and legitimate peripheral participation

Privacy is important. For Nissenbaum, what’s essential to privacy is control over context. But what is context?

Using Luhmann’s framework of social systems–ignoring for a moment, e.g., Habermas’s criticism and accepting the naturalized, systems-theoretic understanding of society–we would have to see a context as a subsystem of the total social system. Insofar as the social system is constituted by many acts of communication–let’s visualize this as a network of agents whose edges are acts of communication–a context is something preserved by configurations of agents and the ways they interact.
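As a toy illustration only–not Luhmann’s or Nissenbaum’s own formalism–one might treat a “context” as a connected component of the communication graph: a cluster of agents held together by their acts of communication. The names and the communication log below are invented.

```python
from collections import defaultdict

# Hypothetical communication log: each pair is one act of communication.
acts = [("anne", "betsy"), ("betsy", "carol"), ("dmitri", "elena")]

def contexts(acts):
    """Very roughly, a 'context' here is a connected component
    of the undirected communication graph."""
    graph = defaultdict(set)
    for a, b in acts:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:  # depth-first traversal of one component
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        components.append(comp)
    return components

# Two components: {anne, betsy, carol} and {dmitri, elena}.
print(contexts(acts))
```

A fuller model would of course make edges directed, weighted by history, and typed by the kind of communication, but even the toy version shows how structure alone can carve a population into contexts.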

Some of the forces that shape a social system will be exogenous. A river dividing two cities or, more abstractly, distance. In the digital domain, the barriers of interoperability between one virtual community infrastructure and another.

But others will be endogenous, formed from the social interactions themselves. An example is the gradual deepening of trust between agents based on a history of communication. Perhaps early conversations are formal, stilted. Later, an agent takes a risk, sharing something more personal–more private? It is reciprocated. Slowly, a trust bond, an evinced sharing of interests and mutual investment, becomes the foundation of cooperation. The Prisoner’s Dilemma is solved the old fashioned way.
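The “old fashioned” solution to the Prisoner’s Dilemma–repeated play with reciprocity–can be sketched directly. The payoff values below are the standard ones from Axelrod’s tournaments, and the strategies are illustrative:

```python
# Payoffs for one round: (my move, their move) -> my payoff.
# C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate first; afterwards, mirror the partner's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []          # what each agent has seen the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Mutual reciprocity sustains cooperation: 3 points each per round.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# A defector gains once, then the relationship collapses.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The early, low-stakes rounds are exactly the “formal, stilted” conversations above: each act of reciprocated risk is what turns a sequence of transmissions into a trust bond.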

Following Carey’s logic that communication as mere transmission when sustained over time becomes communication as ritual and the foundation of community, we can look at this slow process of trust formation as one of the ways that a context, in Nissenbaum’s sense, perhaps, forms. If Anne and Betsy have mutually internalized each others interests, then information flow between them will by and large support the interests of the pair, and Betsy will have low incentives to reveal private information in a way that would be detrimental to Anne.

Of course this is a huge oversimplification in lots of ways. One way is that it does not take into account how the same agent may participate in many social roles or contexts. In many circumstances, communication is not a single edge from one agent to another. Perhaps the situation is better represented as a hypergraph. One reason why this whole domain may be so difficult to reason about is the sheer representational complexity of modeling the situation. It may require the kind of mathematical sophistication used by quantum physicists. Why not?
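The hypergraph idea can at least be made concrete, if not sophisticated. In this sketch–agents, contexts, and communications all invented for illustration–each hyperedge is one act of communication among a set of agents, tagged with the context in which it occurs, so one agent naturally straddles several contexts:

```python
# Each hyperedge: (set of participating agents, context label).
hyperedges = [
    ({"anne", "betsy"}, "friendship"),
    ({"anne", "carol", "dmitri"}, "workplace"),
    ({"betsy", "dmitri"}, "church"),
]

def contexts_of(agent, hyperedges):
    """All contexts in which an agent participates."""
    return {ctx for members, ctx in hyperedges if agent in members}

def coparticipants(agent, hyperedges):
    """Everyone the agent communicates with, across all contexts."""
    return {m for members, _ in hyperedges if agent in members
            for m in members} - {agent}

print(contexts_of("anne", hyperedges))     # friendship and workplace
print(coparticipants("dmitri", hyperedges))  # anne, carol, betsy
```

Even this toy version shows why privacy is hard to model: information appropriate among the members of one hyperedge can leak through an agent who belongs to another.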

Not having that kind of insight into the problem yet, I will continue to sling what the social scientists call ‘theory’. Let’s talk about an existing community of practice, where the practice is a certain kind of communication. A community of scholars. A community of software developers. Weird Twitter. A backchannel mailing list coordinating a political campaign. A church.

According to Lave and Wenger, the way newcomers gradually become members and oldtimers of a community of practice is legitimate peripheral participation. This is consistent with the model described above characterizing the growth of trust through gradually deepening communication. Peripheral participation is low-risk. In an open source context, this might be as simple as writing a question to the mailing list or filing a bug report. Over time, the agent displays good faith and competence. (I’m disappointed to read just now that Wenger ultimately abandoned this model in favor of a theory of dualities. Is that a Hail Mary for empirical content for the theory? Also interested to follow links on this topic to a citation of von Krogh 1998, whose later work found its way onto my Open Collaboration and Peer Production syllabus. It’s a small world.

I’ve begun reading, as I write this, a fascinating paper by Hildreth and Kimble (2002), and have now lost my thread. Can I recover?)

Some questions:

  • Can this process of context-formation be characterized empirically through an analysis of e.g. the timing dynamics of communication (cf. Thomas Maillart’s work)? If so, what does that tell us about the design of information systems for privacy?
  • What about illegitimate peripheral participation? Arguably, this blog is that kind of participation–it participates in a form of informal, unendorsed quasi-scholarship. It is a tool of context and disciplinary collapse. Is that a kind of violation of privacy? Why not?

responding to @npdoty on ethics in engineering

Nick Doty wrote a thorough and thoughtful response to my earlier post about the Facebook research ethics problem, correcting me on a number of points.

In particular, he highlights how academic ethicists like Floridi and Nissenbaum have an impact on industry regulation. It’s worth reading for sure.

Nick writes from an interesting position. Since he works for the W3C himself, he is closer to the policy decision makers on these issues. I think this, as well as his general erudition, give him a richer view of how these debates play out. Contrast that with the debate that happens for public consumption, which is naturally less focused.

In trying to understand scholarly work on these ethical and political issues of technology, I’m struck by how differences in where writers and audiences are coming from lead to communication breakdown. The recent blast of popular scholarship about ‘algorithms’, for example, is bewildering to me. I had the privilege of learning what an algorithm was fairly early. I learned about quicksort in an introductory computing class in college. While certainly an intellectual accomplishment, quicksort is politically quite neutral.
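For reference, quicksort’s basic form really is as politically inert as it sounds–a recursive partition-and-conquer with average running time O(n log n):

```python
def quicksort(xs):
    """Classic quicksort: average O(n log n), worst case O(n^2).
    Written for clarity, not speed; not in-place."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```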

What’s odd is how certain contemporary popular scholarship seeks to introduce an unknowing audience to algorithms not via their basic properties–their pseudocode form, their construction from more fundamental computing components, their running time–but via their application in select and controversial contexts. Is this good for public education? Or is this capitalizing on the vagaries of public attention?

My democratic values are being sorely tested by the quality of public discussion on matters like these. I’m becoming more content with the fact that in reality, these decisions are made by self-selecting experts in inaccessible conversations. To hope otherwise is to downplay the genuine complexity of technical problems and the amount of effort it takes to truly understand them.

But if I can sit complacently with my own expertise, this does not seem like a political solution. The FCC’s willingness to accept public comment, which normally does not elicit the response of a mass action, was just tested by Net Neutrality activists. I see from the linked article that other media-related requests for comments were similarly swamped.

The crux, I believe, is the self-referential nature of the problem–that the mechanics of information flow among the public are both what’s at stake (in terms of technical outcomes) and what drives the process to begin with, when it’s democratic. This is a recipe for a chaotic process. Perhaps there are no attractors or steady states.

Following Rasch’s analysis of Habermas and Luhmann’s disagreement as to the fate of complex social systems, we’ve got at least two possible outcomes for how these debates play out. On the one hand, rationality may prevail. Genuine interlocutors, given enough time and with shared standards of discourse, can arrive at consensus about how to act–or, what technical standards to adopt, or what patches to accept into foundational software. On the other hand, the layering of those standards on top of each other, and the reaction of users to them as they build layers of communication on top of the technical edifice, can create further irreducible complexity. With that complexity comes further ethical dilemmas and political tensions.

A good desideratum for a communications system that is used to determine the technicalities of its own design is that its algorithms should intelligently manage the complexity of arriving at normative consensus.