Digifesto


double contingency and technology

One of the best ideas to come out of the social sciences is “double contingency”: the fact that two people engaged in communication are in a sense unpredictable to each other. That mutual unpredictability is an element of what it means to be in communication with another.

The most recent articulation of this idea is from Luhmann, who was interested in society as a system of communication. Luhmann is not focused on the phenomenology of the participants in a social system; in a sense, he looks at social systems the way an analyst might look at communications data from a social media site. The social system is the set of messages. Luhmann is an interesting figure in intellectual history in part because he is the one who made the work of Maturana and Varela officially part of the German philosophical canon. That’s a big deal, as Maturana and Varela’s intellectual contributions–around the idea of autopoiesis, for example–were tremendously original, powerful, and good.

“Double contingency” was also discussed, one reads, by Talcott Parsons. This does not come up often because at some point the discipline of Sociology just decided to bury Parsons.

Double contingency comes up in interesting ways in European legal scholarship about technology. Luhmann, a dense German writer, is not read much in the United States, despite his being essentially right about things. Hildebrandt (2019) uses double contingency in her perhaps perplexingly framed argument for the “incomputability” of human personhood. Teubner (2006) makes a somewhat different but related argument about agency, double contingency, and electronic agents.

Hildebrandt and Teubner make for an interesting contrast. Hildebrandt is interested in the sanctity of humanity qua humanity, and in particular of privacy defined as the freedom to be unpredictable. This is an interesting inversion for European phenomenological philosophy. Recall that originally in European phenomenology human dignity was tied to autonomy, but autonomy depended on universalized rationality, with the implication that the most important thing about human dignity was that one followed universal moral rules (Kant). Hildebrandt is almost staking out an opposite position: that Arendtian natality, the unpredictability of being an original being from birth, is the source of one’s dignity. Paradoxically, Hildebrandt argues both that humanity has this natality essentially–so that claims that predictive technologies might truly know the data subject are hubris–and that the use of these predictive technologies is a threat to natality unless it is limited by data protection laws that ensure the contestability of automated decisions.

Teubner (2006) takes a somewhat broader and, in my view, more self-consistent view. Grounding his argument firmly in Luhmann and Latour, Teubner is interested in the grounds of legally recognized (as opposed to ontologically, philosophically sanctified) personhood. And, he finds, the conditions of personhood can apply to many things besides humans! “Black box, double contingency, and addressability”, three fictions on which the idea of personhood depends, can apply to corporations and electronic agents as well as to humans individually. This provides a kind of consistency and rationale for why we allow these kinds of entities to engage in legal contracts with each other. The contract, it is theorized, is a way of managing uncertainty, reducing the amount of contingency in the inherently “double contingency”-laden relationship.

Something of the old Kantian position comes through in Teubner, in that contracts and the law are regulatory. However, Teubner, like Nissenbaum, is ultimately a pluralist. Teubner writes about multiple “ecologies” in which the subject is engaged, and to which they are accountable in different modalities. So, the person, qua economic agent, is addressed in terms of their preferences. But the person, qua legal subject, is addressed in terms of their embodiment of norms. The “whole person” does not appear in any singular ecology.

I’m sympathetic with the Teubnerian view here, perhaps in contrast with Hildebrandt’s view, in the following sense: while there may indeed be some intrinsic indeterminacy to an individual, this indeterminacy is meaningless unless it is also situated in (some) social ecology. However, what makes a person contingent vis-à-vis one ecology is precisely that only a fragment of them is available to that ecology. The contingency to the first ecology is a consequence of their simultaneous presence within other ecologies. The person is autonomous, and hence also unpredictable, because of this multiplied, fragmented identity. Teubner, I think correctly, concludes that a limited form of personhood extends to non-human agents, but as these agents will be even more fragmented than humans, they are only persons in an attenuated sense.

I’d argue that Teubner helpfully backfills how personhood is socially constructed and accomplished, as opposed to guaranteed from birth, in a way that complements Hildebrandt nicely. In the 2019 article cited here, Hildebrandt argues for contestability of automated decisions as a means of preserving privacy. Teubner’s theory suggests that personhood–as participant in double contingency, as a black box–is threatened rather by context collapse, or the subverting of the various distinct social ecologies into a single platform in which data is shared ubiquitously between services. This provides a normative, universalist defense of keeping contexts separate (which in a different article Hildebrandt connects to purpose binding in the GDPR), something never quite accomplished in, for example, Nissenbaum’s contextual integrity.

References

Hildebrandt, Mireille. “Privacy as protection of the incomputable self: From agnostic to agonistic machine learning.” Theoretical Inquiries in Law 20.1 (2019): 83-121.

Teubner, Gunther. “Rights of non-humans? Electronic agents and animals as new actors in politics and law.” Journal of Law and Society 33.4 (2006): 497-521.

System 2 hegemony and its discontents

Recent conversations have brought me back to the third rail of different modalities of knowledge and their implications for academic disciplines. God help me. The chain leading up to this is: a reminder of how frustrating it was trying to work with social scientists who methodologically reject the explanatory power of statistics, an intellectual encounter with a 20th century “complex systems” theorist who also didn’t seem to understand statistics, and the slow realization that’s been bubbling up for me over the years that I probably need to write an article or book about the phenomenology of probability, because I can’t find anything satisfying about it.

The hypothesis I am now entertaining is that probabilistic or statistical reasoning is the intellectual crux, disciplinarily. The fields we now call “STEM” are all happy to embrace statistics as their main mode of empirical verification. This includes the use of mathematical proof for “exact” or a priori verification of methods. Sometimes the use of statistics is delayed or implicit; there is qualitative research that is totally consistent with statistical methods. But the key to this whole approach is that the fields, in combination, are striving for consistency.

But not everybody is on board with statistics! Why is that?

One reason may be because statistics is difficult to learn and execute. Doing probabilistic reasoning correctly is at times counter-intuitive. That means that quite literally it can make your head hurt to think about it.

There is a lot of very famous empirical cognitive psychology that has explored this topic in depth. The heuristics and biases research program of Kahneman and Tversky was critical for showing that human behavior rarely accords with decision-theoretic models of mathematical, probabilistic rationality. An intuitive, “fast”, prereflective form of thinking (“System 1”) is capable of making snap judgments but is prone to biases such as the availability heuristic and the representativeness heuristic.

A couple of general comments can be made about System 1. (These are taken from Tetlock’s review of this material in Superforecasting.) First, a hallmark of System 1 is that it takes whatever evidence it is working with as given; it never second-guesses it or questions its validity. Second, System 1 is fantastic at providing verbal rationalizations and justifications of anything that it encounters, even when these can be shown to be disconnected from reality. Many colorful studies of split-brain cases, but also many other lab experiments, show the willingness people have to make up stories to explain anything, and their unwillingness to say, “this could be due to one of a hundred different reasons, or a mix of them, and so I don’t know.”

The cognitive psychologists also describe a System 2 cognitive process that is more deliberate and reflective. Presumably, this is the system that is sometimes capable of statistical or otherwise logical reasoning. And a big part of statistical reasoning is questioning the source of your evidence. A robust application of System 2 reasoning is capable of overcoming System 1’s biases. At the level of institutional knowledge creation, the statistical sciences are composed mainly of the formalized, shared results of System 2 reasoning.
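A classic case from the heuristics and biases literature is base-rate neglect: System 1 reads a positive result from an accurate test as near-certain, while the System 2 calculation, which questions how the evidence arises, can yield a much lower probability. A minimal sketch (the prevalence and error rates here are illustrative numbers, not drawn from any particular study):

```python
# Base-rate neglect: why a positive result on a fairly accurate test
# can still leave the underlying condition improbable.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Illustrative numbers: 1% prevalence, 90% sensitivity, 9% false positives.
p = posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.09)
print(round(p, 3))  # 0.092 -- roughly 9%, not the ~90% System 1 expects
```

The point is not the particular numbers but that the correct answer requires explicitly weighing the false positives generated by the large unaffected population–exactly the step System 1 skips.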

Tetlock’s work, from Expert Political Judgment and on, is remarkable for showing that deference to one or the other cognitive system is to some extent a robust personality trait. Famously, those of the “hedgehog” cognitive style, who apply System 1 and a simplistic theory of the world to interpret everything they experience, are especially bad at predicting the outcomes of political events (what are certainly the results of ‘complex systems’), whereas the “fox” cognitive style, which is more cautious about considering evidence and coming to judgments, outperforms them. It seems that Tetlock’s analysis weighs in favor of System 2 as a way of navigating complex systems.

I would argue that there are academic disciplines, especially those grounded in Heideggerian phenomenology, that see the “dominance” of institutions (such as academic disciplines) that are based around accumulations of System 2 knowledge as a problem or threat.

This reaction has several different guises:

  • A simple rejection of cognitive psychology, which has exposed the System 1/System 2 distinction, as “behaviorism”. (This obscures the way cognitive psychology was a major break away from behaviorism in the 50’s.)
  • A call for more “authentic experience”, couched in language suggesting ownership or the true subject of one’s experience, contrasting this with the more alienated forms of knowing that rely on scientific consensus.
  • An appeal to originality: System 2 tends to converge; my System 1 methods can come up with an exciting new idea!
  • The interpretivist methodological mandate for anthropological sensitivity to “emic”, or directly “lived experience”, of research subjects. This mandate sometimes blurs several individually valid motivations, such as: when emic experience is the subject matter in its own right, but (crucially) with the caveat that the results are not generalizable; when emic sensitivity is identified via the researcher’s reflexivity as a condition for research access; or when the purpose of the work is to surface or represent otherwise underrepresented views.

There are ways to qualify or limit these kinds of methodologies or commitments that make them entirely above reproach. However, under these limits, their conclusions are always fragile. According to the hegemonic logic of System 2 institutions, a consensus of those thoroughly considering the statistical evidence can always supersede the “lived experience” of some group or individual. This is, at the methodological level, simply the idea that while we may make theory-laden observations, when those theories are disproved, those observations are invalidated as being influenced by erroneous theory. Indeed, mainstream scientific institutions take this kind of procedural objectivity as their duty. There is no such thing as science unless a lot of people are often being proven wrong.

This provokes a great deal of grievance. “Who made scientists, an unrepresentative class of people and machines disconnected from authentic experience, the arbiter of the real? Who are they to tell me I am wrong, or my experiences invalid?” And this is where we start to find trouble.

Perhaps most troubling is how this plays out at the level of psychodynamic politics. To have one’s lived experiences rejected, especially those lived experiences of trauma, and especially when those experiences are rejected wrongly, is deeply disturbing. One of the more mighty political tendencies of recent years has been the idea that whole classes of people are systematically subject to this treatment. This is one reason, among others, for influential calls for recalibrating the weight given to the experiences of otherwise marginalized people. This is what Furedi calls the therapeutic ethos of the Left. This is slightly different from, though often conflated with, the idea that recalibration is necessary to allow in more relevant data that was being otherwise excluded from consideration. This latter consideration comes up in a more managerialist discussion of creating technology that satisfies diverse stakeholders (…customers) through “participatory” design methods. The ambiguity of the term “bias”–does it mean a statistical error, or does it mean any tendency of an inferential system at all?–is sometimes leveraged to accomplish this conflation.

It is in practice very difficult to disentangle the different psychological motivations here. This is partly because they are deeply personal and mixed even at the level of the individual. (Highlighting this is why I have framed this in terms of the cognitive science literature). It is also partly because these issues are highly political as well. Being proven right, or wrong, has material consequences–sometimes. I’d argue: perhaps not as often as it should. But sometimes. And so there’s always a political interest, especially among those disinclined towards System 2 thinking, in maintaining a right to be wrong.

So it is hypothesized (perhaps going back to Lyotard) that at an institutional level there’s a persistent heterodox movement that rejects the ideal of communal intellectual integrity. Rather, it maintains that the field of authoritative knowledge must contain contradictions and disturbances of statistical scientific consensus. In Lyotard’s formulation, this heterodoxy seeks “legitimation by paralogy”, which suggests that its telos is at best a kind of creative intellectual emancipation from restrictive logics, generative of new ideas, but perhaps at worst a heterodoxy for its own sake.

This tendency has an uneasy relationship with the sociopolitical motive of a more integrated and representative society, which is often associated with the goal of social justice. If I understand these arguments correctly, the idea is that, in practice, legitimized paralogy is a way of giving the underrepresented a platform. This has the benefit of visibly increasing representation. Here, paralogy is legitimized as a means of affirmative action, but not as a means of improving system performance objectively.

This is a source of persistent difficulty and unease, as the paralogical tendency is never capable of truly emancipating itself, but rather, in its recuperated form, is always-already embedded in a hierarchy that it must deny to its initiates. Authenticity is subsumed, via agonism, to a procedural objectivity that proves it wrong.

Looking for references: phenomenology of probability

A number of lines of inquiry have all been pointing in the same direction for me. I now have a question and I’m on the lookout for scholarly references on it. I haven’t been able to find anything useful through my ordinary means.

I’m looking for a phenomenology of probability.

Hopefully the following paragraphs will make it clearer what I mean.

By phenomenology, I mean a systematic account (-ology) of lived experience (phenomen-). I’m looking for references especially in the “cone” of influences on Merleau-Ponty, and the “cone” of those influenced by Merleau-Ponty.

By probability, I mean the whole gestalt of uncertainty, expectation, and realization that is normally covered by the mathematical subject. The simplest example is the experience of tossing a coin. But there are countless others; this is a ubiquitous mode of phenomenon.

There is at least some indication that this phenomenon is difficult to provide a systematic account for. Probabilistic reasoning is not a very common skill. Perhaps the best account of this that I can think of is in Philip Tetlock’s Superforecasting, in which he reports that a large proportion of people are able to intuit only two kinds of uncertainty (“probably will happen” or “probably won’t happen”), another portion can reason in three (“probably will”, “probably won’t”, and “I don’t know”). For some people, asking for graded expectations (“I think there’s a 30% chance it will happen”) is more or less meaningless.
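Tetlock operationalizes graded expectations with the Brier score, which rewards forecasters whose stated probabilities track the actual frequency of events. A minimal sketch (the event sequence is invented for illustration, and rendering “probably won’t” as a flat 20% is one plausible translation of the two-bucket forecaster):

```python
def brier(forecast, outcome):
    """Brier score for one binary event: squared error between the
    stated probability and what happened (0 or 1). Lower is better;
    always saying 50% earns 0.25."""
    return (forecast - outcome) ** 2

# An event that occurs about 30% of the time, forecast repeatedly.
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # 3 of 10 occur

# Graded forecaster says 30% each time; two-bucket forecaster can
# only say "probably won't", rendered here as a flat 20%.
graded = sum(brier(0.3, o) for o in outcomes) / len(outcomes)
coarse = sum(brier(0.2, o) for o in outcomes) / len(outcomes)

print(round(graded, 2), round(coarse, 2))  # 0.21 0.22
```

The calibrated 30% forecast scores better: over many events, graded expectations beat two-bucket ones even when the buckets point the right way.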

Nevertheless, all the major quantitative institutions–finance, telecom, digital services, insurance, the hard sciences, etc.–thrive on probabilistic calculations. Perhaps there’s a concentration of this skill, and of the power it confers, in those institutions.

The other consideration leading towards the question of a phenomenology of probability is the question of the interpretation of mathematical probability theory. As is well known, the same mathematics can be interpreted in multiple ways. There is an ‘objective’, frequentist interpretation, according to which probability is the frequency of events in the world. But with the rise of machine learning, ‘subjectivist’ or Bayesian interpretations became much more popular. Bayesian probability is a calculus of rational subjective expectations, and of the transformation of those expectations according to new evidence.
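The coin toss mentioned above can illustrate the subjectivist reading: the agent holds a degree of belief about the coin’s bias and revises it as evidence arrives. A minimal sketch using the standard Beta-Bernoulli conjugate update (the prior and the toss sequence are made up for illustration):

```python
from fractions import Fraction

# Subjectivist (Bayesian) updating: belief about a coin's bias,
# represented as a Beta(a, b) distribution, revised toss by toss.

def update(a, b, heads):
    """Conjugate update: a head raises a, a tail raises b."""
    return (a + 1, b) if heads else (a, b + 1)

a, b = 1, 1  # uniform prior: no opinion yet about the bias
for toss in [True, True, True, False, True]:  # observed sequence
    a, b = update(a, b, toss)

# Posterior expectation of P(heads):
expected = Fraction(a, a + b)
print(expected)  # 5/7 after 4 heads and 1 tail
```

Nothing here refers to a long-run frequency in the world; the same arithmetic, read frequentistically, would instead be an estimate of an objective property of the coin. The mathematics underdetermines the interpretation.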

So far in my studies and research, I’ve never encountered a synthesis of Merleau-Pontean phenomenology with the subjectivist interpretation of probability. This is somewhat troubling.

Is there a treatment of this anywhere?

Instrumental realism — a few key points

Continuing my reading of Ihde (1991), I’m getting to the meat of his argument, where he compares and contrasts his instrumental realist position with two contemporaries: Heelan (1989), who, Ihde points out, holds doctorates in both physics and philosophy and so might be especially capable of philosophizing about physics praxis, and Hacking (1983), who is from my perspective the most famous of the three.

Ihde argues that he, Hacking, and Heelan are all more or less instrumental realists, but that Ihde and Heelan draw more from the phenomenological tradition, which emphasizes embodied perception and action, whereas Hacking is more in the Anglo-American ‘analytic’ tradition of starting from analysis of language. Ihde’s broader argument in the book is one of convergence: he uses the fact that many different schools of thought have arrived at similar conclusions to support the idea that those conclusions are true. That makes perfect sense to me.

Broadly speaking, instrumental realism is a position that unites philosophy of science with philosophy of technology to argue that:

  • That science is able to grasp, understand, theorize the real
  • That this reality is based on embodied perception and praxis. Or, in the more analytic framing, on observation and experiment.
  • That scientific perception and praxis is able to go “beyond” normal, every-day perception and praxis because of its use of scientific instruments, of which the microscope is a canonical example.
  • This position counters many simple relativistic threats to scientific objectivity and integrity, but does so by placing emphasis on scientific tooling. Science advances, mainly, by means of the technologies and infrastructures that it employs.
  • This position is explicitly embodied and materialist, counter to many claims that scientific realism depends on its being disembodied or transcendental.

This is all very promising though there are nuances to work out. Ihde’s study of his contemporaries is telling.

Ihde paints Heelan as a compelling thinker on this topic, though a bit blinkered by his emphasis on physics as the true or first science. Heelan’s view of scientific perception is that it is always both perception and measurement. Heelan being what Ihde calls a “Euro-American” (which I think is quite funny), Ihde can describe him as saying that scientific observation is both a matter of perception-praxis and a matter of hermeneutics–by which I mean the study of a text in community with others or, to use the more Foucauldean term, “discourse”. Measurement, somewhat implicitly here, is a kind of standardized way of “reading”. Ihde makes a big deal out of the subtle differences between “seeing” and “reading”.

To the extent that “discourse”, “hermeneutics”, “reading”, etc. imply a weakness of the scientific standpoint, they weigh against the ‘realism’ of instrumental realism. However, the term measurement is telling in that the difference between, say, different units of measurement of length, mass, time, etc. does not challenge the veracity of the claim “there are 24 hours in a day” because translating between different units is trivial.

Ihde characterizes Hacking as a fellow traveler, converging on instrumental realism when he breaks from his own analytic tradition to point out that experiment is one of the most important features of science, and that experiment depends on and is advanced by instrumentation. Ihde writes that Hacking is quite concerned about “(a) how an instrument is made, particularly with respect to theory-driven design, and (b) the physical processes entailed in the “how” or conditions of use.” Which makes perfect sense to me–that’s exactly what you’d want to scrutinize if you’re taking the ‘realism’ in instrumental realism seriously.

Ihde’s positions here, like those of his contemporaries, seem perfectly reasonable to me. I’m quite happy to adopt this view; it corresponds to conclusions I’ve reached in my own reading and practice, and it’s nice to have a solid reference and term for it.

The questions that come up next are how instrumental realism applies to today’s controversies about science and technology. Just a handful of notes here:

  • I work quite a bit with scientific software. It’s quite clear to me that scientific software development is a major field of scientific instrumentation today. Scientists “see” and “do” via computers and software controls. This has made “data science” a core aspect of 21st century science in general, as it’s the part of science that is closest to the instrumentation. This confirms my long-held view that scientific software communities are the groups to study if you’re trying to understand the sociology of science today.
  • On the other hand, it’s becoming increasingly clear in scientific practice that you can’t do software-driven science without the Internet and digital services, and these are now controlled by an oligopoly of digital services conglomerates. The hardware infrastructure–data centers, caching services, telecom broadly speaking, cloud computing hubs–goes far beyond the scientific libraries. Scientific instrumentation depends critically now on mass corporate IT.
  • These issues are compounded by how Internet infrastructure–now privately owned and controlled for all intents and purposes–is also the instrument of so much social science research. Don’t get me started on social media platforms as research tools. For me, the best resource on this is Tufekci, 2014.
  • The most hot-button, politically charged critique in the philosophy of science space is that science and/or data science and/or AI as it is currently constituted is biased because of who is represented in these research communities. The position being contested is the idea that AI/data science/computational social science etc. is objective because it is designed in a way that aligns with mathematical theory.
    • I would be very interested to read something connecting postcolonial, critical race, and feminist AI/data science practices to instrumental realism directly. I think these groups ought to be able to speak to each other easily, since the instrumental realists from the start are interested in the situated embodiment of the observer.
    • On the other hand, I think it would be difficult for the critical scholars to find fault in the “hard core” of data science/computing/AI technologies/instruments because, truly, they are designed according to mathematical theory that is totally general. This is what I think people mean when they say AI is objective because it’s “just math”. AI/data science praxis makes you sensitive to what aspects of the tooling are part of the core (libraries of algorithms, based on vetted mathematical theorems) and what are more incidental (training data sets, for example, or particular parameterizations of the general algorithms). If critical scholars focused on these parts of the scientific “stack”, and didn’t make sweeping comments that sound like they implicate the “core”, which we have every reason to believe is quite solid, they would probably get less resistance.
    • On the other hand, if science is both a matter of perception-praxis and hermeneutics, then maybe the representational concerns are best left on the hermeneutic side of the equation.

References

Hacking, I. (1983). Representing and Intervening: Introductory Topics in the Philosophy of Natural Science.

Heelan, P. A. (1989). Space-perception and the philosophy of science. Univ of California Press.

Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology (Vol. 626). Indiana University Press.

Tufekci, Z. (2014, May). Big questions for social media big data: Representativeness, validity and other methodological pitfalls. In Eighth International AAAI Conference on Weblogs and Social Media.

Considering Agre: More Merleau-Ponty, less Heidegger, in technology design, please

I’ve had some wonderful interlocutors lately. One (in private, and therefore anonymously) has recommended Don Ihde’s postphenomenology of science. I’ve been reading and enjoying Ihde’s Instrumental Realism (1991) and finding it very fruitful. Ihde is influential in some contemporary European theories of the interaction between law and technology. Tapan Parikh has (on Twitter) asked me why I haven’t been engaging more with Agre (e.g., 1997). I’ve been reminded by him and others of work in “critical HCI”, a field I encountered a lot in graduate school, which has its roots in, perhaps, Suchman (1987).

I don’t like and have never liked critical HCI and have resented its pretensions of being “more ethical” than other fields of technological design and practice for many years. I state this as a psychological fact, not as an objective judgment of the field. This morning I’m taking a moment to meditate on why I feel this way, and what that means for my work.

Agre (1997) has some telling anecdotes about being an AI researcher at MIT and becoming disillusioned upon encountering phenomenology and ethnomethodological work. His problem began with a search for originality.

My college did not require me to take many humanities courses, or learn to write in a professional register, and so I arrived in graduate school at MIT with little genuine knowledge beyond math and computers. …

My lack of a liberal education, it turns out, was only half of my problem. Only much later did I understand the other half, which I attribute to the historical constitution of AI as a field. A graduate student is responsible for finding a thesis topic, and this means doing something new. Yet I spent much of my first year, and indeed the next couple of years after my time away, trying very hard in vain to do anything original. Every topic I investigated seemed driven by its own powerful internal logic into a small number of technical solutions, each of which had already been investigated in the literature. …

Often when I describe my dislike for e.g. Latour, people assume that I’m on a similar educational path to Agre’s: that I am a “technical person”, perhaps with a “mathematical mind”, that I’ve never encountered any material that would challenge what has now solidified as the STEM paradigm.

That’s a stereotype that does not apply to me. For better or for worse, I had a liberal arts undergraduate education with exposure to technical subjects, social sciences, and the humanities. My graduate school education was similarly interdisciplinary.

There are people today who advocate critical HCI and design practices in the tradition of Suchman, Agre, and so on who have a healthy exposure to STEM education. There are also many who do not, and who employ this material as a kind of rearguard action, treating any less “critical” work as intrinsically tainted with the same hubris that the AI field displayed in, say, the 80’s. This is ahistorical and deeply frustrating. These conversations tend to end when the “critical” scholar insists on the phenomenological frame–arguing either implicitly or explicitly that (post-)positivism is unethical in and of itself.

It’s worth tracing the roots of this line of reasoning. Often, variations of it are deployed rhetorically in service of the cause of bringing greater representation of marginalized people into the field of technical design. It’s somewhat ironic that, as Duguid (2012) helpfully points out, this field of “critical” technology studies, drawing variously on Suchman, Dreyfus, Agre, and ultimately Latour and Woolgar, is at root Heideggerian. Heidegger’s affiliation with Nazism is well-known, boring, and in no way a direct refutation of the progressive deployments of critical design.

But back to Agre, who goes on to discuss his conversion to phenomenology. Agre’s essay is largely an account of his rejection of the project of technical creation as a goal.

… I was unable to turn to other, nontechnical fields for inspiration. … The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial — except that it reproduced the same technical schemata as the AI literature. …

… I was also continually noticing the many small transformations that my daily life underwent as a result of noticing these things. As my intuitive understanding of the workings of everyday life evolved, I would formulate new concepts and intermediate on them, whereupon the resulting spontaneous observations would push my understanding of everyday life even further away from the concepts that I had been taught. … It is hard to convey the powerful effect that this experience had upon me; my dissertation (Agre 1988), once I finally wrote it, was motivated largely by a passion to explain to my fellow AI people how our AI concepts had cut us off from an authentic experience of our own lives. I still believe this.

Agre here is connecting the hegemony of cognitive psychology and AI at the time of his writing to his realization that “authentic experience” had been “cut off”. This is so Heideggerian. Agre is basically telling us that he independently came to Heidegger’s conclusions because of his focus on “everyday life”.

This binary between “everyday life” or “lived experience” on the one hand and the practice of AI design on the other is repeated often by critical scholars today. Critical scholars with no practical experience in contemporary data science often assume that the AI of the 80’s is the same as machine learning practice today. This is an unsupported assumption, directly contradicted by the lived experience of those who work in technical fields. Unfortunately, the success of the Heideggerian binary allows those whose lived experience is “not technical” to claim that their experience has a kind of epistemic or ethical priority, due to its “authenticity”, over more technical experience.

This is devastating for the discourse around now ubiquitous and politically vital topics around the politics of technology. If people have to choose between either doing technical work or doing critical Heideggerean reflection on that work, then by definition all technical work is uncritical and therefore lacking in the je ne sais quoi that gives critical work its “ethical” allure. In my view, this binary is counterproductive. If “criticality” never actually meets technical practice, then it can never be a way to address problems caused by poor technical design. Rather, it can only be a form of institutional sublimation of problematic technical practices. The critical field is sustained by, parasitic on, bad technical design: if the technology were better, then the critical field would not be able to feed so successfully on the many frustrations and anxieties of those who encounter it.

Agre ultimately gives up on AI to go critical full time.

… My purpose here, though, is to describe how this experience led me into full-blown dissidence within the field of AI. … In order to find words for my newfound intuitions, I began studying several nontechnical fields. Most importantly, I sought out those people who claimed to be able to explain what is wrong with AI, including Hubert Dreyfus and Lucy Suchman. They, in turn, got me started reading Heidegger’s Being and Time (1961 [1927]) and Garfinkel’s Studies in Ethnomethodology (1984 [1967]). At first I found these texts impenetrable, not only because of their irreducible difficulty but also because I was still tacitly attempting to read everything as a specification for a technical mechanism. That was the only protocol of reading that I knew, and it was hard even to conceptualize the possibility of alternatives. (Many technical people have observed that phenomenological texts, when read as specifications for technical mechanisms, sound like mysticism. This is because Western mysticism, since the great spiritual forgetting of the later Renaissance, is precisely a variety of mechanism that posits impossible mechanisms.) My first intellectual breakthrough came when, for reasons I do not recall, it finally occurred to me to stop translating these strange disciplinary languages into technical schemata, and instead simply to learn them on their own terms.

What’s quite frustrating for somebody approaching this problem from a slightly broader liberal arts background than Agre’s is that he writes about encounters with only one of several different phenomenological traditions–the Heideggerean one–that have made it so successfully into American academic HCI.

This is where Don Ihde’s work is great: he is explicitly engaged with a much wider swathe of the Continental canon. In doing so, he goes to the root of phenomenology, Husserl, and, I believe most significantly, Merleau-Ponty.

Merleau-Ponty’s Phenomenology of Perception is the kind of serious, monumental work that nobody in the U.S. bothers to read because it is difficult for them to think about. When humanities education is a form of consumerism, it’s much more fun to read, I don’t know, Haraway. But as a theoretical work that combines the phenomenological tradition with empirical psychology in a way that is absolutely and always about embodiment–all the particularities of being a body and what that means for our experiences of the world–you can’t beat him.

Because Merleau-Ponty is engaged mainly with perception and praxis, rather than hermeneutics (the preoccupation of Heidegger), he is able to come up with a much more muscular account of lived experience with machines without having to dress it up in terminology about ‘cyborgs’. This excerpt, from Ihde, is illustrative:

The blind man’s tool has ceased to be an object for him, and is no longer perceived for itself; its point has become an area of sensitivity, extending the scope and active radius of touch, and providing a parallel to sight. In the exploration of things, the length of the stick does not enter expressly as a middle term: The blind man is rather aware of it through the position of objects than the position of objects through it.

In my view, it’s Merleau-Ponty’s influence that most sets up Ihde to present a productive view of instrumental realism in science, based on the role of instruments in the perception and praxis of science. This is what we should be building on when we discuss the “philosophy of data science” and other software-driven research.

Dreyfus’s (1976) famous critique of AI drew a lot on Merleau-Ponty. Dreyfus is not brought up very much in the critical literature any more because (a) many of his critiques were internalized by the AI community and led to new developments that don’t fall prey to the same criticisms, (b) people are building all kinds of embodied robots now, and (c) the “Strong AI” program, of building AI that is so much like a human mind, has not been what’s been driving AI recently: industrial applications that scale far beyond the human mind are.

So it may be that Merleau-Ponty is not used as a phenomenological basis for studying AI and technology now because, while his work is genuinely about lived experience, it does not imply that the literature of some more purely hermeneutic field of inquiry is separately able to underwrite the risks of technical practice. If instruments are an extension of the body, then the one who uses those instruments is responsible for them. That would imply, for example, that Zuckerberg is not an uncritical technologist who has built an autonomous system that is poorly designed because of the blind spots of engineering practice, but rather that he is the responsible actor leading the assemblage that is Facebook as an extension of himself.

Meanwhile, technical practice (I repeat myself) has changed. Agre laments that “[f]ormal reason has an unforgiving binary quality — one gap in the logic and the whole thing collapses — but this phenomenological language was more a matter of degree”. Indeed, when AI was developing along the lines of “formal reason” in the sense of axiomatic logic, this constraint would be frustrating. But in the decades since Agre was working, AI practice has become much more a “matter of degree”: it is highly statistical and probabilistic, depending on very broadly conceived spaces of representation that tune themselves based on many minute data points. Given the differences between “good old fashioned AI” based on logical representation and contemporary machine learning, it’s just bewildering when people raise these old critiques as if they are still meaningful and relevant to today’s practice. And yet the themes resurface again and again in the pitched battles of interdisciplinary warfare. The Heideggereans continue to renounce mathematics, formalism, technology, etc. as a practice in itself in favor of vague humanism. There’s a new articulation of this agenda every year, under different political guises.

Telling is how Agre, who began the journey trying to understand how to make a contribution to a technical field, winds up convincing himself that there are a lot of great academic papers to be written with no technical originality or relevance.

When I tried to explain these intuitions to other AI people, though, I quickly discovered that it is useless to speak nontechnical languages to people who are trying to translate these languages into specifications for technical mechanisms. This problem puzzled me for years, and I surely caused much bad will as I tried to force Heideggerian philosophy down the throats of people who did not want to hear it. Their stance was: if your alternative is so good then you will use it to write programs that solve problems better than anybody else’s, and then everybody will believe you. Even though I believe that building things is an important way of learning about the world, nonetheless I knew that this stance was wrong, even if I did not understand how.

I now believe that it is wrong for several reasons. One reason is simply that AI, like any other field, ought to have a space for critical reflection on its methods and concepts. Critical analysis of others’ work, if done responsibly, provides the field with a way to deepen its means of evaluating its research. It also legitimizes moral and ethical discussion and encourages connections with methods and concepts from other fields. Even if the value of critical reflection is proven only in its contribution to improved technical systems, many valuable criticisms will go unpublished if all research papers are required to present new working systems as their final result.

This point is echoed almost ten years later by another importer of ethnomethodological methods into technical academia, Dourish (2006). Today, there are academic footholds for critical work about technology, and some people write a lot of papers about it. More power to them, I guess. There is now a rarefied field of humanities scholarship in this tradition.

But when social relations truly are mediated by technology in myriad ways, it is perhaps not wrong to pursue lines of work that have more practical relevance. Doing this requires, in my view, a commitment to mathematical rigor and getting one’s hands “dirty” with the technology itself, when appropriate. I’m quite glad that there are venues to pursue these lines now. I am somewhat disappointed and annoyed that I have to share these spaces with Heideggereans, who I just don’t see as adding much beyond the recycling of outdated tropes.

I’d be very excited to read more works that engage with Merleau-Ponty and work that builds on him.

References

Agre, P. E. (1997). Lessons learned in trying to reform AI. In Social science, technical systems, and cooperative work: Beyond the Great Divide, 131.

Dourish, P. (2006, April). Implications for design. In Proceedings of the SIGCHI conference on Human Factors in computing systems (pp. 541-550).

Duguid, P. (2012). On Rereading Suchman and Situated Action. Le Libellio d’AEGIS, 8(2), 3-11.

Dreyfus, H. (1976). What computers can’t do.

Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology (Vol. 626). Indiana University Press.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge university press.

Winograd, T. & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Intellect Books.

A brief revisit of the Habermas/Luhmann debate

I’ve gotten into some arguments with friends recently about the philosophy of science. I’m also finding myself working these days, yet again, at a disciplinary problem. By which I mean: the primary difficulty of the research questions I’m asking and the methodologies I’m using at the moment is that there is no discipline that, in its primary self-understanding, asks those questions or uses those methodologies.

This and the coronavirus emergency have got me thinking, “Whatever happened to the Habermas/Luhmann debate?” It is a good time to consider this problem because it’s one that’s likely to minimize my interactions with other people at a time when that is one’s civic duty.

I refer to Rasch (1991) for an account of it. Here is a good paragraph summarizing some of the substance of the debate.

It is perhaps in this way that Luhmann can best be distinguished from Habermas. The whole movement of Habermas’s thought tends to some final resting place, prescriptively in the form of consensus as the legitimate basis for social order, and methodologically in the form of a normative underlying simple structure which is said to dictate the proper shape of surface complexity. But for Luhmann, complexity does not register the limits of human knowledge as if those limits could be overcome or compensated for by the reconstruction of some universal rule-making process. Rather, complexity, defined as the paradoxical task of solving a solved problem that cannot be solved, or only provisionally solved, or only solved by creating new problems, is the necessary ingredient for human intellectual endeavors. Complexity always remains complex and serves as a self-replenishing reservoir of possibilities (1981, 203-4). Simply put, complexity is limited understanding. It is the missing information which makes it impossible to comprehend a system fully (1985, 50-51; 1990, 81), but the absence of that information is absolutely unavoidable and paradoxically essential for the further evolution of complexity.

Rasch, 1991

In other words, Habermas believes that it’s possible, in principle, to reach a consensus around social order that is self-legitimizing and has at its core a simple, even empty, observer’s stance. This is accomplished through rational communicative action. Luhmann, on the other hand, sees the fun-house of perspectivalist warped mirrors and no such fixed point or epistemological attractor state.

But there’s another side to this debate which is not discussed so much in the same context. Habermas, by positing a communicative rationality capable of legitimization, is able to identify the obstacles to it: the “steering media”, money and power (Habermas, 1987). Whereas Luhmann understands a “social system” to be constituted by the communication within it. A social system is defined as the sum total of its speech, writing, and so on.

This has political implications. Rasch concludes:

With that in mind, one final paradox needs to be mentioned. Although Habermas is the self-identified leftist and social critic, and although Habermas sees in Luhmann and in systems theory a form of functionalist conservatism, it may very well be to Luhmann that future radical theorists will have to turn. Social and political theorists who are socially and politically committed need not continue to take theoretical concern with complexity as a sign of apathy, resignation, or conformism. As Harlan Wilson notes, the “invocation of ‘complexity’ for the purpose of devaluing general political and social theory and of creating suspicion of all varieties of general political theory in contemporary political studies is to be resisted.” It is true that the increased consciousness of complexity brings along with it the realization that “total comprehension” and “absence of distortion” are unattainable, but, Wilson continues, “when that has been admitted, it remains that only general theoretical reflection, together with a sense of history, enables us to think through the meaning of our complex social world in a systematic way” (1975, 331). The only caveat is that such “thinking through” will have to be done on the level of complexity itself and will have to recognize that theories of social complexity are part of the social complexity they investigate. It is in this way that the ability to respond to social complexity in a complex manner will continue to evolve along with the social complexity that theory tries to understand.

Rasch, 1991

One reason that Habermas is able to make a left-wing critique, whereas Luhmann can correctly be accused of being a functionalist conservative, is that Habermas’s normative stance has an irrational materialist order (perhaps what is “right wing” today) as its counterpoint. Whereas Luhmann, in asserting that social systems exist only as functional stability, does not seem to have money, power, or ultimately the violence they depend on in his ontology. It is a conservative view not because his theory lacks normativity, but because his descriptive stance is, at the end of the day, incomplete. Luhmann has no way of reckoning with the ways infrastructural power (Mann, 2008) exerts a passive external force on social systems. In other words, social systems evolve, but in an environment created by the material consequences of prior social systems, which reveal themselves as distributions of capital. This is what it means to be in the Anthropocene.

During an infrastructural crisis, such as a global pandemic in which the violence of nature threatens objectified human labor and the material supply chains that depend on it, society, often happy in times of “peace” to defer to “cultural” experts whose responsibility is the maintenance of ideology, defers instead to a different kind of expert: the epidemiologists, the operations research experts, the financial analysts. These are the occupational “social scientists” who have no need of the defensiveness of the historian, the sociologist, the anthropologist, or the political scientist. They are deployed, sometimes in the public interest, to act on their operationally valid scientific consensus. And precisely because the systems that concern them are invisible to the naked eye (microbes, social structure, probabilities), the uncompromising, atheoretical empiricism that has come to be the proud last stand of the social sciences cannot suffice. Here, theory–an accomplishment of rationality, its response to materialist power–must shine.

The question, as always, is not whether there can be progress based on a rational simplification, but to what extent an economy supports the institutions that create and sustain such a perspective, expertise, and enterprise.

References

Habermas, Jürgen. “The theory of communicative action, Volume 2: Lifeworld and system.” Polity, Cambridge (1987).

Mann, Michael. “Infrastructural power revisited.” Studies in comparative international development 43.3-4 (2008): 355.

Rasch, William. “Theories of complexity, complexities of theory: Habermas, Luhmann, and the study of social systems.” German Studies Review 14.1 (1991): 65-83.

The diverging philosophical roots of U.S. and E.U. privacy regimes

For those in the privacy scholarship community, there is an awkward truth: European data protection law is going in a different direction from U.S. Federal privacy law. A thorough realpolitical analysis of how the current U.S. regime regarding personal data has been constructed over time to advantage large technology companies can be found in Cohen’s Between Truth and Power (2019). There is, to be sure, a corresponding story to be told about EU data protection law.

Adjacent, somehow, to the operations of political power are the normative arguments leveraged both in the U.S. and in Europe for their respective regimes. Legal scholarship, however remote from actual policy change, remains a form of moral inquiry. It is possible, still, that through the professional training of lawyers and policy-makers, some form of ethical imperative can take root. Democratic interventions into the operations of power, while unlikely, are still in principle possible: but only if education stays true to principle and does not succumb to mere ideology.

This is not easy for educational institutions to accomplish. Higher education certainly is vulnerable to politics. A stark example of this was the purging of Marxist intellectuals from American academic institutions under McCarthyism. Intellectual diversity in the United States has suffered ever since. However, this was only possible because Marxism as a philosophical movement is extraneous to the legal structure of the United States. It was never embedded at a legal level in U.S. institutions.

There is a simple historical reason for this. The U.S. legal system was founded under a different set of philosophical principles; that philosophical lineage still impacts us today. The Founding Fathers were primarily influenced by John Locke. Locke rose to prominence in Britain when the Whigs, a new bourgeois class of Parliamentarian merchant leaders, rose to power, contesting the earlier monarchy. Locke’s political contributions were a treatise pointing out the absurdity of the Divine Right of Kings, the prevailing political ideology of the time, and a second treatise arguing for a natural right to property based on the appropriation of nature. This latter political philosophy was very well aligned with Britain’s new national project of colonialist expansion. With the founding of the United States, it was enshrined into the Constitution. The liberal system of rights that we enjoy in the U.S. is founded in the Lockean tradition.

Intellectual progress in Europe did not halt with Locke. Locke’s ideas were taken up by David Hume, who introduced arguments so agitating that they famously woke Immanuel Kant, in Germany, from his “dogmatic slumber”, leading him to develop a new, highly systematic account of morality and epistemology. Among the innovations of this work was the idea that human freedom is grounded in the dignity of being an autonomous person. The source of dignity is not based in a natural process such as the tilling of land. It is rather based on transcendental facts about what it means to be human. The key to morality is treating people like ends, not means; in other words, not using people as tools to other aims, but as aims in themselves.

If this sounds overly lofty to an American audience, it’s because this philosophical tradition has never taken hold in American education. In both the United Kingdom and the United States, Kantian philosophy has always been outside the mainstream. The tradition of Locke, through Hume, has continued on in what philosophers call “analytic philosophy”. This philosophy has taken on the empiricist view that the only source of knowledge is individual experience. It has transformed over centuries but continues to orbit around the individual and their rights, grounded in pragmatic considerations, and learning normative rules using the case-by-case approach of Common Law.

From Kant, a different “continental philosophy” tradition produced Hegel, who produced Marx. We can trace from Kant’s original arguments about how morality is based on the transcendental dignity of the individual to the moralistic critique that Marx made against capitalism. Capitalism, Marx argued, impugns the dignity of labor because it treats it like a means, not an end. No such argument could take root in a Lockean system, because Lockean ethics has no such prescription against treating others instrumentally.

Germany lost its way at the start of the 20th century. But the post-war regime, funded by the Marshall Plan and directed by U.S. constitutional scholars as well as repatriating German intellectuals, had the opportunity to rewrite their system of governance. They did so along Kantian lines: with statutory law, reflecting a priori rational inquiry, instead of empiricist Common Law. They were able to enshrine into their system the Kantian basis of ethics, with its focus on autonomy.

Many of the intellectuals influencing the creation of the new German state were “Marxist” in the loose sense that they were educated in the German continental intellectual tradition which, at that time, included Marx as one of its key figures. By the mid-20th century they had naturally surpassed this ideological view. However, as a consequence, the McCarthyist attack on Marxism had the effect of also purging some of the philosophical connection between German and U.S. legal education. Kantian notions of autonomy are still quite foreign to American jurisprudence. Legal arguments in the United States draw instead on a vast collection of other tools based on a much older and more piecemeal way of establishing rights. But are any of these tools up to the task of protecting human dignity?

The EU is very much influenced by Germany and the German legal system. The EU has the Kantian autonomy ethic at the heart of its conception of human rights. This philosophical commitment has recently expressed itself in the EU’s assertion of data protection law through the GDPR, whose transnational enforcement clauses have brought this centuries-old philosophical fight into contemporary legal debate in legal jurisdictions that predate the neo-Kantian legal innovations of Continental states.

The puzzle facing American legal scholars is this: while industry advocates and representatives tend to object to the strength of the GDPR, arguing that it is unworkable and/or based on poorly defined principles, the data protections that it offers seem so far to be compelling to users, and the shifting expectations around privacy in part induced by it are having effects on democratic outcomes (such as the CCPA). American legal scholars now have to try to make sense of the GDPR’s rules and find a normative basis for them. How can these expansive ideas of data protection, which some have had the audacity to argue is a new right (Hildebrandt, 2015), be grafted onto the Common Law, empiricist legal system in a way that gives it the legitimacy of being an authentically American project? Is there a way to explain data protection law that does not require the transcendental philosophical apparatus which, if adopted, would force the American mind to reconsider in a fundamental way the relationship between individuals and the collective, labor and capital, and other cornerstones of American ideology?

There may or may not be. Time will tell. My own view is that the corporate powers, which flourished under the Lockean judicial system because of the weaknesses in that philosophical model of the individual and her rights, will instinctively fight what is in fact a threatening conception of the person as autonomous by virtue of their transcendental similarity with other people. American corporate power will not bother to make a philosophical case at all; it will operate in the domain of realpolitik so well documented by Cohen. Even if this is so, it is notable that so much intellectual and economic energy is now being exerted in the friction around so powerful an idea.

References

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, USA.

Hildebrandt, M. (2015). Smart technologies and the end (s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

For a more ethical Silicon Valley, we need a wiser economics of data

Kara Swisher’s NYT op-ed about the dubious ethics of Silicon Valley and Nitasha Tiku’s WIRED article reviewing books with alternative (and perhaps more cynical than otherwise stated) stories about the rise of Silicon Valley have generated discussion and buzz among the tech commentariat.

One point of debate is whether the focus should be on “ethics” or on something more substantively defined, such as human rights. Another point is whether the emphasis should be on “ethics” or on something more substantively enforced, like laws which impose penalties of up to 4% of global annual turnover, referring of course to the GDPR.

While I’m sympathetic to the European approach (laws enforcing human rights with real teeth), I think there is something naive about it. We have not yet seen whether it’s ever really possible to fully comply with the GDPR; it could wind up being a kind of heavy tax on Big Tech companies operating in the EU, but one that doesn’t truly wind up changing how people’s data are used. In any case, the broad principles of European privacy are based on individual human dignity, and so they do not take into account the ways that corporations are social structures, i.e. sociotechnical organizations that transcend individual people. The European regulations address the problem of individual privacy while leaving mystified the question of why the current corporate organization of the world’s personal information is what it is. This sets up the fight over ‘technology ethics’ to be a political conflict between different kinds of actors whose positions are defined as much by their social habitus as by their intellectual reasons.

My own (unpopular!) view is that the solution to our problems of technology ethics are going to have to rely on a better adapted technology economics. We often forget today that economics was originally a branch of moral philosophy. Adam Smith wrote The Theory of Moral Sentiments (1759) before An Inquiry into the Nature and Causes of the Wealth of Nations (1776). Since then the main purpose of economics has been to intellectually grasp the major changes to society due to production, trade, markets, and so on in order to better steer policy and business strategy towards more fruitful equilibria. The discipline has a bad reputation among many “critical” scholars due to its role in supporting neoliberal ideology and policies, but it must be noted that this ideology and policy work is not entirely cynical; it was a successful centrist hegemony for some time. Now that it is under threat, partly due to the successes of the big tech companies that benefited under its regime, it’s worth considering what new lessons we have to learn to steer the economy in an improved direction.

The difference between an economic approach to the problems of the tech economy and either an ‘ethics’ or a ‘law’ based approach is that it inherently acknowledges that there are a wide variety of strategic actors co-creating social outcomes. Individual “ethics” will not be able to settle the outcomes of the economy because the outcomes depend on collective and uncoordinated actions. A fundamentally decent person may still do harm to others due to their own bounded rationality; “the road to hell is paved with good intentions”. Meanwhile, regulatory law is not the same as command; it is at best a way of setting the rules of a game that will be played, faithfully or not, by many others. Putting regulations in place without a good sense of how the game will play out differently because of them is just as irresponsible as implementing a sweeping business practice without thinking through the results, if not more so because the relationship between the state and citizens is coercive, not voluntary as the relationship between businesses and customers is.

Perhaps the biggest obstacle to shifting the debate about technology ethics to one about technology economics is that it requires a change in register. It drains the conversation of the pathos which is so instrumental in surfacing it as an important political topic. Sound analysis often ruins parties like this. Nevertheless, it must be done if we are to progress towards a more just solution to the crises technology gives us today.

Note on Austin’s “Cyber Policy in China”: on the emphasis on ‘ethics’

I’ve had recommended to me Greg Austin’s “Cyber Policy in China” (2014) as a good, recent work. I am not sure what I was expecting–something about facts and numbers, how companies are being regulated, etc. Just looking at the preface, it looks like this book is about something else.

The preface frames the book in the discourse, beginning in the 20th century, about the “information society”. It explicitly mentions the UN’s World Summit on the Information Society (WSIS) as a touchstone of international consensus about what the information society is: a society ‘where everyone can create, access, utilise and share information and knowledge’ to ‘achieve their full potential’ in ‘improving their quality of life’. It is ‘people-centered’.

In Chinese, the word for information society is xinxi shehui. (Please forgive me: I’ve got little to no understanding of the Chinese language, which includes not knowing how to add the appropriate diacritics to transliterations of Chinese terms.) It is related to a term, “informatization” (xinxihua), that is compared to industrialization. It means “the historical process by which information technology is fully used, information resources are developed and utilized, the exchange of information and knowledge sharing are promoted, the quality of economic growth is improved, and the transformation of economic and social development is promoted”. Austin’s interesting point is that this is “less people-centered than the UN vision and more in the mould of the materialist and technocratic traditions that Chinese Communists have preferred.”

This is an interesting statement on the difference between policy articulations by the United Nations and the CCP. It does not come as a surprise.

What did come as a surprise is how Austin chooses to orient his book.

On the assumption that outcomes in the information society are ethically determined, the analytical framework used in the book revolves around ideal policy values for achieving an advanced information society. This framework is derived from a study of ethics. Thus, the analysis is not presented as a work of social science (be that political science, industry policy or strategic studies). It is more an effort to situate the values of China’s leaders within an ethical framework implied by their acceptance of the ambition to become an advanced information society.

This comes as a surprise to me because what I expected from a book titled “Cyber Policy in China” is really something more like industry policy or strategic studies. I was not ready for, and am frankly a bit disappointed by, the idea that this is really a work of applied philosophy.

Why? I do love philosophy as a discipline and have studied it carefully for many years. I’ve written and published about ethics and technological design. But my conclusion after so much study is that “the assumption that outcomes in the information society are ethically determined” is totally incorrect. I have been situated for some time in discussions of “technology ethics”, and my main conclusions from them are that (a) “ethics” in this space are more often than not an attempt to universalize what are more narrow political and economic interests, and (b) “ethics” are constantly getting compromised by economic motivations as well as the mundane difficulty of getting information technology to work as intended in a narrow, functionally defined way. The real world is much bigger and more complex than any particular ethical lens can take in. Attempts to define technological change in terms of “ethics” are almost always a political maneuver, for good or for ill, that reduces the real complexity of technological development into a soundbite. A true ethical analysis of cyber policy would need to address industrial policy and strategic aspects, as this is what drives the “cyber” part of it.

The irony is that there is something terribly un-emic about this approach. By Austin’s own admission, the CCP’s cyber policy is motivated by material concerns about the distribution of technology and economic growth. Austin could have approached China’s cyber policy in the technocratic terms in which its architects see themselves. But instead Austin’s approach is “human-centered”, with a focus on leaders and their values. I already doubt the research on anthropological grounds because of the distance between the researcher and the subjects.

So I’m not sure what to do about this book. The preface makes it sound like it belongs to a genre of scholarship that reads well, and maybe does important ideological translation work, but does not provide something like scientific knowledge of China’s cyber policy, which is what I’m most interested in. Perhaps I should move on, or take other recommendations for reading on this topic.

some moral dilemmas

Here are some moral dilemmas:

  • A firm basis for morality is the Kantian categorical imperative: treat others as ends and not means, with the corollary that one should be able to take the principles of one’s actions and extend them as laws binding all rational beings. Closely associated and important ideas are those concerned with human dignity and rights. However, the great moral issues of today are about social forms (issues around race, gender, etc.), sociotechnical organizations (issues around the role of technology), or totalizing systemic issues (issues around climate change). Morality based on individualism and individual equivalence seems out of place when the main moral difficulties are about body agonism. What is the basis for morality for these kinds of social moral problems?
  • Theodicy has its answer: it’s bounded rationality. Ultimately what makes us different from other people, that which creates our multiplicity, is our distance from each other in terms of available information. Our disconnection, based on our different loci and foci within complex reality, is precisely what gives reality its complexity. Dealing with each other’s ignorance is the problem of being a social being. Ignorance is therefore the condition of society. Society is the condition of moral behavior; if there were only one person, there would be no such thing as right or wrong. Therefore, ignorance is a condition of morality. How, then, can morality be known?