Digifesto

Instrumental realism — a few key points

Continuing my reading of Ihde (1991), I’m getting to the meat of his argument, where he compares and contrasts his instrumental realist position with two contemporaries: Heelan (1989), who, as Ihde points out, holds doctorates in both physics and philosophy and so might be especially capable of philosophizing about physics praxis, and Hacking (1983), who is from my perspective the most famous of the three.

Ihde argues that he, Hacking, and Heelan are all more or less instrumental realists, but that Ihde and Heelan draw more from the phenomenological tradition, which emphasizes embodied perception and action, whereas Hacking is more in the Anglo-American ‘analytic’ tradition of starting from analysis of language. Ihde’s broader argument in the book is one of convergence: he uses the fact that many different schools of thought have arrived at similar conclusions to support the idea that those conclusions are true. That makes perfect sense to me.

Broadly speaking, instrumental realism is a position that unites philosophy of science with philosophy of technology to argue that:

  • Science is able to grasp, understand, and theorize the real.
  • This grasp of the real is based on embodied perception and praxis, or, in the more analytic framing, on observation and experiment.
  • Scientific perception and praxis are able to go “beyond” normal, everyday perception and praxis because of their use of scientific instruments, of which the microscope is a canonical example.
  • This position counters many simple relativistic threats to scientific objectivity and integrity, but does so by placing emphasis on scientific tooling. Science advances, mainly, by means of the technologies and infrastructures that it employs.
  • This position is explicitly embodied and materialist, counter to many claims that scientific realism depends on its being disembodied or transcendental.

This is all very promising, though there are nuances to work out. Ihde’s study of his contemporaries is telling.

Ihde paints Heelan as a compelling thinker on this topic, though a bit blinkered by his emphasis on physics as the true or first science. Heelan’s view of scientific perception is that it is always both perception and measurement. Because Heelan is what Ihde calls a “Euro-American” (which I think is quite funny), Ihde can describe him as saying that scientific observation is both a matter of perception-praxis and a matter of hermeneutics–by which I mean the studying of a text in community with others or, to use the more Foucauldean term, “discourse”. Measurement, somewhat implicitly here, is a kind of standardized way of “reading”. Ihde makes a big deal out of the subtle differences between “seeing” and “reading”.

To the extent that “discourse”, “hermeneutics”, “reading”, etc. imply a weakness of the scientific standpoint, they weigh against the ‘realism’ of instrumental realism. However, the term “measurement” is telling: the difference between, say, different units for length, mass, or time does not challenge the veracity of the claim “there are 24 hours in a day”, because translating between units is trivial.
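
To make the triviality of that translation concrete, here is a throwaway sketch (my own, purely illustrative):

```python
# The same duration expressed in different units. Translating between
# units is plain arithmetic, so the claim "there are 24 hours in a day"
# survives any choice of unit.
HOURS_PER_DAY = 24
SECONDS_PER_HOUR = 3600

day_in_hours = HOURS_PER_DAY                        # one "reading" of the duration
day_in_seconds = HOURS_PER_DAY * SECONDS_PER_HOUR   # another: 86400

# Converting back recovers the original claim exactly.
assert day_in_seconds / SECONDS_PER_HOUR == 24
```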

Ihde characterizes Hacking as a fellow traveler, converging on instrumental realism when he breaks from his own analytic tradition to point out that experiment is one of the most important features of science, and that experiment depends on and is advanced by instrumentation. Ihde writes that Hacking is quite concerned about “(a) how an instrument is made, particularly with respect to theory-driven design, and (b) the physical processes entailed in the ‘how’ or conditions of use.” This makes perfect sense to me–that’s exactly what you’d want to scrutinize if you’re taking the ‘realism’ in instrumental realism seriously.

Ihde’s positions here, like those of his contemporaries, seem perfectly reasonable to me. I’m quite happy to adopt this view; it corresponds to conclusions I’ve reached in my own reading and practice, and it’s nice to have a solid reference and term for it.

The question that comes up next is how instrumental realism applies to today’s controversies about science and technology. Just a handful of notes here:

  • I work quite a bit with scientific software. It’s quite clear to me that scientific software development is a major field of scientific instrumentation today. Scientists “see” and “do” via computers and software controls. This has made “data science” a core aspect of 21st-century science in general, as it’s the part of science that is closest to the instrumentation. This confirms my long-held view that scientific software communities are the groups to study if you’re trying to understand the sociology of science today.
  • On the other hand, it’s becoming increasingly clear in scientific practice that you can’t do software-driven science without the Internet and digital services, and these are now controlled by an oligopoly of digital services conglomerates. The hardware infrastructure–data centers, caching services, telecom broadly speaking, cloud computing hubs–goes far beyond the scientific libraries. Scientific instrumentation depends critically now on mass corporate IT.
  • These issues are compounded by how Internet infrastructure–now privately owned and controlled for all intents and purposes–is also the instrument of so much social science research. Don’t get me started on social media platforms as research tools. For me, the best resource on this is Tufekci (2014).
  • The most hot-button, politically charged critique in the philosophy of science space is that science and/or data science and/or AI as it is currently constituted is biased because of who is represented in these research communities. The position being contested is the idea that AI/data science/computational social science etc. is objective because it is designed in a way that aligns with mathematical theory.
    • I would be very interested to read something connecting postcolonial, critical race, and feminist AI/data science practices to instrumental realism directly. I think these groups ought to be able to speak to each other easily, since the instrumental realists from the start are interested in the situated embodiment of the observer.
    • On the other hand, I think it would be difficult for the critical scholars to find fault in the “hard core” of data science/computing/AI technologies/instruments because, truly, they are designed according to mathematical theory that is totally general. This is what I think people mean when they say AI is objective because it’s “just math”. AI/data science praxis makes you sensitive to which aspects of the tooling are part of the core (libraries of algorithms, based on vetted mathematical theorems) and which are more incidental (training data sets, for example, or particular parameterizations of the general algorithms). If critical scholars focused on these parts of the scientific “stack”, and didn’t make sweeping comments that sound like they implicate the “core”, which we have every reason to believe is quite solid, they would probably get less resistance. (A toy sketch of this core/incidental distinction follows this list.)
    • On the other hand, if science is both a matter of perception-praxis and hermeneutics, then maybe the representational concerns are best left on the hermeneutic side of the equation.
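
Here is a minimal sketch of the core/incidental distinction mentioned above (my own illustration, using scikit-learn’s LogisticRegression as a stand-in for the vetted algorithmic “core”): the library code is identical in both runs; only the training data, an “incidental” part of the stack, differs, and the decisions differ with it.

```python
# Sketch: the "core" (a general-purpose algorithm from a vetted library)
# versus the "incidental" (the training data fed to it).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fit_and_decide(X, y, query):
    model = LogisticRegression()  # the same general algorithm both times
    model.fit(X, y)
    return model.predict(query.reshape(1, -1))[0]

# Two training sets encoding different (synthetic) social patterns.
X_a = rng.normal(size=(200, 2))
y_a = (X_a[:, 0] > 0).astype(int)   # labels track the first feature
X_b = rng.normal(size=(200, 2))
y_b = (X_b[:, 1] > 0).astype(int)   # labels track the second feature

query = np.array([1.0, -1.0])
print(fit_and_decide(X_a, y_a, query))  # likely 1: feature 0 is positive
print(fit_and_decide(X_b, y_b, query))  # likely 0: feature 1 is negative
```

The mathematics of logistic regression is untouched across the two runs; everything contestable lives in the data.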

References

Hacking, I. (1983). Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge University Press.

Heelan, P. A. (1989). Space-perception and the philosophy of science. Univ of California Press.

Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology (Vol. 626). Indiana University Press.

Tufekci, Z. (2014, May). Big questions for social media big data: Representativeness, validity and other methodological pitfalls. In Eighth International AAAI Conference on Weblogs and Social Media.

Considering Agre: More Merleau-Ponty, less Heidegger, in technology design, please

I’ve had some wonderful interlocutors lately. One (in private, and therefore anonymously) has recommended Don Ihde’s postphenomenology of science. I’ve been reading and enjoying Ihde’s Instrumental Realism (1991) and finding it very fruitful. Ihde is influential in some contemporary European theories of the interaction between law and technology. Tapan Parikh has (on Twitter) asked me why I haven’t been engaging more with Agre (e.g., 1997). I’ve been reminded by him and others of work in “critical HCI”, a field I encountered a lot in graduate school, which has its roots in, perhaps, Suchman (1987).

I don’t like critical HCI and never have; for many years I have resented its pretensions of being “more ethical” than other fields of technological design and practice. I state this as a psychological fact, not as an objective judgment of the field. This morning I’m taking a moment to meditate on why I feel this way, and what that means for my work.

Agre (1997) has some telling anecdotes about being an AI researcher at MIT and becoming disillusioned upon encountering phenomenological and ethnomethodological work. His problem began with a search for originality.

My college did not require me to take many humanities courses, or learn to write in a professional register, and so I arrived in graduate school at MIT with little genuine knowledge beyond math and computers. …

My lack of a liberal education, it turns out, was only half of my problem. Only much later did I understand the other half, which I attribute to the historical constitution of AI as a field. A graduate student is responsible for finding a thesis topic, and this means doing something new. Yet I spent much of my first year, and indeed the next couple of years after my time away, trying very hard in vain to do anything original. Every topic I investigated seemed driven by its own powerful internal logic into a small number of technical solutions, each of which had already been investigated in the literature. …

Often when I describe my dislike for e.g. Latour, people assume that I’m on a similar educational path to Agre’s: that I am a “technical person”, perhaps with a “mathematical mind”, that I’ve never encountered any material that would challenge what has now solidified as the STEM paradigm.

That’s a stereotype that does not apply to me. For better or for worse, I had a liberal arts undergraduate education with exposure to technical subjects, social sciences, and the humanities. My graduate school education was similarly interdisciplinary.

There are people today who are advocates of critical HCI and design practices in the tradition of Suchman, Agre, and so on who have a healthy exposure to STEM education. There are also many who do not, and who employ this material as a kind of rear-guard action to treat any less “critical” work as intrinsically tainted with the same hubris that the AI field displayed in, say, the ’80s. This is ahistorical and deeply frustrating. These conversations tend to end when the “critical” scholar insists on the phenomenological frame–arguing either implicitly or explicitly that (post-)positivism is unethical in and of itself.

It’s worth tracing the roots of this line of reasoning. Often, variations of it are deployed rhetorically in service of the cause of bringing greater representation of marginalized people into the field of technical design. It’s somewhat ironic that, as Duguid (2012) helpfully points out, this field of “critical” technology studies, drawing variously on Suchman, Dreyfus, Agre, and ultimately Latour and Woolgar, is at root Heideggerean. Heidegger’s affiliation with Nazism is well-known, boring, and in no way a direct refutation of the progressive deployments of critical design.

But back to Agre, who goes on to discuss his conversion to phenomenology. Agre’s essay is largely an account of his rejection of the project of technical creation as a goal.

… I was unable to turn to other, nontechnical fields for inspiration. … The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial — except that it reproduced the same technical schemata as the AI literature. …

… I was also continually noticing the many small transformations that my daily life underwent as a result of noticing these things. As my intuitive understanding of the workings of everyday life evolved, I would formulate new concepts and intermediate on them, whereupon the resulting spontaneous observations would push my understanding of everyday life even further away from the concepts that I had been taught. … It is hard to convey the powerful effect that this experience had upon me; my dissertation (Agre 1988), once I finally wrote it, was motivated largely by a passion to explain to my fellow AI people how our AI concepts had cut us off from an authentic experience of our own lives. I still believe this.

Agre here is connecting the hegemony of cognitive psychology and AI at the time of his writing to his realization that “authentic experience” had been “cut off”. This is so Heideggerean. Agre is basically telling us that he independently came to Heidegger’s conclusions because of his focus on “everyday life”.

This binary between “everyday life” or “lived experience” on the one hand and the practice of AI design on the other is repeated often by critical scholars today. Critical scholars with no practical experience in contemporary data science often assume that the AI of the ’80s is the same as machine learning practice today. This is an unsupported assumption directly contradicted by the lived experience of those who work in technical fields. Unfortunately, the success of the Heideggerean binary allows those whose lived experience is “not technical” to claim that their experience has a kind of epistemic or ethical priority, due to its “authenticity”, over more technical experience.

This is devastating for the discourse around the now ubiquitous and politically vital topic of the politics of technology. If people have to choose between either doing technical work or doing critical Heideggerean reflection on that work, then by definition all technical work is uncritical and therefore lacking in the je ne sais quoi that gives it “ethical” allure. In my view, this binary is counterproductive. If “criticality” never actually meets technical practice, then it can never be a way to address problems caused by poor technical design. Rather, it can only be a form of institutional sublimation of problematic technical practices. The critical field is sustained by, parasitic on, bad technical design: if the technology were better, then the critical field would not be able to feed so successfully on the many frustrations and anxieties of those who encounter it.

Agre ultimately gives up on AI to go critical full time.

… My purpose here, though, is to describe how this experience led me into full-blown dissidence within the field of AI. … In order to find words for my newfound intuitions, I began studying several nontechnical fields. Most importantly, I sought out those people who claimed to be able to explain what is wrong with AI, including Hubert Dreyfus and Lucy Suchman. They, in turn, got me started reading Heidegger’s Being and Time (1961 [1927]) and Garfinkel’s Studies in Ethnomethodology (1984 [1967]). At first I found these texts impenetrable, not only because of their irreducible difficulty but also because I was still tacitly attempting to read everything as a specification for a technical mechanism. That was the only protocol of reading that I knew, and it was hard even to conceptualize the possibility of alternatives. (Many technical people have observed that phenomenological texts, when read as specifications for technical mechanisms, sound like mysticism. This is because Western mysticism, since the great spiritual forgetting of the later Renaissance, is precisely a variety of mechanism that posits impossible mechanisms.) My first intellectual breakthrough came when, for reasons I do not recall, it finally occurred to me to stop translating these strange disciplinary languages into technical schemata, and instead simply to learn them on their own terms.

What’s quite frustrating for somebody approaching this problem from a slightly broader liberal arts background than Agre’s is that he is writing about encounters with only one of several different phenomenological traditions–the Heideggerean one–that have made it so successfully into American academic HCI.

This is where Don Ihde’s work is great: he is explicitly engaged with a much wider swathe of the Continental canon. In doing so, he goes to the root of phenomenology, Husserl, and, I believe most significantly, Merleau-Ponty.

Merleau-Ponty’s Phenomenology of Perception is the kind of serious, monumental work that nobody in the U.S. bothers to read because it is difficult for them to think about. When humanities education is a form of consumerism, it’s much more fun to read, I don’t know, Haraway. But as a theoretical work that combines the phenomenological tradition with empirical psychology in a way that is absolutely and always about embodiment–all the particularities of being a body and what that means for our experiences of the world–you can’t beat him.

Because Merleau-Ponty is engaged mainly with perception and praxis, rather than hermeneutics (the preoccupation of Heidegger), he is able to come up with a much more muscular account of lived experience with machines without having to dress it up in terminology about ‘cyborgs’. This excerpt, from Ihde, is illustrative:

The blind man’s tool has ceased to be an object for him, and is no longer perceived for itself; its point has become an area of sensitivity, extending the scope and active radius of touch, and providing a parallel to sight. In the exploration of things, the length of the stick does not enter expressly as a middle term: The blind man is rather aware of it through the position of objects than of the position of objects through it.

In my view, it’s Merleau-Ponty’s influence that most sets up Ihde to present a productive view of instrumental realism in science, based on the role of instruments in the perception and praxis of science. This is what we should be building on when we discuss the “philosophy of data science” and other software-driven research.

Dreyfus’s (1976) famous critique of AI drew a lot on Merleau-Ponty. Dreyfus is not brought up very much in the critical literature any more because (a) many of his critiques were internalized by the AI community and led to new developments that don’t fall prey to the same criticisms, (b) people are building all kinds of embodied robots now, and (c) the “Strong AI” program of building AI that closely resembles a human mind is not what has been driving AI recently: industrial applications that scale far beyond the human mind are.

So it may be that Merleau-Ponty is not used as a phenomenological basis for studying AI and technology now because it is successfully about lived experience while not implying that the literature of some more purely hermeneutic field of inquiry is separately able to underwrite the risks of technical practice. If instruments are an extension of the body, then the one who uses those instruments is responsible for them. That would imply, for example, that Zuckerberg is not an uncritical technologist who has built an autonomous system that is poorly designed because of the blind spots of engineering practice, but rather that he is the responsible actor leading the assemblage that is Facebook as an extension of himself.

Meanwhile, technical practice (I repeat myself) has changed. Agre laments that “[f]ormal reason has an unforgiving binary quality — one gap in the logic and the whole thing collapses — but this phenomenological language was more a matter of degree”. Indeed, when AI was developing along the lines of “formal reason” in the sense of axiomatic logic, this constraint would be frustrating. But in the decades since Agre was working, AI practice has become much more a “matter of degree”: it is highly statistical and probabilistic, depending on very broadly conceived spaces of representation that tune themselves based on many minute data points. Given the differences between “good old fashioned AI” based on logical representation and contemporary machine learning, it’s just bewildering when people raise these old critiques as if they were still meaningful and relevant to today’s practice. And yet the themes resurface again and again in the pitched battles of interdisciplinary warfare. The Heideggereans continue to renounce mathematics, formalism, technology, etc. as a practice in itself in favor of vague humanism. There’s a new articulation of this agenda every year, under different political guises.
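
A toy contrast, mine and purely illustrative, of the two styles: a rule-based classifier in the “formal reason” mold, where one uncovered case breaks the chain, next to a statistical one that answers by degree.

```python
# "Formal reason" vs. "matter of degree", in miniature.
import numpy as np
from sklearn.linear_model import LogisticRegression

# GOFAI-style: brittle symbolic rules; a case outside the rules is a failure.
def rule_based_is_bird(has_feathers: bool, flies: bool) -> bool:
    if has_feathers and flies:
        return True
    if not has_feathers:
        return False
    raise ValueError("no rule covers this case")  # e.g., a penguin

# Statistical style: the same question answered as a probability.
X = np.array([[1, 1], [1, 1], [1, 0], [0, 0], [0, 1]])  # [feathers, flies]
y = np.array([1, 1, 1, 0, 0])                            # 1 = bird
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[1, 0]])[0, 1])  # a degree of belief, not a crash
```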

Telling is how Agre, who began the journey trying to understand how to make a contribution to a technical field, winds up convincing himself that there are a lot of great academic papers to be written with no technical originality or relevance.

When I tried to explain these intuitions to other AI people, though, I quickly discovered that it is useless to speak nontechnical languages to people who are trying to translate these languages into specifications for technical mechanisms. This problem puzzled me for years, and I surely caused much bad will as I tried to force Heideggerian philosophy down the throats of people who did not want to hear it. Their stance was: if your alternative is so good then you will use it to write programs that solve problems better than anybody else’s, and then everybody will believe you. Even though I believe that building things is an important way of learning about the world, nonetheless I knew that this stance was wrong, even if I did not understand how.

I now believe that it is wrong for several reasons. One reason is simply that AI, like any other field, ought to have a space for critical reflection on its methods and concepts. Critical analysis of others’ work, if done responsibly, provides the field with a way to deepen its means of evaluating its research. It also legitimizes moral and ethical discussion and encourages connections with methods and concepts from other fields. Even if the value of critical reflection is proven only in its contribution to improved technical systems, many valuable criticisms will go unpublished if all research papers are required to present new working systems as their final result.

This point is echoed almost ten years later by another importer of ethnomethodological methods into technical academia, Dourish (2006). Today, there are academic footholds for critical work about technology, and some people write a lot of papers about it. More power to them, I guess. There is now a rarefied field of humanities scholarship in this tradition.

But when social relations truly are mediated by technology in myriad ways, it is perhaps not wrong to pursue lines of work that have more practical relevance. Doing this requires, in my view, a commitment to mathematical rigor and getting one’s hands “dirty” with the technology itself, when appropriate. I’m quite glad that there are venues to pursue these lines now. I am somewhat disappointed and annoyed that I have to share these spaces with Heideggereans, whom I just don’t see as adding much beyond the recycling of outdated tropes.

I’d be very excited to read more works that engage with Merleau-Ponty and work that builds on him.

References

Agre, P. E. (1997). Lessons learned in trying to reform AI. In Social science, technical systems, and cooperative work: Beyond the Great Divide (1997), 131.

Dourish, P. (2006, April). Implications for design. In Proceedings of the SIGCHI conference on Human Factors in computing systems (pp. 541-550).

Duguid, P. (2012). On Rereading Suchman and Situated Action. Le Libellio d’AEGIS, 8(2), 3-11.

Dreyfus, H. (1976). What computers can’t do.

Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology (Vol. 626). Indiana University Press.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge University Press.

Winograd, T. & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Intellect Books.

Is there hypertext law? Is there Python law?

I have been impressed with Hildebrandt’s analysis of the way particular technologies provide the grounds for different forms of institutions. Looking into the work of Don Ihde, who I gather is a pivotal thinker in this line of reasoning, I find the ‘postphenomenological’ and ‘instrumental realist’ position very compelling. Lawrence Diver’s work on digisprudence, which follows in this vein, looks generative.

In my encounters with this work, I have also perceived there to be gaps and discrepancies in the texture of the argument. There is something uncanny about reading material that is, perceptually, almost correct. Either I am in error, or it is.

One key difference seems to be about the attitude towards mathematical or computational formalism. This is, I sense, truly an attitude, in the sense of an emotional difference. Scholars in this area will speak, in personal communication, of being “wary” or “afraid”. It’s an embodied reaction which orients their rhetoric. It is shared with many other specifically legal scholars. In the gestalt of these arguments, the legal scholar will refer to philosophies of science and/or technology to justify a distance between lived reality, the lifeworld, and artifice.

Taking a somewhat different perspective, there are other ways to consider the relationship between formalism, science, and fact, even when taking seriously the instrumental realist position. It is noteworthy, I believe, that this field of scholarship is so adamantly Latourian, and that Latour has succeeded in anathematizing Bourdieu. I now see more clearly how Science of Science and Reflexivity–at once a refutation of Latour and a lament of how the capture of institutional power (such as nation-state-provided research funding) distorts the autonomous and legitimizing processes of science–is really all one argument. Latour, despite the wrongness of so much of his early work, which is now so widely cited, became a powerful figure. The better argument will only win in time.

Bourdieu, it should be noted, is an instrumental realist about science, though he may not have been aware of Ihde and that line of discourse. He also saw the connection between formalism and instrumentation, which seems to elude the postphenomenologist legal scholars. Formalism and instrumentation are both forms of practical “automation” which, if we take the instrumental realists seriously (and we should), wind up enabling the body, understood as perception-praxis, to see and know in different ways. Bourdieu, who has obviously read Foucault but improves on him, accepts the perception-praxis view of the body and socializes it through the concept of the habitus, which is key to his analysis of the sociology of science.

But I digress. What I have been working towards is the framing of the questions in the title. To recap, Hildebrandt, in my understanding, makes a compelling case for how the printing press, as a technology, has had specific affordances that have enabled the Rule of Law that is characteristic of constitutional democracy. This Rule of Law, or some descendant of it, remains dominant in Europe, and perhaps this is why, via the Brussels Effect, the EU now stands as the protector of individuals from the encroaching power of machine-learning-powered technologies, in the form of Information and Communication Infrastructure (ICI).

This is a fine narrative, though perhaps rather specifically motivated by a small number of high profile regulatory acts. I will not suggest that the narrative overplays anybody’s hand; it is useful as a schematic.

However, I am not sure the analysis is so solid. There seem to be some missing steps in the historical analysis. Which brings me to my first question: what about hypertext? Hypertext is neither the text of the printing press, nor is it a form of machine learning. It is instrumentally dependent on scientific and technological formalism: the HyperText Markup Language (HTML) and the HyperText Transfer Protocol (HTTP) are both formal standards, built instrumentally on a foundation of computation and networking theory and technology. And as a matter of contemporary perception and praxis, it is probably the primary way in which people engage in analysis of law and communication about the law today.
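
As a small illustration of what “formal standard” means for praxis (my sketch; the document fragment and URLs are invented), hypertext is machine-readable by construction, so a few lines of Python can traverse the web of legal cross-references that a reader of print would follow by hand:

```python
# Hypertext as formalism: links are parseable structure, not just prose.
from html.parser import HTMLParser

doc = """
<p>See <a href="https://example.org/statute/s12">Section 12</a> and
<a href="https://example.org/case/doe-v-roe">Doe v. Roe</a>.</p>
"""

class LinkExtractor(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            print(dict(attrs).get("href"))

LinkExtractor().feed(doc)
# prints:
# https://example.org/statute/s12
# https://example.org/case/doe-v-roe
```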

So, what about it? Doesn’t this example show a contradiction at the heart of this instrumental realist legal scholarship?

The follow-up question is about another class of digital “languages”: software source code. Python, for example. These, even more than hypertext, are formalisms, with semantics guaranteed by an interpreter or compiler. But these semantics are in a sense legislated via the Python Enhancement Proposal (PEP) process, and of course any particular Python application or software practice may be designed and mandated through a wide array of institutional mechanisms before being deployed to users.
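
One concrete instance of such legislated semantics (my example): the assignment expression below means anything at all only because PEP 572 was proposed, debated, and accepted for Python 3.8.

```python
# The ":=" operator was added to Python 3.8 by PEP 572. On an older
# interpreter this line is a SyntaxError: the "legislation" defining
# its semantics had not yet passed.
data = [1, 2, 3, 4, 5]
if (n := len(data)) > 3:
    print(f"long list: {n} elements")
```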

I would look forward to work on these subjects coming from Hildebrandt’s CUHOBICOL research group, but for the fact that these technologies (which may belie the ideology motivating the project!) are excluded by the very system of categories the project invokes to classify different kinds of regulatory systems. According to the project web site (written, like all web sites, in hypertext), there are three (only three?) kinds of normativity: text-driven normativity, based in the printing press; data-driven normativity, the normativity of feedback, once based in cybernetic engineering and now based in machine learning; and code-driven normativity. The last category is defined in terms of code’s immutability, which is rather alien to anybody who writes software code and has to deal with how it changes all the time. Moreover, the project’s aim is to explore code-driven normativity through blockchain applications. I understand that gesturing at blockchain technology is a nice way to spice up a funding proposal. But by seeing normativity in these terms, many intermediate technologies, and therefore a broad technical design space of normative technology, are excluded from analysis.

On descent-based discrimination (a reply to Hanna et al. 2020)

In what is likely to be a precedent-setting case, California regulators filed a suit in the federal court on June 30 against Cisco Systems Inc, alleging that the company failed to prevent discrimination, harassment and retaliation against a Dalit engineer, anonymised as “John Doe” in the filing.

The Cisco case bears the burden of making anti-Dalit prejudice legible to American civil rights law as an extreme form of social disability attached to those formerly classified as “Untouchable.” Herein lies its key legal significance. The suit implicitly compares two systems of descent-based discrimination – caste and race – and translates between them to find points of convergence or family resemblance.

A. Rao

There is not much I can add to this article about caste-based discrimination in the U.S. In the lawsuit, a team of high-caste South Asians in California is alleged to have discriminated against a Dalit engineer coworker. The work of the lawsuit is to make caste-based discrimination legible to American civil rights law. It, correctly in my view, draws the connection to race.

This illustrative example prompts me to respond to Hanna et al.’s 2020 “Towards a critical race methodology in algorithmic fairness.” This paper by a Google team included a serious, thoughtful consideration of the argument I put forward with my co-author Bruce Haynes in “Racial categories in machine learning”. I like the Hanna et al. paper, think it makes interesting and valid points about the multidimensionality of race, and am grateful for their attention to my work.

I also disagree with some of their characterization of our argument and one of the positions they take. For some time I’ve intended to write a response. Now is a fine time.

First, a quibble: Hanna et al. describe Bruce D. Haynes as a “critical race scholar”, and while he may have changed his mind since our writing, at the time he was adamant (in conversation) that he is not a critical race scholar, but that “critical race studies” refers to a specific intellectual project of racial critique that just happens to be really trendy on Twitter. There are lots and lots of other ways to study race critically that are not “critical race studies”. I believe this point was important to Bruce as a matter of scholarly identity. I also feel that it’s an important point because, frankly, I don’t find a lot of “critical race studies” scholarship persuasive, and I probably wouldn’t have collaborated as happily with somebody of that persuasion.

So the fact that Hanna et al. explicitly position their analysis in “critical race” methods is a signpost that they are actually trying to accomplish a much more specifically disciplinarily informed project than we were. Sadly, they did not get into the question of how “critical race methodology” differs from other methodologies one might use to study race. That’s too bad, as it supports what I feel is a stifling hegemony that particular discourse holds over discussions of race and technology.

The Google team is supportive of the most important contribution of our paper–that racial categories are problematic and that this needs to be addressed in the fairness in AI literature. They then go on to argue against our proposed solution of “using an unsupervised machine learning method to create race-like categories which aim to address ‘historical racial segregation without reproducing the political construction of racial categories’” (their rendering). I will defend our solution here.

Their first claim:

First, it would be a grave error to supplant the existing categories of race with race-like categories inferred by unsupervised learning methods. Despite the risk of reifying the socially constructed idea called race, race does exist in the world, as a way of mental sorting, as a discourse which is adopted, as a social thing which has both structural and ideological components. In other words, although race is socially constructed, race still has power. To supplant race with race-like categories for the purposes of measurement sidesteps the problem.

This paragraph does feel very “critical race studies” to me, in that it makes totalizing claims about the work race does in society in a way that precludes the possibility of any concrete or focused intervention. I think they misunderstand our proposal in the following ways:

  • We are not proposing that, at a societal and institutional level, we institute a new, stable system of categories derived from patterns of segregation. We are proposing that, ideally, temporary quasi-racial categories be derived dynamically from data about segregation in a way that destabilizes the social mechanisms that reproduce racial hierarchy, reducing the power of those categories. (A toy sketch of this kind of mechanism follows this list.)
  • This is proposed as an intervention to be adopted by specific technical systems, not at the level of hegemonic political discourse. It is a way of formulating an anti-racist racial project by undermining the way categories are maintained.
  • Indeed, the idea is to sidestep the problem, in the sense that it is an elegant way to reduce the harm that the problem does. Sidestepping is, imagine it, a way of avoiding a danger. In this case, that danger is the reification of race in large scale digital platforms (for example).
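
For concreteness, here is a minimal sketch of the kind of mechanism we had in mind (my illustration, not the paper’s implementation, and the data is synthetic): unsupervised clustering recovers temporary, race-like categories from the community structure of a segregated social graph alone.

```python
# Sketch: derive quasi-racial categories from segregation patterns in a
# synthetic social network. Two communities are densely tied internally
# and sparsely tied to each other; spectral clustering recovers them
# from graph structure alone, with no categories given in advance.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n = 40  # nodes 0-19 in one community, 20-39 in the other

A = np.zeros((n, n))  # symmetric adjacency matrix
for i in range(n):
    for j in range(i + 1, n):
        same_group = (i < 20) == (j < 20)
        p = 0.5 if same_group else 0.02  # segregated tie probabilities
        A[i, j] = A[j, i] = float(rng.random() < p)

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(A)
print(labels)  # two (mostly) uniform blocks: the derived categories
```

Because the categories are re-derived from current data each time, they can expire as the underlying segregation changes; that is the destabilizing property the proposal relies on.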

Next, they argue:

Second, supplanting race with race-like categories depends highly on context, namely how race operates within particular systems of inequality and domination. Benthall and Haynes restrict their analysis to that of spatial segregation, which is to be sure, an important and active research area and subject of significant policy discussion (e.g. [76, 99]). However, that metric may appear illegible to analyses pertaining to other racialized institutions, such as the criminal justice system, education, or employment (although one can readily see their connections and interdependencies). The way that race matters or pertains to particular types of structural inequality depends on that context and requires its own modes of operationalization

Here, the Google team takes the anthropological turn and, like many before them, suggests that a general technical proposal is insufficient because it is not sufficiently contextualized. Besides echoing the general problem of the ineffectualness of anthropological methods in technology ethics, they also mischaracterize our paper by saying we restrict our analysis to spatial segregation. This is not true: in the paper we generalize our analysis to social segregation, as on a social network graph. Naturally, we would be (a) interested in and open to other systems of identifying race as a feature of social structure, and (b) inclined to tailor the data over which any operationalization technique was applied, where appropriate, to technical and functional context. At the same time, we are on quite solid ground in saying that race is structural and systemic, and in a sense defined at a holistic societal level, as much as it has ramifications in, and is impacted by, the micro and contextual level as well. As we approach the problem from a structural sociological perspective, we can imagine a structural technical solution. This is an advantage of the method over a more anthropological one.

Third:

At the same time we focus on the ontological aspects of race (what is race, how is it constituted and imagined in the world), it is necessary to pay attention to what we do with race and measures which may be interpreted as race. The creation of metrics and indicators which are race-like will still be interpreted as race.

This is a strange criticism given that one of the potential problems with our paper is that the quasi-racial categories we propose are not interpretable. The authors seem to think that our solution involves the institution of new quasi-racial categories at the level of representation or discourse. That’s not what we’ve proposed. We’ve proposed a design for a machine learning system which, we’d hope, would be understood well enough by its engineers to work as an intervention. Indeed, the correlation of the quasi-racial categories with socially recognized racial ones is important if they are to ground fairness interventions; the purpose of our proposed solution is narrowly to allow for these interventions without the reification of the categories.

Enough defense. There is a point the Google team insists on which strikes me as somewhat odd, and which to me signals a further weakness of their hyper-contextualized method: its inability to generalize beyond the hermeneutic cycles of “critical race theory”.

Hanna et al. list several (seven) different “dimensions of race” based on different ways race can be ascribed, inferred, or expressed. There is, here, the anthropological concern with the individual body and its multifaceted presentations in the complex social field. But they explicitly reject one of the most fundamental ways in which race operates at a transpersonal and structural level, which is through families and genealogy. This is well-intentioned but ultimately misguided.

Note that we have excluded “racial ancestry” from this table. Geneticists, biomedical researchers, and sociologists of science have criticized the use of “race” to describe genetic ancestry within biomedical research [40, 49, 84, 122], while others have criticized the use of direct-to-consumer genetic testing and its implications for racial and ethnic identification [15, 91, 113]

In our paper, we take pains to point out responsibly how many aspects of race, such as phenotype, nationality (through citizenship rules), and class signifiers (through inheritance), are connected with ancestry. We, of course, do not mean to equate ancestry with race. Nor, especially, are we saying that there are genetic racialized qualities besides perhaps those associated with phenotype. We are also not saying that direct-to-consumer genetic test data is what institutions should be basing their inference of quasi-racial categories on. Nothing like that.

However, speaking for myself, I believe that an important aspect of how race functions at a social structural level is how it implicates relations of ancestry. A. Rao perhaps puts the point better: race is a system of inherited privilege, and racial discrimination is more often than not discrimination based on descent.

Understanding this about race allows us to see what race has in common with other systems of categorical inequality, such as the caste system. And here was a large part of the point of offering an algorithmic solution: to suggest a system for identifying inequality that transcends the logic of what is currently recognized within the discourse of “critical race theory” and anticipates forms of inequality and discrimination that have not yet been so politically recognized. This will become increasingly an issue when a pluralistic society (or user base of an on-line platform) interacts with populations whose categorical inequalities have different histories and origins besides the U.S. racial system. Though our paper used African-Americans as a referent group, the scope of our proposal was intentionally much broader.

References

Benthall, S., & Haynes, B. D. (2019, January). Racial categories in machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 289-298).

Hanna, A., Denton, E., Smart, A., & Smith-Loud, J. (2020, January). Towards a critical race methodology in algorithmic fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 501-512).

Notes about “Data Science and the Decline of Liberal Law and Ethics”

Jake Goldenfein and I have put up on SSRN our paper, “Data Science and the Decline of Liberal Law and Ethics”. I’ve mentioned it on this blog before as something I’m excited about. It’s been several months since we finalized it, and I wanted to quickly jot down some notes about it, based on considerations going into it and since then.

The paper was the result of a long and engaged collaboration with Jake which started from a somewhat different place. We considered the question, “What is sociopolitical emancipation in the paradigm of control?” That was a mouthful, but it captured what we were going for:

  • Like a lot of people today, we are interested in the political project of freedom. Not just freedom in narrow, libertarian senses that have proven to be self-defeating, but in broader senses of removing social barriers and systems of oppression. We were ambivalent about the form that would take, but figured it was a positive project almost anybody would be on board with. We called this project emancipation.
  • Unlike a certain prominent brand of critique, we did not begin from an anthropological rejection of the realism of foundational mathematical theory from STEM and its application to human behavior. In this paper, we did not make the common move of suggesting that the source of our ethical problems is one that can be solved by insisting on the terminology or methodological assumptions of some other discipline. Rather, we took advances in, e.g., AI as real scientific accomplishments that are telling us how the world works. We called this scientific view of the world the paradigm of control, due to its roots in cybernetics.

I believe our work is making a significant contribution to the “ethics of data science” debate because it is quite rare to encounter work that is engaged with both projects. It’s common to see STEM work with no serious moral commitments or valence. And it’s common to see the delegation of what we would call emancipatory work to anthropological and humanistic disciplines: the STS folks, the media studies people, even critical X (race, gender, etc.) studies. I’ve discussed the limitations of this approach, however well-intentioned, elsewhere. Often, these disciplines argue that the “unethical” aspect of STEM is a product of its methods, discourses, etc.: to analyze things in terms of their technical and economic properties is to lose the essence of ethics, which is aligned with anthropological methods grounded in respectful, phenomenological engagement with their subjects.

This division of labor between STEM and anthropology has, in my view (I won’t speak for Jake), made it impossible to discuss ethical problems that fit uneasily in either field. We tried to get at these. The ethical problem is instrumentality run amok because of the runaway economic incentives of private firms combined with their expanded cognitive powers as firms, à la Herbert Simon.

This is not a terribly original point and we hope it is not, ultimately, a fringe political position either. If Martin Wolf can write for the Financial Times that there is something threatening to democracy about “the shift towards the maximisation of shareholder value as the sole goal of companies and the associated tendency to reward management by reference to the price of stocks,” so can we, and without fear that we will be targeted in the next red scare.

So what we are trying to add is this: there is a cognitivist explanation for why firms can become so enormously powerful relative to individual “natural persons”, one that is entirely consistent with the STEM foundations that have become dominant in places like, most notably, UC Berkeley (for example) as “data science”. And, we want to point out, the consequences of that knowledge, which we take to be scientific, run counter to the liberal paradigm of law and ethics. This paradigm, grounded in individual autonomy and privacy, is largely the paradigm animating anthropological ethics! So we are, a bit obliquely, explaining why the data science ethics discourse has gelled in the ways that it has.

We are not satisfied with the current state of ‘data science ethics’ because, to the extent that it clings to liberalism, we fear that it misses and even obscures the point, which can best be understood in a different paradigm.

We left as unfinished the hard work of figuring out what the new, alternative ethical paradigm that took cognitivism, statistics, and so on seriously would look like. There are many reasons beyond the conference publication page limit why we were unable to complete the project. The first of these is that, as I’ve been saying, it’s terribly hard to convince anybody that this is a project worth working on in the first place. Why? My view of this may be too cynical, but my explanations are that either (a) this is an interdisciplinary third rail because it upsets the balance of power between different academic departments, or (b) this is an ideological third rail because it successfully identifies a contradiction in the current sociotechnical order in a way that no individual is incentivized to recognize, because that order incentivizes individuals to disperse criticism of its core institutional logic of corporate agency, or (c) it is so hard for any individual to conceive of corporate cognition, because of how it exceeds the capacity of human understanding, that speaking in this way sounds utterly speculative to a lot of people. The problem is that it requires attributing cognitive and adaptive powers to social forms, and a successful science of social forms is, at best, in the somewhat gnostic domain of complex systems research.

Complex systems researchers are rarely engaged in technology policy, but I think that’s the frontier.

References

Benthall, Sebastian and Goldenfein, Jake, Data Science and the Decline of Liberal Law and Ethics (June 22, 2020). Ethics of Data Science Conference – Sydney 2020 (forthcoming). Available at SSRN: https://ssrn.com/abstract=

from morality to economics: some stuff about Marx for Tapan Parikh

I work on a toolkit for heterogeneous agent structural modeling in economics, Econ-ARK. In this capacity, I work with the project’s creators, the economists Chris Carroll and Matt White. I think this project has a lot of promise and am each day more excited about its potential.

I am also often in academic circles where it’s considered normal to just insult the entire project of economics out of hand. I hear some empty, shallow snarking about economists about once every two weeks. I find this kind of professional politics boring and distracting. It’s also often ignorant. I wanted to connect a few dots to try to remedy the situation, while also noting some substantive points that I think fill out some historical context.

Tracking back to this discussion of morality in the Western philosophical tradition and what challenges it today, the focal character there was Immanuel Kant, who for the sake of argument espoused a model of morality based on universal properties of a moral agent.

Tapan Parikh has argued (in personal communications) that I am “a dumb ass” for using Kant in this way, because Kant is on the record as having written some very racist things. I feel I have to address this point. No, I’m not going to stop working with ideas from the Western philosophical canon just because so many of its authors were racist. I’m not a cancel culturist in any sense. I agree with Dave Chappelle on the subject of Louis C.K., for example.

However, it is actually essential to know whether or not racism is a substantive, logical problem with Kant’s philosophy. I’ll defer to others on this point. A quick Googling of the topic seems to indicate one of two things. Either Kant was inconsistent, remaining a racist while espousing universalist morality, which tells us more about Kant the person than it does about universalist morality–the universalist morality transcending Kant’s human failings in this case (Allais, 2016). Or Kant actually became less racist during the period in which he was most philosophically productive, which was late in his life (Kleingeld, 2007). I like this latter story better: Kant, being an 18th-century German, was racist as hell; then he thought about it a bit harder, developed a universalist moral system, and became, as a consequence, less racist. That seems to be a positive endorsement of what we now call Kantian morality, which is a product of that later period and not the earlier virulently racist period.

Having hopefully settled that question, or at least smoothed it over sufficiently to move on, we can build in more context. Everybody knows this sequence:

Kant -> Hegel -> Marx

Kant starts a transcendental dialectic as a universalist moral project. Hegel historicizes that dialectic, in the process taking into serious consideration the Haitian rebellion, which inspires his account of the Master/Slave dialectic–which is quite literally about slavery and how it is undone by its internal contradictions. The problem, to make a long story short, is that the Master winds up being psychologically dependent on the Slave, and this gives the Slave power over the Master. The Slave’s rebellion is successful, as has happened in history many times. This line of thinking results in, if my notes are right (they might not be), Hegel’s endorsement of something that looks vaguely like a Republic as the end of history.

Hegel dies in 1831, and Marx picks up this thread, but famously thinks the historical dialectic is material, not ideal. The Master/Slave dialectic is transposed onto the relationship between Capital and the Proletariat. Capital exploits the Proletariat, but needs the Proletariat. This is what enables the Proletariat to rebel. Once the Proletariat rebels, says Marx, everybody will be on the same level and there will be world peace. I.e., communism is the material manifestation of a universalist morality. This is what Marx inherits from Kant.

But wait, you say. Kant and Hegel were both German Idealists. Where did Marx get this materialist innovation? It was probably his own genius head, you say.

Wrong! Because there’s a thread missing here.

Recall that it was David Hume, a Scotsman, whose provocative skeptical ideas roused Kant from his “dogmatic slumber”. (Historical question: Was it Hume who made Kant “woke” in his old age?) Hume was in the line of Anglophone empiricism, which was getting very bourgey after the Whigs and Locke and all that. Buddies with Hume is Adam Smith, who was, let’s not forget, a moral philosopher.

So while Kant is getting very transcendental, Smith is realizing that in order to do any serious moral work you have to start looking at material reality, and so he starts economics in Britain.

This next part I didn’t really realize the significance of until digging into it. Smith dies in 1790, just around when Kant is completing the moral project he’s famous for. At that time, the next major figure is 18, coming of age. It’s David Ricardo: a Sephardic Jew turned Unitarian, a Whig, a businessman who makes a fortune speculating on the Battle of Waterloo, who winds up buying a seat in Parliament (because you could do that then), and who does a lot of the best foundational work on economics, including developing the labor theory of value. He was also, incidentally, an abolitionist.

Which means that to complete one’s understanding of Marx, you have to also be thinking:

Hume -> Smith -> Ricardo -> Marx

In other words, Marx is the unlikely marriage of German Idealism, with its continued commitment to universalist ethics, and British empiricism, which is–and I keep having to bring this up–weak on ethics. Empiricism is a bad way of building an ethical theory, and it’s why the U.S. has bad privacy laws. But it’s a good way to build up an economic materialist view of history. Hence all of Marx’s time looking at factories.

It’s worth noting that Ricardo was also the one who came up with the idea of Land Value Taxation (LVT), which later Henry George popularized as the Single Tax in the late 19th/early 20th century. So Ricardo really is the pivotal figure here in a lot of ways.

In future posts, I hope to be working out more of the background of economics and its connection to moral philosophy. In addition to trying to make the connections to my work on Econ-ARK, there’s also resonances coming up in the policy space. For example, the Law and Political Economy community has been rather explicitly trying to bring back “political economy”–in the sense of Smith, Ricardo, and Marx–into legal scholarship, with a particular aim at regulating the Internet. These threads are braiding together.

References

Allais, L. (2016). Kant’s racism. Philosophical Papers, 45(1-2), 1-36.

Kleingeld, P. (2007). Kant’s second thoughts on race. The Philosophical Quarterly, 57(229), 573-592.

A philosophical puzzle: morality with complex rationality

There’s a recurring philosophical puzzle that keeps coming up as one drills into the foundational issues at the heart of technology policy. The most complete articulation of it that I know of is in a draft I’ve written with Jake Goldenfein, whose publication was COVID-delayed. But here is an abbreviated version of the philosophical problem, distilled perhaps from the tech policy context.

For some reason it all comes back to Kant. The categorical imperative has two versions that are supposed to imply each other:

  • Follow rules that would be agreed on as universal by rational beings.
  • Treat others as ends and never merely as means.

This is elegant and worked quite well while the definitions of ‘rationality’ in play were simple enough that Man could stand at the top of the hierarchy.

Kant is outdated now, of course, but we can see the influence of this theory in Rawls’s account of liberal ethics (the ‘veil of ignorance’ being a proxy for the reasoning being who has transcended their empirical body), in Habermas’s account of democracy (communicative rationality involving the setting aside of individual interests), and so on. Social contract theories are more or less along these lines. This paradigm is still more or less the gold standard.

There are a couple of serious challenges to this moral paradigm. Both relate to how the original model of rationality it is based on is perhaps naive, or so rarefied as to be unrealistic. What happens if you deny that people are rational in any disinterested sense, or allow for different levels of rationality? It all breaks down.

On the one hand, there’s various forms of egoism. Sloterdijk argues that Nietzsche stood out partly because he argued for an ethics of self-advancement, which rejected deontological duty. Scandalous. The contemporary equivalent is the reputation of Ayn Rand and those inspired by her. The general idea here is the rejection of social contract. This is frustrating to those who see the social contract as serious and valuable. A key feature of this view is that reason is not, as it is for Kant, disinterested. Rather, it is self-interested. It’s instrumental reason with attendant Humean passions to steer it. The passions need not be too intellectually refined. Romanticism, blah blah.

On the other hand, the 20th century discovers scientifically the idea of bounded rationality. Herbert Simon is the pivotal figure here. Individuals, being bounded, form organizations to transcend their limits. Simon is the grand theorist of managerialism. As far as I know, Simon’s theories are amoral, strictly about the execution of instrumental reason.

Nevertheless, Simon poses a challenge to the universalist paradigm because he reveals the inadequacy of individual humans to self-determine anything of significance. It’s humbling; it also threatens the anthropocentrism that provided the grounds for humanity’s mutual self-respect.

So where does one go from here?

It’s a tough question. Some spitballing:

  • One option is to relocate the philosophical subject from the armchair (Kant), through the public sphere (Habermas), and into a new kind of institution better equipped to support their cogitation about norms. A public sphere equipped with Bloomberg terminals? But then who provides the terminals? And what about actually existing disparities of access?
    • One implication of this option, following Habermas, is that the communications within it, which would have to include data collection and the application of machine learning, would be disciplined in ways that would prevent defections.
    • Another implication, which is the most difficult one, is that the institution that supports this kind of reasoning would have to acknowledge different roles. These roles would constitute each other relationally–there would need to be a division of labor. But those roles would need to each be able to legitimize their participation on the whole and trust the overall process. This seems most difficult to theorize let alone execute.
  • A different option, sort of the unfinished Nietzschean project, is to develop the individual’s choice to defect into something more magnanimous. Simone de Beauvoir’s widely underrated Ethics of Ambiguity is perhaps the best accomplishment along these lines. The individual, once they overcome their own solipsism and consider their true self-interests at an existential level, comes to understand how the success of their projects depends on society, because society will outlive them. In a way, this point echoes Simon’s, in that it begins from an acknowledgment of human finitude. It reasons from there to a theory of how finite human projects can become infinite (offering a kind of immortality to the one who initiates them) by being sufficiently prosocial.

Either of these approaches might be superior to “liberalism”, which arguably is stuck in the first paradigm (though I suppose there are many liberal theorists who would defend their position). As a thought experiment, I wonder what public policies motivated by either of these positions would look like.

Considering the Endless Frontier Act

As a scientist/research engineer, I am pretty excited about the Endless Frontier Act. Nothing would make my life easier than a big new pile of government money for basic research and technological prototypes awarded to people with PhDs. I’m absolutely all for it and applaud the bipartisan coalition moving it forward.

I am somewhat concerned, however, that the motivation for it is the U.S.’s fear of technological inferiority with respect to China. I’ll take the statement of Dr. Reif, President of MIT, at face value, which is probably foolish given the political acumen and moral flexibility of academic administrators. But look at this:

The COVID-19 pandemic is intensifying U.S. concerns about China’s technological strength. Unfortunately, much of the resulting policy debate has centered on ways to limit China’s capacities — when what we need most is a systematic approach to strengthening our own.

Very straightforward. This is what it’s about. Ok. I get it. You have to sell it to the Trump administration. It’s a slam dunk. But then why write this:

The aim of the new directorate is to support fundamental scientific research — with specific goals in mind. This is not about solving incremental technical problems. As one example, in artificial intelligence, the focus would not be on further refining current algorithms, but rather on developing profoundly new approaches that would enable machines to “learn” using much smaller data sets — a fundamental advance that would eliminate the need to access immense data sets, an area where China holds an immense advantage. Success in this work would have a double benefit: seeding economic benefits for the U.S. while reducing the pressure to weaken privacy and civil liberties in pursuit of more “training” data.

This sounds totally dubious to me. There are well-known mathematical theorems addressing why learning without data is impossible. The troublesome fact being nodded to is that, because of the political economy of China, it is possible to collect “immense data sets”–specifically about people–without civil liberties concerns getting in the way. This presumes that the civil liberties problem with AI is the collection of data from data subjects, not the use of machine learning on those data subjects. But even if you could magically learn about data subjects without collecting data from them, you wouldn’t bypass the civil liberties concerns. Rather, you would have a nightmare world in which, even without data collection, one could act with godly foresight in one’s interventions on the polity. This is a weird fantasy, and I’m pretty sure the only reason it’s written this way is to sell the idea superficially to uncritical readers trying to reconcile the various narratives around the U.S., technology, and foreign policy, which are incoherent.
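To gesture at what those theorems say–my own illustration here, not anything from the bill or the op-ed–consider the textbook sample-complexity bound from PAC learning: even for a finite hypothesis class, the number of labeled examples required grows with the desired accuracy and confidence. A minimal sketch in Python:

```python
from math import ceil, log

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Sufficient sample size in the realizable PAC model:
    m >= (ln|H| + ln(1/delta)) / epsilon examples guarantee error
    at most epsilon with probability at least 1 - delta."""
    return ceil((log(hypothesis_count) + log(1.0 / delta)) / epsilon)

# Even a modest finite class of 2**20 hypotheses, learned to 1% error
# with 99% confidence, needs nearly two thousand labeled examples.
print(pac_sample_bound(2**20, epsilon=0.01, delta=0.01))  # -> 1847
```

“Learning with much smaller data sets”, then, can only mean building in stronger assumptions–better priors, narrower hypothesis classes–not learning from nothing.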

What it’s really all about, of course, is neoliberalism. Dr. Reif is not shy about this:

The bill would also encourage universities to experiment with new ways to help accelerate the process of bringing innovative ideas to the marketplace, either via established companies or startups. At MIT we started The Engine, an independent entity that provides private-sector funding, work space and technical assistance to start-ups that are developing technologies with enormous potential but that require more extensive technical development than typical VCs will fund, from fusion energy to a fast, inexpensive test for COVID-19. Other models may suit other institutions — but the nation needs to encourage many more such efforts, across the country, to reap the full benefits of our federal investment in science.

The implication here is that unless the results of federal investment in the sciences can be privatized, the country does not “reap the full benefits” of the federal investment. This makes the whole idea of a massively expanded federal research program a lot more sensible, politically, because it’s a massive redistribution of funds to, ultimately, Big Tech, which can buy up any successful ‘startups’ without any downside investment risk. And Big Tech now runs the country and has found a way to equate its global market share with national security, such that these things are now indistinguishable in any statement of U.S. policy.

This would all be fine, I guess, if not for the fact that science is different from technology in that science is not, and cannot be, a private endeavor. The only way science works is if you have an open vetting process that is constantly arguing with itself and forcing scientists to reproduce results. This global competition for scientific prestige through the conference and journal systems is what “keeps it honest”, which is precisely what allows it to be credible (Bourdieu, Science of Science and Reflexivity, 2004).

A U.S. strategy since basically the end of World War II has been to lead the scientific field, get first mover advantage on any discoveries, and reap the benefit of being the center of education for global scientific talent through foreign tuition fees and talented immigrants. Then it wields technology transfer as a magic wand for development.

Now this is backfiring a bit, because Chinese science students are returning to China to be entrepreneurial there and also to work for the government. The U.S. is discovering that science, being an open system, allows other countries to free-ride, and this is perhaps bothersome to it. The current administration seems to hate the idea of anybody free-riding off of something the U.S. is doing, though in the past those spillover effects (another name for the same thing!) were the basis of U.S. leadership. You can’t really have it both ways.

So the renaming of the NSF to the NSTF–with “technology” next to “science”–is concerning, because “technology” investment need not be openly vetted. Rather, given the emphasis on go-to-market strategy, it suggests that the scientific norms of reproducibility will be secondary to privatization through intellectual property law, including trade secrecy. This could be quite bad, because without a disinterested community of people vetting the results, what you’ll probably get is a lot of industrially pre-captured bullshit.

Let’s acknowledge for a minute that the success of most startups has little to do with the quality of the technology made and much to do with path dependency in network growth, marketing, and regulatory arbitrage. If the government starts a VC fund run by engineers with no upside, then that money goes into a bunch of startups which compete for the creative destruction of each other until one, large enough from cannibalizing the others, gets consumed by a FAANG company. It will, in other words, look like Silicon Valley today, which is not terribly efficient at discovery, because success is measured by the market. I.e., because (as Dr. Reif suggests) the return on investment is realized as capital accumulation.

This is all pretty backwards if what you’re trying to do is maintain scientific superiority. Scientific progress requires a functional economy of symbolic capital among scientists operating with intellectual integrity “for its own sake”, not at the behest of market conquest. The spillover effects and free-riding in science are a feature, not a bug, and they are difficult to reconcile with a foreign policy that is paranoid about technology transfer to competitors. Indeed, this is one reason why scientists are often aligned with humanitarian causes, world peace, etc.

Science is a good social structure with a lot going for it. I hope the new bill pours more money into it without messing it up too much.

Managerialism and Habermas

Managerialism is an “in” topic recently in privacy scholarship (Cohen, 2019; Waldman, 2019). In Waldman’s (2019) formulation, the managerialism problem is, roughly: privacy regulations are written with a certain substantive intent, but the for-profit firms that are the objects of these regulations interpret them either as a bothersome constraint on otherwise profitable activity, or else as means to the ends of profitability, efficiency, and so on. In other words, the substance of the regulations is subordinated to the goals of corporate management. Managerialism.

This is exactly what anybody who has worked in a corporate tech environment would expect. The scholarly accomplishment of presenting these bare facts to a legal academic audience is significant because employees of these corporations are most often locked up by strict NDAs. So while the point is obvious, I mean that in the positive sense that it should be taken as an unquestioned background assumption from now on, not that it shouldn’t have been “discovered” by this field in a different way.

As a “critical” observation, it stands. It raises a few questions:

  • Is this a problem?
  • If so, for whom?
  • If so, what can be done about it?

Here the “critical” method reaches, perhaps, its limits. Notoriously, critical scholarship plays on its own ambiguity, dancing between the positions of “criticism”, or finding of actionable fault, and “critique”, a merely descriptive account that is at most suggestive of action. This ambiguity preserves the standing of the critical scholar. They need never be wrong.

Responding to the situation revealed by this criticism requires a differently oriented kind of work.

Habermas and human interests

A striking thing about the world of policy and legal scholarship in the United States is that nobody is incentivized to teach or read anything written by past generations, however much it synthesized centuries of knowledge, and so nothing ever changes. For example, Habermas’s Knowledge and Human Interests (KHI), originally published in 1968, arguably lays out the epistemological framework we would want for understanding the managerialism issue raised by recent scholars. We should expect Habermas to anticipate the problems raised by capitalism in the 21st century because his points are based on a meticulously constructed, historically informed, universalist, transcendental form of analysis. This sort of analysis is not popular in the U.S.; I have my theories about why. But I digress.

A key point from Habermas (who is summing up and reiterating a lot of other work originating, if it’s possible to say any such thing meaningfully, in Max Weber) is that it’s helpful to differentiate between different kinds of knowledge based on the “human interests” that motivate them. In one formulation (the one in KHI), there are three categories:

  1. The technical interest (from techne) in controlling nature, which leads to the “empirical-analytic”, or positivist, sciences. These correspond to fields like engineering and the positivist social sciences.
  2. The pragmatic interest (from praxis) in mutual understanding, which would guide collective action and the formation of norms, leads to the “hermeneutic” sciences. These correspond to fields like history and anthropology and other homes of “interpretivist” methods.
  3. The emancipatory interest, in exposing what has been falsely reified as objective fact as socially contingent. This leads to the critical sciences, which I suppose corresponds to what is today media studies.

This is a helpful breakdown, though I should say it’s not Habermas’s “mature” position, which is quite a bit more complicated. However, it is useful for the purposes of this post because it tracks the managerialist situation raised by Waldman so nicely.

I’ll need to elaborate on one more thing before applying this to the managerialist framing, which is to skip past several volumes of Habermas’s oeuvre and get to Theory of Communicative Action, volume II, where he gets to the punchline. By now he has developed the socially pragmatic interest into the basis for “communicative rationality”: a discursive discipline in which individual interests are set aside in favor of a diversely perspectival but nevertheless measured conversation about how the social world should normatively be ordered. But where is this field in actuality? Money and power, the “steering media”, are always mussing up this conversation in the “public sphere”. So “public discourse” becomes a very poor proxy for communicative action. Rather–and this is the punchline–the actually existing field of communicative rationality, the one establishing substantive norms while nevertheless remaining “disinterested” with respect to the individual participants, is the law. That’s what legal scholarship is for.

Applying the Habermasian frame to managerialism

So here’s what I think is going on. Waldman is pointing out that whereas regulations are written with a kind of socially pragmatic interest in their impact on an imagined field of discursively rational participants, as represented by legal scholarship, corporate managers operate in the technical mode in order to, say, maximize shareholder profits, as is their legally mandated fiduciary duty. And so the meaning of the regulation changes, because words don’t contain meaning but rather take their meaning from the field in which they operate. A privacy policy that once spoke to human dignity gets misheard and speaks instead to the inconvenience of compliance costs and a PR department’s assessment of the competitive benefit of users’ trust.

I suppose this is bothersome from the legal perspective because it’s a bummer when something one feels is an important accomplishment of one’s field is misused by another. But I find the professional politics here, as everywhere, a bit dull and petty.

Crucially, the managerialism problem is not dull and petty–I wouldn’t be writing all this if I thought so. However, the frustrating aspect of this discourse is that, because of the absence of philosophical grounding in the debate, it misses what’s at stake. This is unfortunately characteristic of much American legal analysis. It’s missing because when American scholars address this problem, they do so primarily in the descriptive critical mode: one that is empirical and, in a sense, positivist, but without the interest in control. This critical mode leads to cynicism. It rarely leads to collective action. Something is missing.

Morality

A missing piece of the puzzle, one which cannot ever be accomplished through empirical descriptive work, is the establishment of the moral consequence of managerialism which is that human beings are being treated as means and not ends, in contradiction with the Kantian categorical imperative, or something like that. Indeed, it is this flavor of moral maxim that threads its way up through Marx into the Frankfurt School literature with all of its well-trod condemnation of instrumental reason and the socially destructive overreach of private capital. This is, of course, what Habermas was going on about in the first place: the steering media, the technical interest, positivist science, etc. as the enemy of politically legitimate praxis based on the substantive recognition of the needs and rights of all by all.

It would be nice, one taking this hard line would say, if all laws were designed with this kind of morality in mind, and if everybody who followed them did so out of a rationally accepted understanding of their import. That would be a society that respected human dignity.

We don’t have that. Instead, we have managerialism. But we’ve known this for some time. All these critiques are effectively mid 20th century.

So now what?

If the “problem” of managerialism is that when regulations reach the firms they are meant to regulate, their meaning changes into an instrumentalist distortion of the original, one might be tempted to combat this tendency with an even more forceful use of hermeneutic discourse, or an intense training in the socially pragmatic stance, such that employees of these companies put up some kind of resistance to the instrumental, managerial mindset. That strategy neglects the very real possibility that employees who do not embrace the managerial mindset will be fired. Only in the most rarefied contexts does discourse propel itself by its own force. We must presume that in the corporate context the dominance of managerialist discourse is in part due to a structural selection effect: good managers lead the company, are promoted, and so on.

So the angle on this can’t be a discursive battle with the employees of regulated firms. Rather, it has to be about corporate governance. This is incidentally absolutely what bourgeois liberal law ought to be doing, in the sense that it’s law as it applies to capital owners. I wonder how long it will be before privacy scholars begin attending to this topic.

References

Benthall, S. (2015). Designing networked publics for communicative action. Interface, 1(1), 3.

Bohman, J., & Rehg, W. (2007). Jürgen Habermas.

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, USA.

Habermas, J. (2015). Knowledge and human interests. John Wiley & Sons.

Waldman, A. E. (2019). Privacy Law’s False Promise. Washington University Law Review, 97(3).

Land value taxation

Henry George’s Progress and Poverty, first published in 1879, is dedicated

TO THOSE WHO, SEEING THE VICE AND MISERY THAT SPRING FROM THE UNEQUAL DISTRIBUTION OF WEALTH AND PRIVILEGE, FEEL THE POSSIBILITY OF A HIGHER SOCIAL STATE AND WOULD STRIVE FOR ITS ATTAINMENT

The book is best known as an articulation of the idea of a “Single Tax [on land]”, a circa-1900 populist movement to replace all taxes with a single tax on land value. This view influenced many later land reform and taxation policies around the world; the modern name for this sort of policy is Land Value Taxation (LVT).

The gist of LVT is that the economic value of owning land comes both from the land itself and from the improvements built on top of it. The value of the underlying land is “unearned”–it requires no labor to maintain, and comes mainly from the artificial monopoly right over the land’s use. That value can be taxed and redistributed without distorting incentives in the economy.
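To make the mechanics concrete, here is a toy calculation with entirely made-up numbers (in practice, separating land value from improvement value in the assessment is the hard part):

```python
# Toy LVT calculation; all figures hypothetical.
market_value = 500_000       # what the whole parcel would sell for
improvement_value = 300_000  # the building: produced by labor and investment
land_value = market_value - improvement_value  # the "unearned" residual

lvt_rate = 0.05  # hypothetical annual tax rate on unimproved land value only
annual_tax = lvt_rate * land_value

# Improving the building raises improvement_value but not the tax bill,
# which is the non-distortion argument in a nutshell.
print(annual_tax)  # -> 10000.0
```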

Phillip Bess’s 2018 article provides an excellent summary of the economic arguments in favor of LVT. Michel Bauwens’s P2P Foundation article summarizes where it has been successfully put in place. Henry George was an American, but Georgism has largely been an export. General MacArthur was, it has been said, a Georgist, and this accounts for some of the land reform in Asian countries after World War II. Singapore, which owns and leases out most of its land, is organized under roughly Georgist principles.

This policy is neither “left” nor “right”. Wikipedia has sprouted an article on geolibertarianism, a term that seems to me a bit sui generis. The 75th-anniversary edition of Progress and Poverty, published in 1953, points out that one of the promises of communism is land reform, but argues that this is a false promise. Georgist land reform, rather, is enlightened and compatible with market freedoms, etc.

I’ve recently dug up my copy of Progress and Poverty and begun to read it. I’m interested in mining it for ideas. What is most striking about it, to a contemporary reader, is the earnest piety of the author. Henry George was clearly quite a religious man, and wrote his lengthy and thorough political-economic analysis of land ownership out of a sincere belief that he was promoting a new world order that would preserve civilization from collapse under the social pressures of inequality.