Digifesto

instrumental realism and reproducibility in science and society

In Instrumental Realism, Ihde gives a complimentary treatment of Ackermann’s Data, Instruments, and Theory (1985), which is positioned as a rebuttal to Kuhn. It is a defense of the idea of scientific progress, an idea much disliked by critical scholarship. The key issue is the relativistic attack on scientific progress that points out, for example, the ways in which theory shapes observation, thereby undermining the objectivity of observation. Ackermann’s rebuttal is that science progresses not through the advance of theory, but rather through the advance of instrumentation. Instruments allow data to be collected independently of theory. This creates and bounds “data domains”–fields of “data text” that can then be the site of scientific controversy and resolution.

The paradigmatic scientific instruments in Ackermann’s analysis are the telescope and the microscope. But it’s worthwhile thinking about what this means for the computational tools of “data science”.

Certainly, there has been a great deal of work done on the design and standardization of computational tools, and these tools work with ever-increasing speed and robustness.

One of the most controversial points made in research today is the idea that the design and/or use of these computational tools encodes some kind of bias that threatens the objectivity of their results.

One story, perhaps a straw man, for how this can happen is this: the creators of these tools have (perhaps unconscious) theoretical presuppositions that are the psychological encoding of political power dynamics. These psychological biases impact their judgment as they design and use the tools. The sociotechnical system is therefore biased because the people in it are biased.

Ackermann’s line of argument suggests that the tools, if well designed, will create a “data domain” that might be interpreted in a biased way, but that this concern is separable from the design of the tools themselves.

A stronger (but then perhaps even harder to defend) argument would be that the tools themselves are designed in such a way that the data domain is biased.

Notably, the question of scientific objectivity depends on a rather complex and therefore obscure supply chain of hardware and software. Locating the bias in it must be extraordinarily difficult. In general, the solution to handling this complexity must be modularity and standardization: each component is responsible for something small and well understood, which provides a “data domain” available for downstream use. This is indeed what the API design of software packages is doing. The individual components are tested for reproducible performance and indeed are so robust that, like most infrastructure, we take them for granted.
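As a toy illustration of the modularity point, here is a minimal sketch, assuming nothing beyond Python’s standard library; the function and its interface are invented for illustration, not drawn from any particular package.

```python
# A minimal sketch of modular API design: a small, well-understood
# component whose documented output forms a "data domain" for
# downstream consumers. The names here are illustrative assumptions.
from statistics import mean, stdev

def summarize(measurements: list[float]) -> dict:
    """Reduce raw measurements to a small, well-specified summary."""
    return {
        "n": len(measurements),
        "mean": mean(measurements),
        "stdev": stdev(measurements),
    }

# Downstream code depends only on the documented keys ("n", "mean",
# "stdev"), not on how the summary was computed internally.
print(summarize([1.0, 2.0, 3.0, 4.0]))
```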

The push for “reproducibility” in computational science is a further example of refinement of scientific instruments. Today, we see the effort to provide duplicable computational environments with Docker containers, with preserved random seeds, and appropriately versioned dependencies, so that the results of a particular scientific project are maintained despite the constant churn of software, hardware, and networks that undergird scientific communication and practice (let alone all the other communication and practice it undergirds).
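A minimal sketch of the seed-preservation part of this practice, assuming NumPy and an arbitrary seed value chosen for illustration:

```python
# Reproducibility sketch: fix and record random seeds so that a
# computation yields identical results on re-run, provided library
# versions are also pinned (e.g., in a requirements.txt or a Docker
# image). The seed value here is an arbitrary illustration.
import random
import numpy as np

SEED = 42  # recorded alongside the results

random.seed(SEED)
rng = np.random.default_rng(SEED)  # NumPy's Generator API

sample = rng.normal(loc=0.0, scale=1.0, size=5)
print(sample)  # identical on every run with the same seed and versions
```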

The fetishization of technology today has many searching for the location of societal ills within the modules of this great machine. If society, running on this machine, has a problem, there must be a bug in it somewhere! But the modules are all very well tested. It is far more likely that the bug is in their composition. An integration error.

The solution (if there is a solution, and if there isn’t, why bother?) has to be to instrument the integration.

Sources of the interdisciplinary hierarchy

Lyotard’s 1979 treatise The Postmodern Condition tells a prescient story about the transformation of the university. He discusses two “metanarratives” used for the organization of universities: the German Humboldt model of philosophy as the central discipline, with all other fields of knowledge radiating out from it; and the French model of the university as the basis of the education of the modern democratic citizen. Lyotard argues (perhaps speciously) that because of what the late Wittgenstein had to say about the autonomy of language games (there are no facts; there are only social rules) and because of cybernetics (the amalgamation of exact and applied sciences that had been turned so effectively towards the control of human and machine), the metanarratives had lost their legitimacy. There was only “legitimation by performativity”, knowledge proving itself by virtue of its (technical) power, and “legitimation by paralogy”, knowledge legitimizing itself through semantic disruption, creating pools of confusion in which one could still exist though out of alignment with prevailing cybernetic logics.

This duality–between cybernetics and paralogy–excludes a middle term identified in Habermas’s 1968 Knowledge and Human Interests. Habermas identifies three “human interests” that motivate knowledge: the technical interest (corresponding to cybernetic performativity), the emancipatory interest (perhaps corresponding to the paralogic turn away from cybernetic performativity), and, thirdly, the hermeneutic interest. The latter is the interest in the mutual understanding that makes collective life possible. As Habermas’s work matures, this interest emerges as the deliberative, consensual basis of law.

These frameworks for understanding knowledge and the university share an underlying pragmatism. Both Lyotard and Habermas seem to agree about the death of the Humboldt model: knowledge for its own sake is a deceased metanarrative. Knowledge for democratic citizens, the purportedly French model in Lyotard, appears in Habermas as knowledge of shared historical narratives and agreement about norms. Lyotard was pessimistic about the resilience of these kinds of norms under the pressure of cybernetics. Indeed, this tension between “smart technology” and “rule of law” remains, expressed in the work of Hildebrandt. The question of whether technical knowledge threatens or delegitimizes legal/hermeneutic knowledge is still with us today.

These intellectual debates are perhaps ultimately about university politics and academic disciplines. If they are truly _ultimately_ about that, that marks their limitation. For what the pragmatist orientation towards knowledge implies is that knowledge does not exist for its own sake, but rather, in most cases, for its application. Philosophers can therefore only achieve so much by appealing to generalized interests. All real applications are contextualized.

Two questions unanswered by these sources (at least in this assuredly impoverished schematic of their arguments) are:

  • Whence the interests and applications that motivate the university as socially and economically situated?
  • What accounts for the tensions between the technical/performative disciplines and the hermeneutic and emancipatory ones?

In 1979, the same publication year as The Postmodern Condition, Pierre Bourdieu published Distinction: A Social Critique of the Judgement of Taste. While not in itself an epistemology, Bourdieu’s method and conclusions provide a foundation for later studies of science, journalism, and the university. Bourdieu’s insight is that aesthetic taste–in art, in design, in hobbies, etc.–is a manifestation of socioeconomic class understood in terms of a multidimensional matrix of forms of capital: economic capital in wealth, social capital in status and prestige, and cultural capital in knowledge and skills. Those with great wealth and low cultural capital–the nouveau riche–will value expensive, conspicuous consumption. Those with low wealth and high cultural capital–academics, perhaps–will value intricate works that require time and training to understand, and so on. But these preferences exist to maintain the social structures of (multiply defined) capital accumulation.

A key figure in Bourdieu’s story is the petite bourgeoisie, the transitional middle class that has specialized its labor, created perhaps a small business, but has not accumulated capital in a way that secures it in the situation to which it aspires. In today’s economy, these might include the entrepreneurs–those who would, by their labor, aspirationally transform themselves from laborers into capitalists. They would do this by the creation of technology–the means of production, capital. Unlike labor applied directly to the creation of goods and services as commodities, capital technologies, commodified through the institution of intellectual property, have the potential to scale in use well beyond the effort of their creation and, through Schumpeterian disruption, make their creators wealthy enough to change their class position. On the other hand, there are those who prefer the academic lifestyle, who luxuriate in the study of literature and critique. Through the institutions of critical academia, these are also jobs that can be won through the accumulation of capital, in this case social and cultural capital. By design, these are fields of knowledge that exist for their own sake. There are also, of course, the law and social scientific disciplines that are helpful for the cultural formation of politicians, legislators, and government workers of various kinds.

Viewed in this way, we can start to see “human interests” not merely as transcendental features of the general human condition, but rather as the expression of class and capital interests. This makes sense given the practical reality that universities get most of their income through tuition. Students attend universities in order to prepare themselves for careers. The promise of a professional career allows universities to charge higher tuition. Where the upper classes choose to compete on intangible cultural capital rather than economic capital, universities maintain specialized disciplinary tracks in the humanities.

Notably, the emancipatory role of the humanities, lauded by Habermas, subtly lampooned (perhaps) by Lyotard, is in other works more closely connected to leisure. As early as 1947, Horkheimer, in Eclipse of Reason, points out that the kind of objective reason he sees as essential to the moral grounding of society–a grounding otherwise derailed by capitalism–relies on leisure time, which is a difficult class attainment. In perhaps cynical Bourdieusian terms, the ability to reflect on the world and decide, beyond the restrictions of material demands, on an independent or transcendent system of values is itself a form of cultural accumulation of the most rarefied kind. However, as this form of cultural attainment is not connected directly to any means of production, it is perhaps a mystery what grounds it pragmatically.

There’s an answer. It’s philanthropy. The arts and humanities, the idealistic independent policy think tanks, and so on, are funded by those who, having accumulated economic capital and the capacity for leisurely thinking about the potential for a better world, have allocated some portion of their wealth towards “causes”. The competition for legitimacy between and among philanthropic causes is today a major site of politics and ideology. Most obviously, political parties and candidacies run on donations, which are in a sense a form of values-driven philanthropy. The appropriation of state funds, or not, for particular causes becomes, at the end of the day, a battlefield of all forms of capital.

This is all understandable from the perspective that is now truly at the center of the modern university: the perspective of business administration. Ever since Herbert Simon, it has been widely known that the managerialist discipline and computational and cybernetic sciences are closely aligned. The economic sociology of Bourdieu is notable in that it is a successor to the sociology of Marx, but also a successor to the phenomenological approach of Kant, and yet is ultimately consistent with the managerialist view of institutions relying on skilled capital management. Disciplines or sub-disciplines that are peripheral to these core skillsets by virtue of their position in the network of capital flows are marginal by definition.

This accounts for much of interdisciplinary politics and grievance. The social structures described here account for the teleological dependency structure of different forms of knowledge: what it is possible to motivate, and with what. To the extent that a discipline as a matter of methodological commitment is unable to account for this social structure, it will be dependent on its own ability to perpetuate itself autonomously through the stupefaction of its students.

There is another form of disciplinary dependency worth mentioning. It cuts the other way: it is the dependency that arises from the infrastructural needs of knowledge institutions. This instrumental dependency is where this line of reasoning connects with Ihde’s instrumental realism as a philosophy of science. Here, too, there are disciplines that are blind to themselves. To the extent that a discipline is unable to account for the scientific advances necessary for its own work, it survives through the heroics of performative contradiction. There may be cases where an institution has developed enough teleological autonomy to reject the knowledge behind its own instrumentation, but in these cases we may be tempted to consider the knowledge claims of the former to be specious and pretentious. What purpose does fashionable nonsense have, if it rejects the authority of those that it depends on materially? “Those” here referring to those classes that embody the relevant infrastructural knowledge.

The answer is perhaps best addressed using the Bourdieusian insights already discussed: an autonomous field of discourse that denies its own infrastructure is a cultural market designed to establish a distinct form of capital, an expression of leisure. The rejection of performativity, or a tenuous and ambiguous connection to it, becomes a class marker: synecdochic with leisure itself, which can then be held up as an estimable goal. Through Lyotard’s analysis, we can see how a field so constructed might be successful through the rhetorical power of its own paralogic.

What has been lost, through this process, is the metanarrative of the university, most especially of the university as an anchor of knowledge in itself. The pragmatist cybernetic knowledge orientation entails that the university is subsumed to wider systems of capital flows, and the only true guarantee of its autonomy is philanthropic endowment which might perpetuate its ability to develop a form of capital that serves its own sake.

TikTok, Essential Infrastructure, and Imperial Regulation by Golden Share

As Microsoft considers acquiring TikTok’s American operations, President Trump has asked that the Federal Treasury own a significant share. This move is entirely consistent with this administration’s technology regulation principles, which see profitable telecommunications and digital services companies as both a cybersecurity attack surface and a prized form of capital that must be “American owned”. Surprising, perhaps, is the idea of partial government ownership. However, this idea has been floated recently by a number of scholars and think tanks. Something like Treasury ownership of company shares could set up institutions that serve not just American economic, but also civic, interests.

Jake Goldenfein and I have recently published a piece in Phenomenal World, “Essential Infrastructures”. It takes as its cue the recent shift of many “in person” activities onto Zoom during COVID-19 lockdowns to review the prevailing regulatory regimes governing telecommunications infrastructure and digital services. We trace the history of Obama-era net neutrality, grounded in an idea of the Internet as a public utility or essential facility. We then show how, in the Trump administration, a new regime based on national and economic security directed Federal policy. We close with some policy recommendations moving forward.

A significant turning point during the Trump administration has been the shift away from an emphasis on the domestic and foreign provision of an open Internet in order to provide a competitive market for digital services, and towards the idea that telecom infrastructure and digital services are powerful behemoths that, as critical infrastructure, are vulnerable attack surfaces of the nation but also perhaps the primary form of wealth and source of rents. As any analysis of the stock market, especially since the COVID-19 lockdowns, would tell you, Big Tech has been carrying the U.S. stock market while other businesses crumble. These new developments continue the trend of the past several years of corporate concentration, and show some of the prescience of the lately hyperactive CFIUS regulatory group, which prevents foreign investment in this “critical infrastructure”. This is a defense of American information from foreign investors; it is also a defense of American wealth from competition over otherwise publicly traded assets.

Under the current conditions of markets and corporate structure, which Jake and I analyze in our other recent academic paper, we have to stop looking at “AI” and “data science” as technologies and start looking at them as forms of capital. That is how CFIUS is looking at them. That is how their investors and owners look at them. Many of the well-intentioned debates about “AI ethics” and “technology politics” are eager to keep the conversation in more academically accessible and perhaps less cynical terms. By doing so, they miss the point.

In the “Essential Infrastructures” article, we are struggling with this confluence of the moral/political and the economic. Jake and I are both very influenced by Helen Nissenbaum, who would be quick to point out that when social activities that normally depend on the information affordances of in-person communication go on-line, there is ample reason to suspect that norms will be violated and that the social fabric will undergo an uncomfortable transformation. We draw attention to some of the most personal aspects of life–dating/intimacy, family, religion, and more broadly civil society–which have not historically depended on private capital as infrastructure as much as they do now. Of course, this is all relative and society has been trending this way for a long time. But COVID lockdowns have brought this condition to a new extreme.

There will be those that argue that there is nothing alarming about every aspect of human life being dependent on private capital infrastructure designed to extract value from them for their corporate owners. Some may find this inevitable, tolerable, even desirable. We wonder who would make a sincere, full-throated defense of this future world. We take a different view. We take as an assumption that the market is one sphere among many and that maintenance of the autonomy of some of the more personal spheres is of moral importance.

Given (because we see no other way about it) that these infrastructures are a form of capital, how can the autonomy of the spheres that depend on them be preserved? In our article, our proposal is that the democratic state provide additional oversight and control over this capital. The state is always an imperfect representative of individuals in their personal domains, but it is better than nothing.

We propose that states can engage with infrastructure-as-capital directly as owners and investors, just as other actors interact with it. This proposal accords with other similar proposals for how states might innovate in their creation and maintenance of sovereign wealth since COVID. The editorial process for our piece was thorough. Since we drafted it, we have found others who have articulated the logic and general value of this approach better than we have.

The Berggruen Institute’s Gilman and Feygin (2020) have been active this year in publishing new policy research that is consistent with what we’re proposing. Their proposal for a “mutualist economy”, wherein a “national endowment” is built from public investment in technology and intellectual property, which is then either distributed to citizens as Universal Basic Capital or used as a source of wealth by the state, is cool. The Berggruen Institute’s Noema magazine has published the thoughts of Ray Dalio and Joseph Stiglitz about using this approach for corporate bailouts in response to COVID.

These are all good ideas. Our proposal differs only slightly. If the national endowment is built from shares in companies that are bailed out during COVID, then the national endowment is unlikely to include those successful FANG companies that are so successfully disrupting and eating the lunch of the companies getting bailed out. It would be too bad if the national endowment included only those companies that are failing, while the tech giants on which civil society and the state are increasingly dependent attract all the real value to be had.

In our article, we are really proposing that governments–whether federal, state, or even municipal–get themselves a piece of Amazon, Google, and Verizon. The point here is not simply to get more of the profit generated by these firms into democratic coffers. Rather, the point is to shift the balance of power. Our proposal is perhaps more aligned with Hockett and Omarova’s (2017) proposal for a National Investment Authority, and more specifically Omarova’s proposal of a “golden share approach” (2016). Recall that much of the recent activity of CFIUS has been motivated by the understanding that significant shareholders in a private corporation have rights to access information within it. This is why blocking foreign investment in companies has been motivated under a “cybersecurity” rationale. If a foreign owner of, say, Grindr, could extract compromising information from the company in order to blackmail U.S. military personnel, then it could be more difficult to enforce the illegality of that move.

In the United States, there is a legal gap in the regulation of technology companies domestically, given their power over personal and civic life. In a different article (2020, June), we argued that technology law and ethics needs to deal with technology as a corporation, rather than as a network or assemblage of artifacts and individuals. This is difficult, as these corporations are powerful, directed by shareholders to whom they have a fiduciary duty to maximize profits, and very secretive about their operations. “Sovereign investment”–or, barring that, something similar at a state or local level–would give governments a legal way to review the goings-on in companies that they hold shares in. This information access alone could enable further civic oversight and regulatory moves by the government.

When we wrote our article, we did not imagine that soon after it was published the Trump administration would recommend a similar policy for the acquisition of foreign-owned companies that it is threatening to boot off the continent. However, this is one way to get leverage on the problem of how the government can acquire, at low cost, something that is already profitable.

This will likely scare foreign-owned technology companies off of doing business in the U.S. And a U.S.-owned company is likely to fall afoul of other national markets. However, since the Snowden revelations, U.S. companies have been seen, overseas, as extensions of the U.S. state. Schrems II solidifies that view in Europe. Technology markets are already global power-led spheres of influence.

References

Benthall, S., & Goldenfein, J. (2020, June). Data Science and the Decline of Liberal Law and Ethics. In Ethics of Data Science Conference, Sydney.

Gilman, N., & Feygin, Y. (2020, April). The Mutualist Economy: A New Deal for Ownership. Whitepaper. Berggruen Institute.

Gilman, N., & Feygin, Y. (2020, June). Building Blocks of a National Endowment. Whitepaper. Berggruen Institute.

Hockett, R. C., & Omarova, S. T. (2017). Private Wealth and Public Goods: A Case for a National Investment Authority. J. Corp. L., 43, 437.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Omarova, S. T. (2016). Bank Governance and Systemic Stability: The Golden Share Approach. Ala. L. Rev., 68, 1029.

Starting to read Schumpeter

I’ve started reading Schumpeter’s Capitalism, Socialism, and Democracy (1942).

[Image: Joseph Schumpeter]
It’s Schumpeter.

Why? Because of the Big Tech anti-trust hearings. I’ve heard that:

(a) U.S. anti-trust policy is based on a theory of monopoly pricing which is not bearing out with today’s Big Tech monopolies,

(b) possibly those monopolies are justified on the basis of Schumpeterian “creative destruction” competition, wherein one monopoly gets upended by another in sequence, rather than having many firms competing all at once on the market,

(c) one of the major shots taken at Amazon in the hearings is that it would acquire companies that it saw as a threat, indicating a strategic understanding of Schumpeterian competition on the part of e.g. Bezos, and also how one can maintain a monopolistic position despite that competition,

(d) this idea of capitalism and entrepreneurship seems fundamentally correct, still somehow formally undertheorized, and tractable with some of the simulation methods I’ve been learning recently with Econ-ARK and NYU’s ABM Lab.

All good signs. But who was Schumpeter and what did he think? I can’t really say I know. So I’m returning to my somewhat antiquated method/habit/hobby of Actually Reading the Book.

A few striking things about the book based entirely on its Prefaces (1942, and the later one from 1946):

  • Schumpeter is quite consciously trying to make accurate descriptive claims without normative policy implications, and he’s kind of annoyed by readers who think he’s doing anything but objective analysis. His enemy is ideology. He apparently gets misunderstood a lot as a result. I think I can hang with this dude.
  • The first section of this book is dedicated to a long treatment of the work of Karl Marx. This opens with the idea that Karl Marx is a great theorist not so much because he’s right or wrong, but because his ideas survive from generation to generation. This view of theoretical greatness prefigures, I think, his view of economic greatness: an evolutionary battle of competing beings whose success is defined by their Darwinian survival. Schumpeter takes on Marx with great respect. I expect him to go on to dismantle Marx, though he agrees with Marx that capitalism ends up destroying itself through its accomplishments. He says this as a pro-capitalist, which is interesting.
  • He points out, somewhat amusingly, that Marx is popular (at the time of his writing, the 1940s) in Russia, where his work has been misinterpreted by the Bolsheviks, and, for some reason that mystifies him, in the United States, but not in the place most deeply familiar with Marx, which is Germany. German socialists, he notes, reason just like economists everywhere else. Since I find that in academic circles Marxist ideas are still fashionable, but other forms of economics, let alone socialist economics, are less so, I have to see Schumpeter as making yet another enduring point here.
  • In the 1946 preface, he mentions an objection by professional economists to his work: while Schumpeter predicts that profits in capitalism will fall over time, this view is critiqued because it does not apparently take into account the return on salesmanship, or something like that. Schumpeter then says something interesting: sales is considered as the wages of management. What he’s talking about is the profitability of new goods, new production methods, new processes, etc.: i.e., the sort of stuff that would be actually valuable, directly or indirectly, to consumers. This is interesting. Because given a Herbert Simon view of organizations, management processes are precisely what have been changing so dramatically with the “tech economy”–all this AI stuff is really just about streamlining management processes, sales, etc. So: what does it mean if Schumpeterian competition winds up being nullified by monopolies of managerial power, as opposed to monopolies of something more substantive? This whole complex of information technology and management being produced and marketed as commodities or securities or something else–what we might in a very extended sense call capital markets–is just the sort of thing that neither Marx nor most early economists would get, and it is what actually dominates the economy now. So, let us proceed.

Instrumental realism — a few key points

Continuing my reading of Ihde (1991), I’m getting to the meat of his argument, where he compares and contrasts his instrumental realist position with two contemporaries: Heelan (1989), who, Ihde points out, holds doctorates in both physics and philosophy and so might be especially capable of philosophizing about physics praxis, and Hacking (1983), who is from my perspective the most famous of the three.

Ihde argues that he, Hacking, and Heelan are all more or less instrumental realists, but that Ihde and Heelan draw more from the phenomenological tradition, which emphasizes embodied perception and action, whereas Hacking is more in the Anglo-American ‘analytic’ tradition of starting from analysis of language. Ihde’s broader argument in the book is one of convergence: he uses the fact that many different schools of thought have arrived at similar conclusions to support the idea that those conclusions are true. That makes perfect sense to me.

Broadly speaking, instrumental realism is a position that unites philosophy of science with philosophy of technology to argue that:

  • Science is able to grasp, understand, and theorize the real.
  • This reality is based on embodied perception and praxis, or, in the more analytic framing, on observation and experiment.
  • Scientific perception and praxis is able to go “beyond” normal, everyday perception and praxis because of its use of scientific instruments, of which the microscope is a canonical example.
  • This position counters many simple relativistic threats to scientific objectivity and integrity, but does so by placing emphasis on scientific tooling. Science advances, mainly, by means of the technologies and infrastructures that it employs.
  • This position is explicitly embodied and materialist, counter to many claims that scientific realism depends on its being disembodied or transcendental.

This is all very promising though there are nuances to work out. Ihde’s study of his contemporaries is telling.

Ihde paints Heelan as a compelling thinker on this topic, though a bit blinkered by his emphasis on physics as the true or first science. Heelan’s view of scientific perception is that it is always both perception and measurement. Because Heelan is what Ihde calls a “Euro-American” (which I think is quite funny), Ihde can describe him as therefore saying that scientific observation is both a matter of perception-praxis and a matter of hermeneutics–by which I mean the studying of a text in community with others or, to use the more Foucauldean term, “discourse”. Measurement, somewhat implicitly here, is a kind of standardized way of “reading”. Ihde makes a big deal out of the subtle differences between “seeing” and “reading”.

To the extent that “discourse”, “hermeneutics”, “reading”, etc. imply a weakness of the scientific standpoint, they weigh against the ‘realism’ of instrumental realism. However, the term measurement is telling in that the difference between, say, different units of measurement of length, mass, time, etc. does not challenge the veracity of the claim “there are 24 hours in a day” because translating between different units is trivial.

Ihde characterizes Hacking as a fellow traveler, converging on instrumental realism when he breaks from his own analytic tradition to point out that experiment is one of the most important features of science, and that experiment depends on and is advanced by instrumentation. Ihde writes that Hacking is quite concerned about “(a) how an instrument is made, particularly with respect to theory-driven design, and (b) the physical processes entailed in the ‘how’ or conditions of use.” Which makes perfect sense to me–that’s exactly what you’d want to scrutinize if you’re taking the ‘realism’ in instrumental realism seriously.

Ihde’s positions here, like the positions of his contemporaries, seem perfectly reasonable to me. I’m quite happy to adopt this view; it corresponds to conclusions I’ve reached in my own reading and practice, and it’s nice to have a solid reference and term for it.

The questions that come up next are how instrumental realism applies to today’s controversies about science and technology. Just a handful of notes here:

  • I work quite a bit with scientific software. It’s quite clear to me that scientific software development is a major field of scientific instrumentation today. Scientists “see” and “do” via computers and software controls. This has made “data science” a core aspect of 21st century science in general, as it’s the part of science that is closest to the instrumentation. This confirms my long-held view that scientific software communities are the groups to study if you’re trying to understand the sociology of science today.
  • On the other hand, it’s becoming increasingly clear in scientific practice that you can’t do software-driven science without the Internet and digital services, and these are now controlled by an oligopoly of digital services conglomerates. The hardware infrastructure–data centers, caching services, telecom broadly speaking, cloud computing hubs–goes far beyond the scientific libraries. Scientific instrumentation depends critically now on mass corporate IT.
  • These issues are compounded by how Internet infrastructure–now privately owned and controlled for all intents and purposes–is also the instrument of so much social science research. Don’t get me started on social media platforms as research tools. For me, the best resource on this is Tufekci, 2014.
  • The most hot-button, politically charged critique in the philosophy of science space is that science and/or data science and/or AI as it is currently constituted is biased because of who is represented in these research communities. The position being contested is the idea that AI/data science/computational social science etc. is objective because it is designed in a way that aligns with mathematical theory.
    • I would be very interested to read something connecting postcolonial, critical race, and feminist AI/data science practices to instrumental realism directly. I think these groups ought to be able to speak to each other easily, since the instrumental realists from the start are interested in the situated embodiment of the observer.
    • On the other hand, I think it would be difficult for the critical scholars to find fault in the “hard core” of data science/computing/AI technologies/instruments because, truly, they are designed according to mathematical theory that is totally general. This is what I think people mean when they say AI is objective because it’s “just math”. AI/data science praxis makes you sensitive to what aspects of the tooling are part of the core (libraries of algorithms, based on vetted mathematical theorems) and what are more incidental (training data sets, for example, or particular parameterizations of the general algorithms); see the sketch after this list. If critical scholars focused on these parts of the scientific “stack”, and didn’t make sweeping comments that sound like they implicate the “core”, which we have every reason to believe is quite solid, they would probably get less resistance.
    • On the other hand, if science is both a matter of perception-praxis and hermeneutics, then maybe the representational concerns are best left on the hermeneutic side of the equation.
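Here is a minimal sketch of that core/incidental distinction, assuming NumPy; the datasets are invented for illustration. The fitting algorithm (ordinary least squares) is the same vetted mathematics in both runs; only the incidental training data differs, and so does the fitted model.

```python
# Same "core" algorithm (least squares via np.polyfit), different
# "incidental" training data: the core is general mathematics, but
# the fitted model inherits whatever is in the sample it was fed.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
sample_a = np.array([0.0, 0.9, 2.1, 2.9])   # one sampling of the world
sample_b = np.array([0.5, 2.0, 3.5, 5.0])   # a differently collected sample

fit_a = np.polyfit(x, sample_a, deg=1)  # slope and intercept from sample A
fit_b = np.polyfit(x, sample_b, deg=1)  # a different model from sample B
print(fit_a, fit_b)
```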

References

Hacking, I. (1983). Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge University Press.

Heelan, P. A. (1989). Space-perception and the philosophy of science. Univ of California Press.

Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology (Vol. 626). Indiana University Press.

Tufekci, Z. (2014, May). Big questions for social media big data: Representativeness, validity and other methodological pitfalls. In Eighth International AAAI Conference on Weblogs and Social Media.

Considering Agre: More Merleau-Ponty, less Heidegger, in technology design, please

I’ve had some wonderful interlocutors lately. One (in private, and therefore anonymously) has recommended Don Ihde’s postphenomenology of science. I’ve been reading and enjoying Ihde’s Instrumental Realism (1991) and finding it very fruitful. Ihde is influential in some contemporary European theories of the interaction between law and technology. Tapan Parikh has (on Twitter) asked me why I haven’t been engaging more with Agre (e.g., 1997). I’ve been reminded by him and others of work in “critical HCI”, a field I encountered a lot in graduate school, which has its roots in, perhaps, Suchman (1987).

I don’t like and have never liked critical HCI and have resented its pretensions of being “more ethical” than other fields of technological design and practice for many years. I state this as a psychological fact, not as an objective judgment of the field. This morning I’m taking a moment to meditate on why I feel this way, and what that means for my work.

Agre (1997) has some telling anecdotes about being an AI researcher at MIT and becoming disillusioned upon encountering phenomenology and ethnomethodological work. His problem began with a search for originality.

My college did not require me to take many humanities courses, or learn to write in a professional register, and so I arrived in graduate school at MIT with little genuine knowledge beyond math and computers. …

My lack of a liberal education, it turns out, was only half of my problem. Only much later did I understand the other half, which I attribute to the historical constitution of AI as a field. A graduate student is responsible for finding a thesis topic, and this means doing something new. Yet I spent much of my first year, and indeed the next couple of years after my time away, trying very hard in vain to do anything original. Every topic I investigated seemed driven by its own powerful internal logic into a small number of technical solutions, each of which had already been investigated in the literature. …

Often when I describe my dislike for e.g. Latour, people assume that I’m on a similar educational path to Agre’s: that I am a “technical person”, perhaps with a “mathematical mind”, that I’ve never encountered any material that would challenge what has now solidified as the STEM paradigm.

That’s a stereotype that does not apply to me. For better or for worse, I had a liberal arts undergraduate education with exposure to technical subjects, social sciences, and the humanities. My graduate school education was similarly interdisciplinary.

There are people today who are advocates of critical HCI and design practices in the tradition of Suchman, Agre, and so on who have a healthy exposure to STEM education. There are also many who do not, and who employ this material as a kind of rearguard action to treat any less “critical” work as intrinsically tainted with the same hubris that the AI field exhibited in, say, the 80’s. This is ahistorical and deeply frustrating. These conversations tend to end when the “critical” scholar insists on the phenomenological frame–arguing either implicitly or explicitly that (post-)positivism is unethical in and of itself.

It’s worth tracing the roots of this line of reasoning. Often, variations of it are deployed rhetorically in service of the cause of bringing greater representation of marginalized people into the field of technical design. It’s somewhat ironic that, as Duguid (2012) helpfully points out, this field of “critical” technology studies, drawing variously on Suchman, Dreyfus, Agre, and ultimately Latour and Woolgar, is ultimately rooted in Heidegger. Heidegger’s affiliation with Nazism is well-known, boring, and in no way a direct refutation of the progressive deployments of critical design.

But back to Agre, who goes on to discuss his conversion to phenomenology. Agre’s essay is largely an account of his rejection of the project of technical creation as a goal.

… I was unable to turn to other, nontechnical fields for inspiration. … The problem was not exactly that I could not understand the vocabulary, but that I insisted on trying to read everything as a narration of the workings of a mechanism. By that time much philosophy and psychology had adopted intellectual styles similar to that of AI, and so it was possible to read much that was congenial — except that it reproduced the same technical schemata as the AI literature. …

… I was also continually noticing the many small transformations that my daily life underwent as a result of noticing these things. As my intuitive understanding of the workings of everyday life evolved, I would formulate new concepts and intermediate on them, whereupon the resulting spontaneous observations would push my understanding of everyday life even further away from the concepts that I had been taught. … It is hard to convey the powerful effect that this experience had upon me; my dissertation (Agre 1988), once I finally wrote it, was motivated largely by a passion to explain to my fellow AI people how our AI concepts had cut us off from an authentic experience of our own lives. I still believe this.

Agre here is connecting the hegemony of cognitive psychology and AI in the period he is writing about to his realization that “authentic experience” had been “cut off”. This is so Heideggerean. Agre is basically telling us that he independently came to Heidegger’s conclusions because of his focus on “everyday life”.

This binary between “everyday life” or “lived experience” on the one hand and the practice of AI design on the other is repeated often by critical scholars today. Critical scholars with no practical experience in contemporary data science often assume that the AI of the 80’s is the same as machine learning practice today. This is an unsupported assumption directly contradicted by the lived experience of those who work in technical fields. Unfortunately, the success of the Heideggerean binary allows those whose lived experience is “not technical” to claim that their experience has a kind of epistemic or ethical priority, due to its “authenticity”, over more technical experience.

This is devastating for the discourse around the now ubiquitous and politically vital topics of the politics of technology. If people have to choose between either doing technical work or doing critical Heideggerean reflection on that work, then by definition all technical work is uncritical and therefore lacking in the je ne sais quoi that gives it “ethical” allure. In my view, this binary is counterproductive. If “criticality” never actually meets technical practice, then it can never be a way to address problems caused by poor technical design. Rather, it can only be a form of institutional sublimation of problematic technical practices. The critical field is sustained by, parasitic on, bad technical design: if the technology were better, then the critical field would not be able to feed so successfully on the many frustrations and anxieties of those that encounter it.

Agre ultimately gives up on AI to go critical full time.

… My purpose here, though, is to describe how this experience led me into full-blown dissidence within the field of AI. … In order to find words for my newfound intuitions, I began studying several nontechnical fields. Most importantly, I sought out those people who claimed to be able to explain what is wrong with AI, including Hubert Dreyfus and Lucy Suchman. They, in turn, got me started reading Heidegger’s Being and Time (1961 [1927]) and Garfinkel’s Studies in Ethnomethodology (1984 [1967]). At first I found these texts impenetrable, not only because of their irreducible difficulty but also because I was still tacitly attempting to read everything as a specification for a technical mechanism. That was the only protocol of reading that I knew, and it was hard even to conceptualize the possibility of alternatives. (Many technical people have observed that phenomenological texts, when read as specifications for technical mechanisms, sound like mysticism. This is because Western mysticism, since the great spiritual forgetting of the later Renaissance, is precisely a variety of mechanism that posits impossible mechanisms.) My first intellectual breakthrough came when, for reasons I do not recall, it finally occurred to me to stop translating these strange disciplinary languages into technical schemata, and instead simply to learn them on their own terms.

What’s quite frustrating for somebody who is approaching this problem from a slightly broader liberal arts background than Agre did is that he is writing about encounters with only one of several different phenomenological traditions–the Heideggerean one–that have made it so successfully into American academic HCI.

This is where Don Ihde’s work is great: he is explicitly engaged with a much wider swathe of the Continental canon. In doing so, he goes to the root of phenomenology, Husserl, and, I believe most significantly, Merleau-Ponty.

Merleau-Ponty’s Phenomenology of Perception is the kind of serious, monumental work that nobody in the U.S. bothers to read because it is difficult for them to think about. When humanities education is a form of consumerism, it’s much more fun to read, I don’t know, Haraway. But as a theoretical work that combines the phenomenological tradition with empirical psychology in a way that is absolutely and always about embodiment–all the particularities of being a body and what that means for our experiences of the world–you can’t beat him.

Because Merleau-Ponty is engaged mainly with perception and praxis, rather than hermeneutics (the preoccupation of Heidegger), he is able to come up with a much more muscular account of lived experience with machines without having to dress it up in terminology about ‘cyborgs’. This excerpt, from Ihde, is illustrative:

The blind man’s tool has ceased to be an object for him, and is no longer perceived for itself; its point has become an area of sensitivity, extending the scope and active radius of touch, and providing a parallel to sight. In the exploration of things, the length of the stick does not enter expressly as a middle term: The blind man is rather aware of it through the position of objects than the position of objects through it.

In my view, it’s Merleau-Ponty’s influence that most sets up Ihde to present a productive view of instrumental realism in science, based on the role of instruments in the perception and praxis of science. This is what we should be building on when we discuss the “philosophy of data science” and other software-driven research.

Dreyfus’s (1976) famous critique of AI drew a lot on Merleau-Ponty. Dreyfus is not brought up very much in the critical literature any more because (a) many of his critiques were internalized by the AI community and led to new developments that don’t fall prey to the same criticisms, (b) people are building all kinds of embodied robots now, and (c) the “Strong AI” program, of building AI that is so much like a human mind, has not been what’s been driving AI recently: industrial applications that scale far beyond the human mind are.

So it may be that Merleau-Ponty is not used as a phenomenological basis for studying AI and technology now because his work is successfully about lived experience yet does not imply that the literature of some more purely hermeneutic field of inquiry is separately able to underwrite the risks of technical practice. If instruments are an extension of the body, then the one who uses those instruments is responsible for them. That would imply that, for example, Zuckerberg is not an uncritical technologist who has built an autonomous system that is poorly designed because of the blind spots of engineering practice, but rather that he is the responsible actor leading the assemblage that is Facebook as an extension of himself.

Meanwhile, technical practice (I repeat myself) has changed. Agre laments that “[f]ormal reason has an unforgiving binary quality — one gap in the logic and the whole thing collapses — but this phenomenological language was more a matter of degree”. Indeed, when AI was developing along the lines of “formal reason” in the sense of axiomatic logic, this constraint would be frustrating. But in the decades since Agre was working, AI practice has become much more a “matter of degree”: it is highly statistical and probabilistic, depending on very broadly conceived spaces of representation that tune themselves based on many minute data points. Given the differences between “good old fashioned AI” based on logical representation and contemporary machine learning, it’s just bewildering when people raise these old critiques as if they are still meaningful and relevant to today’s practice. And yet the themes resurface again and again in the pitched battles of interdisciplinary warfare. The Heideggereans continue to renounce mathematics, formalism, technology, etc. as a practice in itself in favor of vague humanism. There’s a new articulation of this agenda every year, under different political guises.

Telling is how Agre, who began the journey trying to understand how to make a contribution to a technical field, winds up convincing himself that there are a lot of great academic papers to be written with no technical originality or relevance.

When I tried to explain these intuitions to other AI people, though, I quickly discovered that it is useless to speak nontechnical languages to people who are trying to translate these languages into specifications for technical mechanisms. This problem puzzled me for years, and I surely caused much bad will as I tried to force Heideggerian philosophy down the throats of people who did not want to hear it. Their stance was: if your alternative is so good then you will use it to write programs that solve problems better than anybody else’s, and then everybody will believe you. Even though I believe that building things is an important way of learning about the world, nonetheless I knew that this stance was wrong, even if I did not understand how.

I now believe that it is wrong for several reasons. One reason is simply that AI, like any other field, ought to have a space for critical reflection on its methods and concepts. Critical analysis of others’ work, if done responsibly, provides the field with a way to deepen its means of evaluating its research. It also legitimizes moral and ethical discussion and encourages connections with methods and concepts from other fields. Even if the value of critical reflection is proven only in its contribution to improved technical systems, many valuable criticisms will go unpublished if all research papers are required to present new working systems as their final result.

This point is echoed almost ten years later by another importer of ethnomethodological methods into technical academia, Dourish (2006). Today, there are academic footholds for critical work about technology, and some people write a lot of papers about it. More power to them, I guess. There is now a rarefied field of humanities scholarship in this tradition.

But when social relations truly are mediated by technology in myriad ways, it is perhaps not wrong to pursue lines of work that have more practical relevance. Doing this requires, in my view, a commitment to mathematical rigor and getting one’s hands “dirty” with the technology itself, when appropriate. I’m quite glad that there are venues to pursue these lines now. I am somewhat disappointed and annoyed that I have to share these spaces with Heideggereans, whom I just don’t see as adding much beyond the recycling of outdated tropes.

I’d be very excited to read more works that engage with Merleau-Ponty and work that builds on him.

References

Agre, P. E. (1997). Lessons learned in trying to reform AI. In Social Science, Technical Systems, and Cooperative Work: Beyond the Great Divide, 131. (link)

Dourish, P. (2006, April). Implications for design. In Proceedings of the SIGCHI conference on Human Factors in computing systems (pp. 541-550).

Duguid, P. (2012). On Rereading Suchman and Situated Action. Le Libellio d’AEGIS, 8(2), 3-11.

Dreyfus, H. (1976). What computers can’t do.

Ihde, D. (1991). Instrumental realism: The interface between philosophy of science and philosophy of technology (Vol. 626). Indiana University Press.

Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge University Press.

Winograd, T. & Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Intellect Books.

Is there hypertext law? Is there Python law?

I have been impressed with Hildebrandt’s analysis of the way particular technologies provide the grounds for different forms of institutions. Looking into the work of Don Ihde, who I gather is a pivotal thinker in this line of reasoning, I find the ‘postphenomenological’ and ‘instrumental realist’ position very compelling. Laurence Diver’s work on digisprudence, which follows in this vein, looks generative.

In my encounters with this work, I have also perceived there to be gaps and discrepancies in the texture of the argument. There is something uncanny about reading material that is, perceptually, almost correct. Either I am in error, or it is.

One key difference seems to be about the attitude towards mathematical or computational formalism. This is chiefly, I sense, truly an attitude, in the sense of an emotional difference. Scholars in this area will speak, in personal communication, of being “wary” or “afraid”. It’s an embodied reaction which orients their rhetoric. It is shared with many other specifically legal scholars. In the gestalt of these arguments, the legal scholar will refer to philosophies of science and/or technology to justify a distance between lived reality, the lifeworld, and artifice.

Taking a somewhat different perspective, there are other ways to consider the relationship between formalism, science, and fact, even when taking seriously the instrumental realist position. It is noteworthy, I believe, that this field of scholarship is so adamantly Latourian, and that Latour has succeeded in anathematizing Bourdieu. I now see more clearly how Science of Science and Reflexivity–both a refutation of Latour and a lament of how the capture of institutional power (such as nation-state provided research funding) distorts the autonomous and legitimizing processes of science–is really all one argument. Latour, despite the wrongness of so much of his early work, which is now so widely cited, became a powerful figure. The better argument will only win in time.

Bourdieu, it should be noted, is an instrumental realist about science, though he may not have been aware of Ihde and that line of discourse. He also saw the connection between formalism and instrumentation that seems to elude the postphenomenologist legal scholars. Formalism and instrumentation are both a form of practical “automation” which, if we take the instrumental realists seriously (and we should), winds up enabling the body, understood as perception-praxis, to see and know in different ways. Bourdieu, who obviously has read Foucault but improves on him, accepts the perception-praxis view of the body and socializes it through the concept of the habitus, which is key to his analysis of the sociology of science.

But I digress. What I have been working towards is the framing of the questions in the title. To recap, Hildebrandt, in my understanding, makes a compelling case for how the printing press, as a technology, has had specific affordances that have enabled the Rule of Law that is characteristic of constitutional democracy. This Rule of Law, or some descendant of it, remains dominant in Europe, and perhaps this is why, via the Brussels Effect, the EU now stands as the protector of individuals from the encroaching power of machine-learning powered technologies, in the form of Information and Communication Infrastructure (ICI).

This is a fine narrative, though perhaps rather specifically motivated by a small number of high profile regulatory acts. I will not suggest that the narrative overplays anybody’s hand; it is useful as a schematic.

However, I am not sure the analysis is so solid. There seem to be some missing steps in the historical analysis. Which brings me to my first question: what about hypertext? Hypertext is neither the text of the printing press, nor is it a form of machine learning. It is instrumentally dependent on scientific and technological formalism: the HyperText Markup Language (HTML) and the HyperText Transfer Protocol (HTTP) are both formal standards, built instrumentally on a foundation of computation and networking theory and technology. And as a matter of contemporary perception and praxis, it is probably the primary way in which people engage in analysis of law and communication about the law today.
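To make the point concrete, here is a minimal sketch, using only the Python standard library, of how much formalism a single hyperlink depends on: urllib implements the HTTP standard, html.parser implements the HTML standard, and the URL here is only a placeholder.

    # A minimal sketch: hypertext's dependence on formal standards, made
    # visible with only the Python standard library. urllib implements the
    # HTTP standard; html.parser implements the HTML standard.
    # The URL is a placeholder, not a real resource of interest.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collect the href targets of <a> elements, i.e., the hyperlinks."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href")

    parser = LinkExtractor()
    parser.feed(urlopen("https://example.com").read().decode("utf-8"))
    print(parser.links)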

So, what about it? Doesn’t this example show a contradiction at the heart of this instrumental realist legal scholarship?

The follow-up question is about another class of digital "languages": software source code. Python, for example. These, even more than HyperText, are formalisms, with semantics guaranteed by an interpreter or compiler. But these semantics are in a sense legislated via the Python Enhancement Proposal (PEP) process, and of course any particular Python application or software practice may be designed and mandated through a wide array of institutional mechanisms before being deployed to users.
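A concrete case of this legislation: PEP 238 (2001) amended the semantics of the division operator itself, so that the very meaning of an expression like 1/2 was changed through the PEP process.

    # The meaning of the division operator was amended through the PEP
    # process: under Python 2 semantics, 1/2 evaluated to 0 (floor
    # division); PEP 238 (2001) made / true division and introduced //
    # for floor division, the default behavior since Python 3.
    print(1 / 2)   # 0.5 under Python 3 semantics
    print(1 // 2)  # 0, via the floor-division operator introduced by PEP 238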

I would look forward to work on these subjects coming from Hildebrandt's CUHOBICOL research group, but for the fact that these technologies (which may belie the ideology motivating the project!) are excluded by the very system of categories the project invokes to classify different kinds of regulatory systems. According to the project web site (written, like all web sites, in HyperText), there are three (only three?) kinds of normativity: text-driven normativity, based in the printing press; data-based normativity, the normativity of feedback, once based in cybernetic engineering and now based in machine learning; and code-based normativity. The last category is defined in terms of code's immutability, which is rather alien to anybody who writes software code and has to deal with how it changes all the time. Moreover, the project's aim is to explore code-based normativity through blockchain applications. I understand that gesturing at blockchain technology is a nice way to spice up a funding proposal. But by seeing normativity in these terms, many intermediate technologies, and therefore a broad technical design space of normative technology, are excluded from analysis.

On descent-based discrimination (a reply to Hanna et al. 2020)

In what is likely to be a precedent-setting case, California regulators filed a suit in the federal court on June 30 against Cisco Systems Inc, alleging that the company failed to prevent discrimination, harassment and retaliation against a Dalit engineer, anonymised as “John Doe” in the filing.

The Cisco case bears the burden of making anti-Dalit prejudice legible to American civil rights law as an extreme form of social disability attached to those formerly classified as “Untouchable.” Herein lies its key legal significance. The suit implicitly compares two systems of descent-based discrimination – caste and race – and translates between them to find points of convergence or family resemblance.

A. Rao, link

There is not much I can add to this article about caste-based discrimination in the U.S. In the lawsuit, a team of high-caste South Asians in California is alleged to have discriminated against a Dalit engineer coworker. The work of the lawsuit is to make caste-based discrimination legible to American civil rights law. It, correctly in my view, draws the connection to race.

This illustrative example prompts me to respond to Hanna et al.’s 2020 “Towards a critical race methodology in algorithmic fairness.” This paper by a Google team included a serious, thoughtful consideration of the argument I put forward with my co-author Bruce Haynes in “Racial categories in machine learning”. I like the Hanna et al. paper, think it makes interesting and valid points about the multidimensionality of race, and am grateful for their attention to my work.

I also disagree with some of their characterization of our argument and one of the positions they take. For some time I’ve intended to write a response. Now is a fine time.

First, a quibble: Hanna et al. describe Bruce D. Haynes as a “critical race scholar” and while he may have changed his mind since our writing, at the time he was adamant (in conversation) that he is not a critical race scholar, but that “critical race studies” refers to a specific intellectual project of racial critique that just happens to be really trendy on Twitter. There are lots and lots of other ways to study race critically that are not “critical race studies”. I believe this point was important to Bruce as a matter of scholarly identity. I also feel that it’s an important point because, frankly, I don’t find a lot of “critical race studies” scholarship persuasive and I probably wouldn’t have collaborated as happily with somebody of that persuasion.

So the fact that Hanna et al. explicitly position their analysis in "critical race" methods is a signpost that they are actually trying to accomplish a much more specifically disciplinarily informed project than we were. Sadly, they did not get into the question of how "critical race methodology" differs from other methodologies one might use to study race. That's too bad, as it supports what I feel is a stifling hegemony that particular discourse has over discussions of race and technology.

The Google team is supportive of the most important contribution of our paper–that racial categories are problematic and that this needs to be addressed in the fairness in AI literature. They then go on to argue against our proposed solution of "using an unsupervised machine learning method to create race-like categories which aim to address 'historical racial segregation without reproducing the political construction of racial categories'" (their rendering). I will defend our solution here.

Their first claim:

First, it would be a grave error to supplant the existing categories of race with race-like categories inferred by unsupervised learning methods. Despite the risk of reifying the socially constructed idea called race, race does exist in the world, as a way of mental sorting, as a discourse which is adopted, as a social thing which has both structural and ideological components. In other words, although race is socially constructed, race still has power. To supplant race with race-like categories for the purposes of measurement sidesteps the problem.

This paragraph does feel very “critical race studies” to me, in that it makes totalizing claims about the work race does in society in a way that precludes the possibility of any concrete or focused intervention. I think they misunderstand our proposal in the following ways:

  • We are not proposing that, at a societal and institutional level, we institute a new, stable system of categories derived from patterns of segregation. We are proposing that, ideally, temporary quasi-racial categories be derived dynamically from data about segregation, in a way that destabilizes the social mechanisms that reproduce racial hierarchy, reducing the power of those categories (see the illustrative sketch after this list).
  • This is proposed as an intervention to be adopted by specific technical systems, not at the level of hegemonic political discourse. It is a way of formulating an anti-racist racial project by undermining the way categories are maintained.
  • Indeed, the idea is to sidestep the problem, in the sense that it is an elegant way to reduce the harm that the problem does. Sidestepping is, after all, a way of avoiding a danger. In this case, that danger is the reification of race in large-scale digital platforms (for example).
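What follows is a hypothetical sketch of the kind of mechanism we had in mind, not the implementation from our paper: networkx's modularity-based community detection stands in, purely for illustration, for whatever unsupervised method a deployed system would use to find clusters of social segregation in a graph.

    # A hypothetical sketch, not the implementation from Benthall & Haynes
    # (2019). networkx's modularity-based community detection stands in for
    # whatever unsupervised method a deployed system would use to find
    # clusters of social segregation in a graph.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def quasi_racial_categories(G: nx.Graph) -> dict:
        """Map each node to the index of its segregated cluster.

        The labels are transient inputs to a downstream fairness
        intervention, not new public racial categories.
        """
        communities = greedy_modularity_communities(G)
        return {node: i for i, group in enumerate(communities) for node in group}

    # Toy usage: a barbell graph is two dense clusters joined by a thin
    # bridge, a cartoon of a socially segregated network.
    G = nx.barbell_graph(5, 1)
    print(quasi_racial_categories(G))

Because the categories are recomputed from current data each time, they would drift or dissolve as segregation patterns change, which is exactly the destabilizing property the proposal is after.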

Next, they argue:

Second, supplanting race with race-like categories depends highly on context, namely how race operates within particular systems of inequality and domination. Benthall and Haynes restrict their analysis to that of spatial segregation, which is, to be sure, an important and active research area and subject of significant policy discussion (e.g. [76, 99]). However, that metric may appear illegible to analyses pertaining to other racialized institutions, such as the criminal justice system, education, or employment (although one can readily see their connections and interdependencies). The way that race matters or pertains to particular types of structural inequality depends on that context and requires its own modes of operationalization.

Here, the Google team takes the anthropological turn and, like many before them, suggests that a general technical proposal is insufficient because it is not sufficiently contextualized. Besides echoing the general problem of the ineffectualness of anthropological methods in technology ethics, they also mischaracterize our paper by saying we restrict our analysis to spatial segregation. This is not true: in the paper we generalize our analysis to social segregation, as on a social network graph. Naturally, we would (a) be interested in and open to other systems of identifying race as a feature of social structure, and (b) want to tailor the data over which any operationalization technique is applied, where appropriate, to technical and functional context. At the same time, we are on quite solid ground in saying that race is structural and systemic, and in a sense defined at a holistic societal level, even as it has ramifications in, and is impacted by, the micro and contextual level as well. As we approach the problem from a structural sociological perspective, we can imagine a structural technical solution. This is an advantage of the method over a more anthropological one.

Third:

At the same time we focus on the ontological aspects of race (what is race, how is it constituted and imagined in the world), it is necessary to pay attention to what we do with race and measures which may be interpreted as race. The creation of metrics and indicators which are race-like will still be interpreted as race.

This is a strange criticism given that one of the potential problems with our paper is that the quasi-racial categories we propose are not interpretable. The authors seem to think that our solution involves the institution of new quasi-racial categories at the level of representation or discourse. That's not what we've proposed. We've proposed a design for a machine learning system which, we'd hope, would be understood well enough by its engineers to work as an intervention. Indeed, the correlation of the quasi-racial categories with socially recognized racial ones is important if they are to ground fairness interventions; the purpose of our proposed solution is narrowly to allow for these interventions without the reification of the categories.

Enough defense. There is a point the Google team insists on which strikes me as somewhat odd, and which to me signals a further weakness of their hyper-contextualized method: its inability to generalize beyond the hermeneutic cycles of "critical race theory".

Hanna et al. list several (seven) different “dimensions of race” based on different ways race can be ascribed, inferred, or expressed. There is, here, the anthropological concern with the individual body and its multifaceted presentations in the complex social field. But they explicitly reject one of the most fundamental ways in which race operates at a transpersonal and structural level, which is through families and genealogy. This is well-intentioned but ultimately misguided.

Note that we have excluded "racial ancestry" from this table. Geneticists, biomedical researchers, and sociologists of science have criticized the use of "race" to describe genetic ancestry within biomedical research [40, 49, 84, 122], while others have criticized the use of direct-to-consumer genetic testing and its implications for racial and ethnic identification [15, 91, 113]

In our paper, we take pains to point out responsibly how many aspects of race, such as phenotype, nationality (through citizenship rules), and class signifiers (through inheritance), are connected with ancestry. We, of course, do not mean to equate ancestry with race. Nor, especially, are we saying that there are genetic racialized qualities besides perhaps those associated with phenotype. We are also not saying that direct-to-consumer genetic test data is what institutions should be basing their inference of quasi-racial categories on. Nothing like that.

However, speaking for myself, I believe that an important aspect of how race functions at a social structural level is how it implicates relations of ancestry. A. Rao perhaps puts the point better: race is a system of inherited privilege, and racial discrimination is more often than not discrimination based on descent.

Understanding this about race allows us to see what race has in common with other systems of categorical inequality, such as the caste system. And here was a large part of the point of offering an algorithmic solution: to suggest a system for identifying inequality that transcends the logic of what is currently recognized within the discourse of "critical race theory" and anticipates forms of inequality and discrimination that have not yet been so politically recognized. This will increasingly become an issue as a pluralistic society (or the user base of an on-line platform) interacts with populations whose categorical inequalities have histories and origins different from the U.S. racial system. Though our paper used African-Americans as a referent group, the scope of our proposal was intentionally much broader.

References

Benthall, S., & Haynes, B. D. (2019, January). Racial categories in machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 289-298).

Hanna, A., Denton, E., Smart, A., & Smith-Loud, J. (2020, January). Towards a critical race methodology in algorithmic fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 501-512).

Notes about “Data Science and the Decline of Liberal Law and Ethics”

Jake Goldenfein and I have put up on SSRN our paper, "Data Science and the Decline of Liberal Law and Ethics". I've mentioned it on this blog before as something I'm excited about. It's also been several months since we finalized it, and I wanted to quickly jot down some notes about it, based on considerations going into it and since then.

The paper was the result of a long and engaged collaboration with Jake which started from a somewhat different place. We considered the question, “What is sociopolitical emancipation in the paradigm of control?” That was a mouthful, but it captured what we were going for:

  • Like a lot of people today, we are interested in the political project of freedom. Not just freedom in narrow, libertarian senses that have proven to be self-defeating, but in broader senses of removing social barriers and systems of oppression. We were ambivalent about the form that would take, but figured it was a positive project almost anybody would be on board with. We called this project emancipation.
  • Unlike a certain prominent brand of critique, we did not begin from an anthropological rejection of the realism of foundational mathematical theory from STEM and its application to human behavior. In this paper, we did not make the common move of suggesting that the source of our ethical problems is one that can be solved by insisting on the terminology or methodological assumptions of some other discipline. Rather, we took advances in, e.g., AI as real scientific accomplishments that are telling us how the world works. We called this scientific view of the world the paradigm of control, due to its roots in cybernetics.

I believe our work is making a significant contribution to the "ethics of data science" debate because it is quite rare to encounter work that is engaged with both projects. It's common to see STEM work with no serious moral commitments or valence. And it's common to see the delegation of what we would call emancipatory work to anthropological and humanistic disciplines: the STS folks, the media studies people, even critical X (race, gender, etc.) studies. I've discussed the limitations of this approach, however well-intentioned, elsewhere. Often, these disciplines argue that the "unethical" aspect of STEM lies in its methods, discourses, etc. To analyze things in terms of their technical and economic properties is, on this view, to lose the essence of ethics, which is aligned with anthropological methods grounded in respectful, phenomenological engagement with their subjects.

This division of labor between STEM and anthropology has, in my view (I won’t speak for Jake) made it impossible to discuss ethical problems that fit uneasily in either field. We tried to get at these. The ethical problem is instrumentality run amok because of the runaway economic incentives of private firms combined with their expanded cognitive powers as firms, a la Herbert Simon.

This is not a terribly original point and we hope it is not, ultimately, a fringe political position either. If Martin Wolf can write for the Financial Times that there is something threatening to democracy about “the shift towards the maximisation of shareholder value as the sole goal of companies and the associated tendency to reward management by reference to the price of stocks,” so can we, and without fear that we will be targeted in the next red scare.

So what we are trying to add is this: there is a cognitivist explanation for why firms can become so enormously powerful relative to individual "natural persons", one that is entirely consistent with the STEM foundations that have become dominant, most notably in places like UC Berkeley, as "data science". And, we want to point out, the consequences of that knowledge, which we take to be scientific, run counter to the liberal paradigm of law and ethics. This paradigm, grounded in individual autonomy and privacy, is largely the paradigm animating anthropological ethics! So we are, a bit obliquely, explaining why the data science ethics discourse has gelled in the ways that it has.

We are not satisfied with the current state of 'data science ethics' because, to the extent that it clings to liberalism, we fear that it misses and even obscures the point, which can best be understood in a different paradigm.

We left unfinished the hard work of figuring out what a new, alternative ethical paradigm that took cognitivism, statistics, and so on seriously would look like. There are many reasons, beyond the conference publication page limit, why we were unable to complete the project. The first of these is that, as I've been saying, it's terribly hard to convince anybody that this is a project worth working on in the first place. Why? My view of this may be too cynical, but my explanations are that either (a) this is an interdisciplinary third rail because it upsets the balance of power between different academic departments, or (b) it is an ideological third rail because it successfully identifies a contradiction in the current sociotechnical order in a way that no individual is incentivized to recognize, because that order incentivizes individuals to disperse criticism of its core institutional logic of corporate agency, or (c) corporate cognition is so hard for any individual to conceive of, because it exceeds the capacity of human understanding, that speaking in this way sounds utterly speculative to a lot of people. The problem is that it requires attributing cognitive and adaptive powers to social forms, and a successful science of social forms is, at best, in the somewhat gnostic domain of complex systems research.

Complex systems researchers are rarely engaged in technology policy, but I think that is the frontier.

References

Benthall, Sebastian and Goldenfein, Jake, Data Science and the Decline of Liberal Law and Ethics (June 22, 2020). Ethics of Data Science Conference – Sydney 2020 (forthcoming). Available at SSRN: https://ssrn.com/abstract=

from morality to economics: some stuff about Marx for Tapan Parikh

I work on a toolkit for heterogeneous agent structural modeling in Economics, Econ-ARK. In this capacity, I work with the project’s creators, who are economists Chris Carroll and Matt White. I think this project has a lot of promise and am each day more excited about its potential.
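For the curious, here is a minimal sketch of what working with the toolkit looks like. The class and attribute names follow the HARK library's documented API as I understand it, default parameters are used throughout, and this is meant only as an illustration, not a tutorial.

    # A minimal sketch of the kind of model the toolkit supports; class and
    # attribute names follow HARK's documented API as I understand it, and
    # default parameters are used throughout.
    from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType

    agent = IndShockConsumerType()   # a consumer facing idiosyncratic income shocks
    agent.solve()                    # solve the dynamic program by backward induction
    cFunc = agent.solution[0].cFunc  # optimal consumption as a function of market resources
    print(cFunc(1.0))                # consumption when normalized market resources equal 1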

I am also often in academic circles where it's considered normal to just insult the entire project of economics out of hand. I hear some empty, shallow snarking about economists about once every two weeks. I find this kind of professional politics boring and distracting. It's also often ignorant. I wanted to connect a few dots to try to remedy the situation, while also noting some substantive points that I think fill out some historical context.

Tracking back to this discussion of morality in the Western philosophical tradition and what challenges it today, the focal character there was Immanuel Kant, who for the sake of argument espoused a model of morality based on universal properties of a moral agent.

Tapan Parikh has argued (in personal communications) that I am “a dumb ass” for using Kant in this way, because Kant is on the record for writing some very racist things. I feel I have to address this point. No, I’m not going to stop working with the ideas from the Western philosophical canon just because so many of them were racist. I’m not a cancel culturist in any sense. I agree with Dave Chappelle on the subject of Louis C.K., for example.

However, it is actually essential to know whether or not racism is a substantive, logical problem with Kant's philosophy. I'll defer to others on this point. A quick Googling of the topic seems to indicate one of two things. Either Kant was inconsistent, espousing universalist morality while remaining a racist, which tells us more about Kant the person than about universalist morality, the universalist morality transcending Kant's human failings in this case (Allais, 2016); or Kant actually became less racist during the period in which he was most philosophically productive, which was late in his life (Kleingeld, 2007). I like the latter story better: Kant, being an 18th-century German, was racist as hell; then he thought about it a bit harder, developed a universalist moral system, and became, as a consequence, less racist. That seems to be a positive endorsement of what we now call Kantian morality, which is a product of that later period and not of the earlier, virulently racist period.

Having hopefully settled that question, or at least smoothed it over sufficiently to move on, we can build in more context. Everybody knows this sequence:

Kant -> Hegel -> Marx

Kant starts a transcendental dialectic as a universalist moral project. Hegel historicizes that dialectic, in the process taking into serious consideration the Haitian rebellion, which inspires his account of the Master/Slave dialectic, which is quite literally about slavery and how it is undone by its internal contradictions. The problem, to make a long story short, is that the Master winds up being psychologically dependent on the Slave, and this gives the Slave power over the Master. The Slave's rebellion is successful, as has happened in history many times. This line of thinking results in, if my notes are right (they might not be), Hegel's endorsement of something that looks vaguely like a Republic as the end-of-history.

He dies in 1831, and Marx picks up this thread, but famously thinks the historical dialectic is material, not ideal. The Master/Slave dialectic is transposed onto the relationship between Capital and the Proletariat. Capital exploits the Proletariat, but needs the Proletariat. This is what enables the Proletariat to rebel. Once the Proletariat rebel, says Marx, everybody will be on the same level and there will be world peace. I.e., communism is the material manifestation of a universalist morality. This is what Marx inherits from Kant.

But wait, you say. Kant and Hegel were both German Idealists. Where did Marx get this materialist innovation? It was probably his own genius head, you say.

Wrong! Because there’s a thread missing here.

Recall that it was David Hume, a Scotsman, whose provocative skeptical ideas roused Kant from his “dogmatic slumber”. (Historical question: Was it Hume who made Kant “woke” in his old age?) Hume was in the line of Anglophone empiricism, which was getting very bourgey after the Whigs and Locke and all that. Buddies with Hume is Adam Smith who was, let’s not forget, a moral philosopher.

So while Kant is getting very transcendental, Smith is realizing that in order to do any serious moral work you have to start looking at material reality, and so he starts Economics in Britain.

This next part I didn't really realize the significance of until digging into it. Smith dies in 1790, just around when Kant is completing the moral project he's famous for. At that time, the next major figure is 18, coming of age. It's David Ricardo: a Sephardic Jew turned Unitarian, a Whig, a businessman who makes a fortune speculating on the Battle of Waterloo, who winds up buying a seat in Parliament because you could do that then, and who also winds up doing much of the best foundational work in economics, including developing the labor theory of value. He was also, incidentally, an abolitionist.

Which means that to complete one’s understanding of Marx, you have to also be thinking:

Hume -> Smith -> Ricardo -> Marx

In other words, Marx is the unlikely marriage of German Idealism, with its continued commitment to universalist ethics, with British empiricism which is–and I keep having to bring this up–weak on ethics. Empiricism is a bad way of building an ethical theory and it’s why the U.S. has bad privacy laws. But it’s a good way to build up an economic materialist view of history. Hence all of Marx’s time looking at factories.

It’s worth noting that Ricardo was also the one who came up with the idea of Land Value Taxation (LVT), which later Henry George popularized as the Single Tax in the late 19th/early 20th century. So Ricardo really is the pivotal figure here in a lot of ways.

In future posts, I hope to work out more of the background of economics and its connection to moral philosophy. In addition to trying to make the connections to my work on Econ-ARK, there are also resonances coming up in the policy space. For example, the Law and Political Economy community has been rather explicitly trying to bring "political economy"–in the sense of Smith, Ricardo, and Marx–back into legal scholarship, with a particular aim at regulating the Internet. These threads are braiding together.

References

Allais, L. (2016). Kant's racism. Philosophical Papers, 45(1-2), 1-36.

Kleingeld, P. (2007). Kant's second thoughts on race. The Philosophical Quarterly, 57(229), 573-592.