Digifesto

Category: sociology

Response to Abdurahman

Abdurahman has responded to my response to her tweet about my paper with Bruce Haynes, and invited me to write a rebuttal. While I’m happy to do so–arguing with intellectuals on the internet is probably one of my favorite things to do–it is not easy to rebut somebody with whom you have so little disagreement.

Abdurahman makes a number of points:

  1. Our paper, “Racial categories in machine learning”, omits the social context in which algorithms are enacted.
  2. The paper ignores whether computational thinking “acolytes like [me]” should be in the position of determining civic decisions.
  3. That the ontological contributions of African American Vernacular English (AAVE) are not present in the FAT* conference and that constitutes a hermeneutic injustice. (I may well have misstated this point).
  4. The positive reception to our paper may be due to its appeal to people with a disingenuous, lazy, or uncommitted racial politics.
  5. “Participatory design” does not capture Abdurahman’s challenge of “peer” design. She has a different and more broadly encompassing set of concerns: “whose language is used, whose viewpoint and values are privileged, whose agency is extended, and who has the right to frame the “problem”.”
  6. That our paper misses the point about predictive policing, from the perspective of people most affected by disparities in policing. Machine learning classification is not the right frame of the problem. The problem is an unjust prison system and, more broadly the unequal distribution of power that is manifested in the academic discourse itself. “[T]he problem is framed wrongly — it is not just that classification systems are inaccurate or biased, it is who has the power to classify, to determine the repercussions / policies associated thereof and their relation to historical and accumulated injustice?”

I have to say that I am not a stranger to most of this line of thought and have great sympathy for the radical position expressed.

I will continue to defend our paper. Re: point 1, a major contribution of our paper was that it shed light on the political construction of race, especially race in the United States, which is absolutely part of “the social context in which algorithmic decision making is enacted”. Abdurahman must be referring to some other aspect of the social context. One problem we face as academic researchers is that the entire “social context” of algorithmic decision-making is the whole frickin’ world, and conference papers are about 12 pages or so. I thought we did a pretty good job of focusing on one, important and neglected aspect of that social context, the political formation of race, which as far as I know has never previously been addressed in a computer science paper. (I’ve written more about this point here).

Re: point 2, it’s true we omit a discussion of the relevance of computational thinking to civic decision-making. That is because this is a safe assumption to make in a publication to that venue. I happen to agree with that assumption, which is why I worked hard to submit a paper to that conference. If I didn’t think computational thinking was relevant, I probably would be doing something else with my time. That said, I think it’s wildly flattering and inaccurate to say that I, personally, have any control over “civic decision-making”. I really don’t, and I’m not sure why you’d think that, except for the erroneous myth that computer science research is, in itself, political power. It isn’t; that’s a lie that the tech companies have told the world.

I am quite aware (re: point 3) that my embodied and social “location” is quite different from Abdurahman’s. For example, unlike Abdurahman, it would be utterly pretentious for me to posture or “front” with AAVE. I simply have no access to its ontological wisdom, and could not be the conduit of it into any discourse, academic or informal. I have and use different resources; I am also limited by my positionality like anybody else. Sorry.

“Woke” white liberals potentially liking our argument? (Re: point 4) Fair. I don’t think that means our argument is bad or that the points aren’t worth making.

Re: point 5: I must be forgiven for not understanding the full depth of Abdurahman’s methodological commitments on the basis of a single tweet. There are a lot of different design methodologies, and their boundaries are disputed. I see now that the label of “participatory design” is not sufficiently critical or radical to capture what she has in mind. I’m pleased to see she is working with Tap Parikh on this, who has a lot of experience with critical/radical HCI methods. I’m personally not an expert on any of this stuff. I do different work.

Re: point 6: My personal opinions about the criminal justice system did not make it into our paper, which again was a focused scientific article trying to make a different point. Our paper was about how racial categories are formed, how they are unfair, and how a computational system designed for fairness might address that problem. I agree that this approach is unlikely to have much meaningful impact on the injustices of the cradle-to-prison system in the United States, the prison-industrial complex, or the like. Based on what I’ve heard so far, the problems there would be best solved by changing the ways judges are trained. I don’t have any say in that, though–I don’t have a law degree.

In general, while I see Abdurahman’s frustrations as valid (of course!), I think it’s ironic and frustrating that she targets our paper as an emblem of the problems with the FAT* conference, with computer science, and with the world at large. First, our paper was not a “typical” FAT* paper; it was a very unusual one, positioned to broaden the scope of what’s discussed there, motivated in part by my own criticisms of the conference the year before. It was also just one paper: there’s tons of other good work at that conference, and the conversation is quite broad. I expect the best solution to the problem is to write and submit different papers. But it may also be that other venues are better for addressing the problems raised.

I’ll conclude that many of the difficulties and misunderstandings that underlie our conversation are a result of a disciplinary collapse that is happening because of academia’s relationship with social media. Language’s meaning depends on its social context, and social media is notoriously a place where contexts collapse. It is totally unreasonable to argue that everybody in the world should be focused on what you think is most important. In general, I think battles over “framing” on the Internet are stupid, and that the fact that these kinds of battles have become so politically prominent is a big part of why our society’s politics are so stupid. The current political emphasis on the symbolic sphere is a distraction from more consequential problems of economic and social structure.

As I’ve noted elsewhere, one reason why I think Haynes’s view of race is refreshing (as opposed to a lot of what passes for “critical race theory” in popular discussion) is that it locates the source of racial inequality in structure–spatial and social segregation–and institutional power–especially, the power of law. In my view, this politically substantive view of race is, if taken seriously, more radical than one based on mere “discourse” or “fairness” and demands a more thorough response. Codifying that response, in computational thinking, was the goal of our paper.

This is a more concrete and specific way of dealing with the power disparities that are at the heart of Abdurahman’s critique. Vague discourse and intimations about “privilege”, “agency”, and “power”, without an account of the specific mechanisms of that power, are weak.

Why STS is not the solution to “tech ethics”

“Tech ethics” are in (1) (2) (3), and a popular refrain at FAT* this year was that sensitivity to social and political context is the solution to the problems of unethical technology. How do we bring this sensitivity to technical design? By using the techniques of Science and Technology Studies (STS), as Dobbe and Ames, as well as Selbst et al. (2019), variously argue. Value Sensitive Design (VSD) (Friedman and Bainbridge, 2004) is one typical STS-branded technique for bringing this political awareness into the design process. In general, there is broad agreement that computer scientists should be working with social scientists when developing socially impactful technologies.

In this blog post, I argue that STS is not the solution to “tech ethics” that it tries to be.

Encouraging computer scientists to collaborate with social science domain experts is a great idea. My paper with Bruce Haynes (1) (2) (3) is an example of this kind of work. In it, we drew from the sociology of race to inform a technical design that addressed the unfairness of racial categories. Significantly, in my view, we did not use STS in our work. Because the social injustices we were addressing were due to broad-reaching social structures and politically constructed categories, we used sociology to elucidate what was at stake and what sorts of interventions would be a good idea.

It is important to recognize that there are many different social sciences dealing with “social and political context”, and that STS, despite its interdisciplinarity, is only one of them. This is easily missed in an interdisciplinary venue in which STS is active, because STS is somewhat activist in asserting its own importance in such venues. STS frequently positions itself as a reminder to blinkered technologists that there is a social world out there. “Let me tell you about what you’re missing!” That’s its shtick. Because of this positioning, STS scholars frequently get a seat at the table with scientists and technologists. It’s a powerful position, in a sense.

What STS scholars tend to ignore is how and when other kinds of social scientists involve themselves in the process of technical design. For example, at FAT* this year there were two full tracks of Economic Models. Economic Models. Economics is a well-established social scientific discipline that has tools for understanding how a particular mechanism can have unintended effects when put into a social context. In economics, this is called “mechanism design”. It addresses what Selbst et al. might call the “Ripple Effect Trap”–the fact that a system in context may have effects that are different from the intentions of its designers. I’ve argued before that wiser economics is something we need in order to better address technology ethics, especially if we are talking about technology deployed by industry, which is most of it! But despite deep and systematic social scientific analysis of secondary and equilibrium effects at the conference, these peer-reviewed works are not acknowledged by STS interventionists. Why is that?

As usual, quantitative social scientists are completely ignored by STS-inspired critiques of technologists and their ethics. That is too bad, because at the scale at which these technologies are operating (mainly, we are discussing civic- or web-scale automated decision making systems that are inherently about large numbers of people), fuzzier debates about “values” and contextualized impact would surely benefit from quantitative operationalization.

The problem is that STS is, at its heart, a humanistic discipline, a subfield of anthropology. Even when STS does not deny the utility, truth, or value of mathematization or quantification outright, as a field of research it is methodologically skeptical of such things. In the self-conception of STS, this methodological relativism is part of its ethnographic rigor. But this ethnographic relativism is more or less entirely incompatible with formal reasoning, which aspires to universal internal validity. At a moralistic level, it is this aspiration to universal internal validity that so bedevils the STS scholar: the mathematics is inherently distinct from an awareness of the social context, because social context can only be understood in its ethnographic particularity.

This is a false dichotomy. There are other social sciences that address social and political context without the restrictive assumptions of STS. Some of these are quantitative, but not all of them are. There are qualitative sociologists and political scientists with great insights into social context who are not disciplinarily allergic to the standard practices of engineering. In many ways, these kinds of social sciences are far more compatible with the process of designing technology than STS! For example, the sociology we draw on in our “Racial categories in machine learning” paper is variously: Gramscian racial hegemony theory, structuralist sociology, Bourdieusian theories of social capital, and so on. Significantly, these theories are not based exclusively on ethnographic method. They come from disciplines that happily mix historical and qualitative scholarship with quantitative research. The object of study is the social world, and part of the purpose of the research is to develop politically useful abstractions from it that generalize and can be measured. This is the form of social science that is compatible with quantitative policy evaluation, the sort of thing you would want to use if, for example, you were assessing the impact of an affirmative action policy.

Given the widely acknowledged truism that public sector technology design often encodes and enacts real policy changes (a point made in Deirdre Mulligan’s keynote), it would make sense to understand the effects of these technologies using the methodologies of policy impact evaluation. That would involve enlisting the kinds of social scientific expertise relevant to understanding society at large!

But that is absolutely not what STS has to offer. STS is, at best, offering a humanistic evaluation of the social processes of technology design. The ontology of STS is flat, and its epistemology and ethics are immediate: the design decision comes down to a calculus of “values” of different “stakeholders”. Ironically, this is a picture of social context that often seems to neglect the political and economic context of that context. It is not an escape from empty abstraction. Rather, it insists on moving from clear abstractions to more nebulous ones, “values” like “fairness”, maintaining that if the conversation never ends and the design never gets formalized, ethics has been accomplished.

This has proven, again and again, to be a rhetorically effective position for research scholarship. It is quite popular among “ethics” researchers who are backed by corporate technology companies. That is quite possibly because the form of “ethics” that STS offers, for all of its calls for political sensitivity, is devoid of political substance. It offers an apples-to-apples comparison of “values” without considering the social origins of those values, or the way those values are grounded in political interests that are not merely about “what we think is important in life” but are real contests over resource allocation. The observation by Ames et al. (2011) that people’s values with respect to technology vary with socio-economic class is a terribly relevant, Bourdieusian lesson in how the standpoint of “values sensitivity” may, when taken seriously, run up against the hard realities of political agonism. I don’t believe STS researchers are truly naive about these points; however, in their rhetoric of design intervention, conducted in labs but isolated from the real conditions of technology firms, there is an idealism that can only survive under the self-imposed severity of STS’s own methodological restrictions.

Independent scholars can take up this position and publish daring pieces, winning the moral high ground. But that is not a serious position to take in an industrial setting, or when pursuing generalizable knowledge about the downstream impact of a design on a complex social system. Those empirical questions require different tools, albeit far more unwieldy ones. Complex survey instruments, skilled data analysis, and substantive social theory are needed to arrive at solid conclusions about the ethical impact of technology.

References

Ames, M. G., Go, J., Kaye, J. J., & Spasojevic, M. (2011, March). Understanding technology choices and values through social class. In Proceedings of the ACM 2011 conference on Computer supported cooperative work (pp. 55-64). ACM.

Friedman, B., & Bainbridge, W. S. (2004). Value sensitive design.

Selbst, A. D., Friedler, S., Venkatasubramanian, S., & Vertesi, J. (2019, January). Fairness and Abstraction in Sociotechnical Systems. In ACM Conference on Fairness, Accountability, and Transparency (FAT*).

All the problems with our paper, “Racial categories in machine learning”

Bruce Haynes and I were blown away by the reception to our paper, “Racial categories in machine learning”. This was a huge experiment in interdisciplinary collaboration for us. We are excited about the next steps in this line of research.

That includes engaging with criticism. One of our goals was to fuel a conversation in the research community about the operationalization of race. That isn’t a question that can be addressed by any one paper or team of researchers. So one thing we got out of the conference was great critical feedback on potential problems with the approach we proposed.

This post is an attempt to capture those critiques.

Need for participatory design

Khadijah Abdurahman, of Word to RI, issued a subtweeted challenge to us to present our paper to the hood. (RI stands for Roosevelt Island, in New York City, the location of the recently established Cornell Tech campus.)

One striking challenge, raised by Khadijah Abdurahman on Twitter, is that we should be developing peer relationships with the communities we research. I read this as a call for participatory design. It’s true this was not part of the process of the paper. In particular, Ms. Abdurahman points to a part of our abstract that uses jargon from computer science.

There are a lot of ways to respond to this comment. The first is to accept the challenge. I would personally love it if Bruce and I could present our research to folks on Roosevelt Island and get feedback from them.

There are other ways to respond that address the tensions of this comment. One is to point out that in addition to being an accomplished scholar of the sociology of race and how it forms, especially in urban settings, Bruce is a black man who is originally from Harlem. Indeed, Bruce’s family memoir shows his deep and well-researched familiarity with the life of marginalized people of the hood. So a “peer relationship” between an algorithm designer (me) and a member of an affected community (Bruce) is really part of the origin of our work.

Another is to point out that we did not research a particular community. Our paper was not human subjects research; it was about the racial categories that are maintained by the U.S. federal government and which pervade society in a very general way. Indeed, everybody is affected by these categories. When I and others who look like me are ascribed “white”, that is an example of these categories at work. Bruce and I were very aware of how different kinds of people at the conference responded to our work, and how it was an intervention in our own community, which is of course affected by these racial categories.

The last point is that computer science jargon is alienating to basically everybody who is not trained in computer science, whether they live in the hood or not. And the fact is we presented our work at a computer science venue. Personally, I’m in favor of universal education in computational statistics, but that is a tall order. If our work becomes successful, I could see it becoming part of, for example, a statistical demography curriculum that could be of popular interest. But this is early days.

The Quasi-Racial (QR) Categories are Not Interpretable

In our presentation, we introduced some terminology that did not make it into the paper. We named the vectors of segregation derived by our procedure “quasi-racial” (QR) vectors, to denote that we were trying to capture dimensions that were race-like, in that they captured the patterns of historic and ongoing racial injustice, without being the racial categories themselves, which we argued are inherently unfair categories of inequality.

First, we are not wedded to the name “quasi-racial” and are very open to different terminology if anybody has an idea for something better to call them.

More importantly, somebody pointed out that these QR vectors may not be interpretable. Given that the conference is not only about Fairness, but also Accountability and Transparency, this critique is certainly on point.

To be honest, I have not yet done the work of surveying the extensive literature on algorithm interpretability to give a nuanced response. I can give two informal responses. The first is that one assumption of our proposal is that there is something wrong with how race and racial categories are intuitively understood. Normal people’s understanding of race is, of course, ridden with stereotypes, implicit biases, false causal models, and so on. If we proposed an algorithm that was fully “interpretable” according to most people’s understanding of what race is, that algorithm would likely have racist or racially unequal outcomes. That’s precisely the problem that we are trying to get at with our work. In other words, when categories are inherently unfair, interpretability and fairness may be at odds.

The second response is that educating people about how the procedure works and why it is motivated is part of what makes its outcomes interpretable. Teaching people about the history of racial categories, and how those categories are both the cause and effect of segregation in space and society, makes the algorithm interpretable. Teaching people about Principal Component Analysis, the algorithm we employ, is part of what makes the system interpretable. We are trying to drop knowledge; I don’t think we are offering any shortcuts.

Principal Component Analysis (PCA) may not be the right technique

An objection from the computer science end of the spectrum was that our proposed use of Principal Component Analysis (PCA) was not well motivated enough. PCA is just one of many dimensionality reduction techniques–why did we choose it in particular? PCA also has many assumptions about the input embedded within it, including that the component vectors of interest are linear combinations of the inputs. What if the best QR representation is a non-linear combination of the input variables? And our use of unsupervised learning, as a general criticism, is perhaps lazy, since in order to validate its usefulness we will need to test it with labeled data anyway. We might be better off with a more carefully calibrated and better motivated alternative technique.

These are all fair criticisms. I am personally not satisfied with the technical component of the paper and presentation. I know the rigor of the analysis is not of the standard that would impress a machine learning scholar and can take full responsibility for that. I hope to do better in a future iteration of the work, and welcome any advice on how to do that from colleagues. I’d also be interested to see how more technically skilled computer scientists and formal modelers address the problem of unfair racial categories that we raised in the paper.

I see our main contribution as the raising of this problem of unfair categories, not our particular technical solution to it. As a potential solution, I hope that it’s better than nothing, a step in the right direction, and provocative. I subscribe to the belief that science is an iterative process and look forward to the next cycle of work.

Please feel free to reach out if you have a critique of our work that we’ve missed. We do appreciate all the feedback!

The secret to social forms has been in institutional economics all along?

A long-standing mystery for me has been about the ontology of social forms (1) (2): under what conditions is it right to call a particular assemblage of people a thing, and why? Most people don’t worry about this; in literatures I’m familiar with it’s easy to take a sociotechnical complex or assemblage, or a company, or whatever, as a basic unit of analysis.

A lot of the trickiness comes from thinking about this as a problem of identifying social structure (Sawyer, 2000; Cederman, 2005). This implies that people are in some sense together and obeying shared norms, and raises questions about whether those norms exist in their own heads or not, and so on. So far I haven’t seen a lot that really nails it.

But what if the answer has been lurking in institutional economics all along? The “theory of the firm” is essentially a question of why a particular social form–the firm–exists as opposed to a bunch of disorganized transactions. The answers that have come up are quite good.

Take for example Holmstrom (1982), who argues that in a situation where collective outcomes depend on individual efforts, individuals will be tempted to free-ride. That makes it beneficial to have somebody monitor the activities of the other people and have their utility be tied to the net success of the organization. That person becomes the owner of the company, in a capitalist firm.
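Holmstrom’s free-rider logic can be made concrete with a toy model (my own sketch, not from the 1982 paper): if output is the sum of efforts and is shared equally, each of n agents internalizes only 1/n of their marginal product, so equilibrium effort falls well below what a residual claimant who keeps the whole marginal product would choose. The quadratic cost function here is an illustrative assumption.

```python
# Toy sketch of Holmstrom-style free-riding in teams.
# Assumptions (mine, for illustration): output y = sum of efforts,
# effort cost c(e) = e**2 / 2, output shared equally among n agents.

def equilibrium_effort(n):
    # Agent i maximizes (1/n) * sum(e) - e_i**2 / 2.
    # First-order condition: 1/n - e_i = 0, so e_i = 1/n.
    return 1.0 / n

def efficient_effort():
    # A residual claimant (the "owner") keeps the whole marginal
    # product, solving 1 - e = 0, so e = 1.
    return 1.0

for n in (1, 2, 5, 10):
    print(f"team size {n}: equilibrium effort {equilibrium_effort(n):.2f}, "
          f"efficient effort {efficient_effort():.2f}")
```

As the team grows, equilibrium effort shrinks toward zero while the efficient level stays fixed, which is the gap that monitoring by a residual claimant is meant to close.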

What’s nice about this example is that it explains social structure based on an efficiency argument; we would expect organizations shaped like this to be bigger and command more resources than others that are less well organized. And indeed, we have many enormous hierarchical organizations in the wild to observe!

Another theory of the firm is Williamson’s transaction cost economics (TCE) theory, which is largely about the make-or-buy decision. If the transaction between a business and its supplier has “asset specificity”, meaning that the asset being traded is specific to the two parties and their transaction, then any investment from either party will induce a kind of ‘lock-in’ or ‘switching cost’ or, in Williamson’s language, a ‘bilateral dependence’. The more of that dependence, the more a free market relationship between the two parties will expose them to opportunistic hazards. Hence, complex contracts, or in the extreme case outright ownership and internalization, tie the firms together.

I’d argue: bilateral dependence and the complex ‘contracts’ that connect entities are very much the stuff of “social forms”. Cooperation between people is valuable; the relation between people who cooperate is valuable as a consequence; and so both parties are ‘structurated’ (to mangle a Giddens term) individually into maintaining the reality of the relation!

References

Cederman, L. E. (2005). Computational models of social forms: Advancing generative process theory. American Journal of Sociology, 110(4), 864-893.

Holmstrom, B. (1982). Moral hazard in teams. The Bell Journal of Economics, 13(2), 324-340.

Sawyer, R. K. (2000). Simulating emergence and downward causation in small groups. In Multi-agent-based simulation (pp. 49-67). Springer Berlin Heidelberg.

Williamson, O. E. (2008). Transaction cost economics. In Handbook of new institutional economics (pp. 41-65). Springer, Berlin, Heidelberg.

For fairness in machine learning, we need to consider the unfairness of racial categorization

Pre-prints of papers accepted to this coming 2019 Fairness, Accountability, and Transparency conference are floating around Twitter. From the looks of it, many of these papers add a wealth of historical and political context, which I feel is a big improvement.

A noteworthy paper, in this regard, is Hutchinson and Mitchell’s “50 Years of Test (Un)fairness: Lessons for Machine Learning”, which puts recent ‘fairness in machine learning’ work in the context of very analogous debates from the 60’s and 70’s that concerned the use of testing that could be biased due to cultural factors.

I like this paper a lot, in part because it is very thorough and in part because it tees up a line of argument that’s dear to me. Hutchinson and Mitchell raise the question of how to properly think about fairness in machine learning when the protected categories invoked by nondiscrimination law are themselves social constructs.

Some work on practically assessing fairness in ML has tackled the problem of using race as a construct. This echoes concerns in the testing literature that stem back to at least 1966: “one stumbles immediately over the scientific difficulty of establishing clear yardsticks by which people can be classified into convenient racial categories” [30]. Recent approaches have used Fitzpatrick skin type or unsupervised clustering to avoid racial categorizations [7, 55]. We note that the testing literature of the 1960s and 1970s frequently uses the phrase “cultural fairness” when referring to parity between blacks and whites.

They conclude that this is one of the areas where there can be a lot more useful work:

This short review of historical connections in fairness suggest several concrete steps forward for future research in ML fairness: Diving more deeply into the question of how subgroups are defined, suggested as early as 1966 [30], including questioning whether subgroups should be treated as discrete categories at all, and how intersectionality can be modeled. This might include, for example, how to quantify fairness along one dimension (e.g., age) conditioned on another dimension (e.g., skin tone), as recent work has begun to address [27, 39].

This is all very cool to read, because this is precisely the topic that Bruce Haynes and I address in our FAT* paper, “Racial categories in machine learning” (arXiv link). The problem we confront in this paper is that the racial categories we are used to using in the United States (White, Black, Asian) originate in the white supremacy that was enshrined into the Constitution when it was formed and perpetuated since then through the legal system (with some countervailing activity during the Civil Rights Movement, for example). This puts “fair machine learning” researchers in a bind: either they can use these categories, which have always been about perpetuating social inequality, or they can ignore the categories and reproduce the patterns of social inequality that prevail in fact because of the history of race.

In the paper, we propose a third option. First, rather than reify racial categories, we propose breaking race down into the kinds of personal features that get inscribed with racial meaning. Phenotype properties like skin type and ocular folds are one such set of features. Another set are events that indicate position in social class, such as being arrested or receiving welfare. Another set are facts about the national and geographic origin of ones ancestors. These facts about a person are clearly relevant to how racial distinctions are made, but are themselves more granular and multidimensional than race.

The next step is to detect race-like categories by looking at who is segregated from each other. We propose an unsupervised machine learning technique that works with the distribution of the phenotype, class, and ancestry features across spatial tracts (as in when considering where people physically live) or across a social network (as in when considering people’s professional networks, for example). Principal component analysis can identify what race-like dimensions capture the greatest amounts of spatial and social separation. We hypothesize that these dimensions will encode the ways racial categorization has shaped the social structure in tangible ways; these effects may include both politically recognized forms of discrimination as well as forms of discrimination that have not yet been surfaced. These dimensions can then be used to classify people in race-like ways as input to fairness interventions in machine learning.
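The PCA step described above can be sketched in a few lines. This is a hypothetical illustration with entirely synthetic data; the feature names, tract counts, and injected "segregation axis" are my inventions, not the paper's data or method details.

```python
import numpy as np

# Sketch of the proposal: run PCA over the distribution of granular
# phenotype, class-marker, and ancestry features across spatial
# tracts, and treat the leading components as candidate
# "quasi-racial" (QR) dimensions. All data here is synthetic.

rng = np.random.default_rng(0)

# Rows: spatial tracts; columns: granular features (e.g. skin-type
# rates, arrest rates, ancestry-origin shares). Invented for demo.
n_tracts, n_features = 200, 6
X = rng.normal(size=(n_tracts, n_features))

# Inject a segregation-like axis: two features co-vary strongly
# across tracts, standing in for historically entangled markers.
axis = rng.normal(size=n_tracts)
X[:, 0] += 2.0 * axis
X[:, 1] -= 2.0 * axis

# PCA via SVD on the centered tract-by-feature matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Leading principal directions: the dimensions capturing the most
# spatial separation, i.e. candidate QR dimensions.
k = 2
qr_dims = Vt[:k]              # (k, n_features) feature loadings
qr_scores = Xc @ qr_dims.T    # project tracts onto QR dimensions

explained = (s ** 2) / (s ** 2).sum()
print("variance explained by top 2 components:", explained[:2].sum())
```

In this toy run the injected axis dominates the first component, which is the hoped-for behavior: the dimension of greatest separation recovers the structure that segregation imposed on the features.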

A key part of our proposal is that race-like classification depends on the empirical distribution of persons in physical and social space, and so the resulting categories are not fixed. This operationalizes the way that race is socially and politically constructed without reifying the categories in terms that reproduce their white supremacist origins.

I’m quite stoked about this research, though obviously it raises a lot of serious challenges in terms of validation.

On “Racialization” (Omi and Winant, 2014)

Notes on Omi and Winant, 2014, Chapter 4, Section: “Racialization”.

Summary

Race is often seen as either an objective category, or an illusory one.

Viewed objectively, it is seen as a biological property, tied to phenotypic markers and possibly other genetic traits. It is viewed as an ‘essence’.
Omi and Winant argue that the concept of ‘mixed-race’ depends on this kind of essentialism, as it implies a kind of blending of essences. This is the view associated with “scientific” racism, most prevalent in the prewar era.

Viewed as an illusion, race is seen as an ideological construct: an epiphenomenon of culture, class, or peoplehood, formed as a kind of "false consciousness", in the Marxist terminology. This view is associated with certain critics of affirmative action who argue that any racial classification is inherently racist.

Omi and Winant are critical of both perspectives, and argue for an understanding of race as socially real and grounded non-reducibly in phenomic markers but ultimately significant because of the social conflicts and interests constructed around those markers.

They define race as: "a concept that signifies and symbolizes social conflicts and interests by referring to different types of human bodies."

The visual aspect of race is irreducible, and it becomes significant when, for example, it is "understood as a manifestation of more profound differences that are situated within racially identified persons: intelligence, athletic ability, temperament, and sexuality, among other traits." These "understandings", which it must be said may be fallacious, "become the basis to justify or reinforce social differentiation."

This process of adding social significance to phenomic markers is, in O&W’s language, racialization, which they define as “the extension of racial meanings to a previously racially unclassified relationship, social practice, or group.” They argue that racialization happens at both macro and micro scales, ranging from the consolidation of the world-system through colonialization to incidents of racial profiling.

Race, then, is a concept that refers to different kinds of bodies by phenotype and the meanings and social practices ascribed to them. When racial concepts are circulated and accepted as 'social reality', racial differences are not dependent on visual difference alone, but take on a life of their own.

Omi and Winant therefore take a nuanced view of what it means for a category to be socially constructed, and it is a view that has concrete political implications. They consider the question, raised frequently, as to whether “we” can “get past” race, or go beyond it somehow. (Recall that this edition of the book was written during the Obama administration and is largely a critique of the idea, which seems silly now, that his election made the United States “post-racial”).

Omi and Winant see this framing as unrealistically utopian and based on the extreme view that race is "illusory". It poses race as a problem, a misconception of the past. A more effective position, they claim, would note that race is an element of social structure, not an irregularity in it. "We" cannot naively "get past it", but also "we" do not need to accept the erroneous conclusion that race is a fixed biological given.

Comments

Omi and Winant's argument here is mainly one about the ontology of social forms. In my view, this question of social form ontology is one of the "hard problems" remaining in philosophy, perhaps equivalent to if not more difficult than the hard problem of consciousness. So no wonder it is such a fraught issue.

The two poles of thinking about race that they present initially, the essentialist view and the epiphenomenal view, had their heyday in particular historical intellectual movements. Proponents of these positions are still popularly active today, though perhaps it’s fair to say that both extremes are now marginalized out of the intellectual mainstream. Despite nobody really understanding how social construction works, most educated people are probably willing to accept that race is socially constructed in one way or another.

It is striking, then, that Omi and Winant’s view of the mechanism of racialization, which involves the reading of ‘deeper meanings’ into phenomic traits, is essentially a throwback to the objective, essentializing viewpoint.
Perhaps there is a kind of cognitive bias, maybe representativeness bias or fundamental attribution bias, which is responsible for the cognitive errors that make racialization possible and persistent.

If so, then the social construction of race would be due as much to the limits of human cognition as to the circulation of concepts. That would explain the temptation to believe that we can 'get past' race, because we can always believe in the potential for a society in which people are smarter and are trained out of their basic biases. But Omi and Winant would argue that this is utopian. Perhaps the wisdom of sociology and social science in general is the conservative recognition of the widespread implications of human limitation. As the social expert, one can take the privileged position of noting that social structure is the result of pervasive cognitive error. That pervasive cognitive error is perhaps a more powerful force than the forces developing and propagating social expertise. Whether it is or is not may be the existential question for liberal democracy.

An unanswered question at this point is whether, if race were broadly understood as a function of social structure, it remains as forceful a structuring element as if it is understood as biological essentialism. It is certainly possible that, if understood as socially contingent, the structural power of race will steadily erode through such statistical processes as regression to the mean. In terms of physics, we can ask whether the current state of the human race(s) is at equilibrium, or heading towards an equilibrium, or diverging in a chaotic and path-dependent way. In any of these cases, there is possibly a role to be played by technical infrastructure. In other words, there are many very substantive and difficult social scientific questions at the root of the question of whether and how technical infrastructure plays a role in the social reproduction of race.

“The Theory of Racial Formation”: notes, part 1 (Cha. 4, Omi and Winant, 2014)

Chapter 4 of Omi and Winant (2014) is “The Theory of Racial Formation”. It is where they lay out their theory of race and its formation, synthesizing and improving on theories of race as ethnicity, race as class, and race as nation that they consider earlier in the book.

This rhetorical strategy of presenting the historical development of multiple threads of prior theory before synthesizing them into something new is familiar to me from my work with Helen Nissenbaum on Contextual Integrity. CI is a theory of privacy that advances prior legal and social theories by teasing out their tensions. This seems to be a good way to advance theory through scholarship. It is interesting that the same method of theory building can work in multiple fields. My sense is that what’s going on is that there is an underlying logic to this process which in a less Anglophone world we might call “dialectical”. But I digress.

I have not finished Chapter 4 yet but I wanted to sketch out the outline of it before going into detail. That's because Omi and Winant are presenting a way of understanding the mechanisms behind the reproduction of race that is not simplistically "systemic" but rather breaks them down into discrete operations. This is a helpful contribution; even if the theory is not entirely accurate, its very specificity elevates the discourse.

So, in brief notes:

For Omi and Winant, race is a way of “making up people”; they attribute this phrase to Ian Hacking but do not develop Hacking’s definition. Their reference to a philosopher of science does situate them in a scholarly sense; it is nice that they seem to acknowledge an implicit hierarchy of theory that places philosophy at the foundation. This is correct.

Race-making is a form of othering, of having a group of people identify another group as outsiders. Othering is a basic and perhaps unavoidable human psychological function; their reference for this is powell and Menendian. (Apparently, john a. powell is one of those people, like danah boyd, who decapitalize their names.)

Race is of course a social construct that is neither a fixed and coherent category nor something that is “unreal”. That is, presumably, why we need a whole book on the dynamic mechanisms that form it. One reason why race is such a dynamic concept is because (a) it is a way of organizing inequality in society, (b) the people on “top” of the hierarchy implied by racial categories enforce/reproduce that category “downwards”, (c) the people on the “bottom” of the hierarchy implied by racial categories also enforce/reproduce a variation of those categories “upwards” as a form of resistance, and so (d) the state of the racial categories at any particular time is a temporary consequence of conflicting “elite” and “street” variations of it.

This presumes that race is fundamentally about inequality. Omi and Winant believe it is. In fact, they think racial categories are a template for all other social categories that are about inequality. This is what they mean by their claim that race is a master category. It’s “a frame used for organizing all manner of political thought”, particularly political thought about liberation struggles.

I’m not convinced by this point. They develop it with a long discussion of intersectionality that is also unconvincing to me. Historically, they point out that sometimes women’s movements have allied with black power movements, and sometimes they haven’t. They want the reader to think this is interesting; as a data scientist, I see randomness and lack of correlation. They make the poignant and true point that “perhaps at the core of intersectionality practice, as well as theory, is the ‘mixed race’ category. Well, how does it come about that people can be ‘mixed’?” They then drop the point with no further discussion.

[Edit: While Omi and Winant do address the issue of what it means to be ‘mixed race’ in more depth later in the book, their treatment of intersectionality remains for me difficult. Race is a system of political categorization; however, racial categories are hereditary in a way that sexual categories are not. That is an important difference in how the categories are formed and maintained, one that is glossed over in O&W’s treatment of the subject, as well as in popular discourse.]

Omi and Winant make an intriguing comment, “In legal theory, the sexual contract and racial contract have often been compared”. I don’t know what this is about but I want to know more.

This is all a kind of preamble to their presentation of theory. They start to provide some definitions:

racial formation
The sociohistorical process by which racial identities are created, lived out, transformed, and destroyed.
racialization
How phenomic-corporeal dimensions of bodies acquire meaning in social life.
racial projects
The co-constitutive ways that racial meanings are translated into social structures and become racially signified.
racism
Not defined. A property of racial projects that Omi and Winant will discuss later.
racial politics
Ways that the politics (of a state?) can handle race, including racial despotism, racial democracy, and racial hegemony.

This is a useful breakdown. More detail in the next post.

Race as Nation (on Omi and Winant, 2014)

Today the people I have personally interacted with are: a Russian immigrant, three black men, a Japanese-American woman, and a Jewish woman. I live in New York City and this is a typical day. But when I sign onto Twitter, I am flooded with messages suggesting that the United States is engaged in a political war over its racial destiny. I would gladly ignore these messages if I could, but there appears to be somebody with a lot of influence setting a media agenda on this.

So at last I got to Omi and Winant's chapter on "Nation" — on theories of race as nation. The few colleagues who expressed interest in these summaries of Omi and Winant were concerned that they would not tackle the relationship between race and colonialism; indeed they do tackle it in this chapter, though it comes perhaps surprisingly late in their analysis. Coming to this chapter, I had high hopes that these authors, whose scholarship has been very helpfully thorough on other aspects of race, would shed light on the connection between nation and race in a way that would illuminate the present political situation in the U.S. I have to say that I wound up being disappointed in their analysis, but that those disappointments were enlightening. Since this edition of their book was written in 2014, when their biggest target was "colorblindness", the gaps in their analysis are telling precisely because they show how an educated, informed imagination could not foresee today's resurgence of white nationalism in the United States.

Having said that, Omi and Winant are not naive about white nationalism. On the contrary, they open their chapter with a long section on The White Nation, which is a phrase I can’t even type without cringing at. They paint a picture in broad strokes: yes, the United States has for most of its history explicitly been a nation of white people. This racial identity underwrote slavery, the conquest of land from Native Americans, and policies of immigration and naturalization and segregation. For much of its history, for most of its people, the national project of the United States was a racial project. So say Omi and Winant.

Then they also say (in 2014) that this sense of the nation as a white nation is breaking down. Much of their chapter is a treatment of "national insurgencies", which have included such a wide variety of movements as Pan-Africanism, cultural insurgencies that promote 'ethnic' culture within the United States, and Communism. (They also make passing reference to feminism as a comparable kind of national insurgency undermining the notion that the United States is a white male nation. While the suggestion is interesting, they do not develop it enough to be convincing, and instead the inclusion of gender into their history of racial nationalism comes off as a perfunctory nod to their progressive allies.)

Indeed, they open this chapter in a way that is quite uncharacteristic for them. They write in a completely different register: not historical and scholarly analysis, but more overtly ideology-mythology. They pose the question (originally posed by du Bois) in personal and philosophical terms to the reader: whose nation is it? Is it yours? They do this quite brazenly, in a way that denies one the critical intervention of questioning what a nation really is, of dissecting it as an imaginary social form. It is troubling because it seems to be a subtle abuse of the otherwise meticulously scholarly character of their work. They set up the question of national identity as a pitched battle over a binary, much as is being done today. It is troublingly done.

This Manichean painting of American destiny is perhaps excused because of the detail with which they have already discussed ethnicity and class at this point in the book. And it does set up their rather prodigious account of Pan-Africanism. But it puts them in the position of appearing to accept uncritically an intuitive notion of what a nation is even while pointing out how this intuitive idea gets challenged. Indeed, they only furnish one definition of a nation, and it is Joseph Stalin’s, from a 1908 pamphlet:

A nation is a historically constituted, stable community of people, formed on the basis of a common language, territory, economic life, and psychological make-up, manifested in a common culture. (Stalin, 1908)

So much for that.

Regarding colonialism, Omi and Winant are surprisingly active in their rejection of ‘colonialist’ explanations of race in the U.S. beyond the historical conditions. They write respectfully of Wallerstein’s world-system theory as contributing to a global understanding of race, but do not see it as illuminating the specific dynamics of race in the United States very much. Specifically, they bring up Bob Blauner’s Racial Oppression in America as a paradigmatic of the application of internal colonialism theory to the United States, then pick it apart and reject it. According to internal colonialism (roughly):

  • There is a geography of spatial arrangement of population groups along racial lines
  • There is a dynamic of cultural domination and resistance, organized on lines of racial antagonism
  • There are systems of exploitation and control organized along racial lines

Blauner took up internal colonialism theory explicitly in 1972 to contribute to 'radical nationalist' practice of the 60's, admitting that it is more inspired by activists than sociologists. So we might suspect, with Omi and Winant, that his discussion of colonialism is more about crafting an exciting ideology than one that is descriptively accurate. For example, Blauner makes a distinction between "colonized and immigrant minorities", where the "colonized" minorities are those whose participation in the United States project was forced (Africans and Latin Americans) while those (Europeans) who came voluntarily are "immigrants" and therefore qualitatively different. Omi and Winant take issue with this classification, as many European immigrants were themselves refugees of ethnic cleansing, while it leaves the status of Asian Americans very unclear. At best, 'internal colonialism' theory, as far as the U.S. is concerned, places emphasis on known history but does not add to it.

Omi and Winant frequently ascribe to theorists of race agency in racial politics, as if the theories enable self-conceptions that enable movements. This may be professional self-aggrandizement. They also perhaps set up nationalist accounts of race weakly because they want to deliver the goods in their own theory of racial formation that appears in the next chapter. They see nation-based theories as capturing something important:

In our view, the nation-based paradigm of race is an important component of our understanding of race: in highlighting “peoplehood,” collective identity, it “invents tradition” (Hobsbawm and Ranger, eds. 1983) and “imagines community” (Anderson, 1998). Nation-based understandings of race provide affective identification: They promise a sense of ineffable connection within racially identified groups; they engage in “collective representation” (Durkheim 2014). The tropes of “soul,” of “folk,” of hermanos/hermanas unidos/unidas uphold Duboisian themes. They channel Marti’s hemispheric consciousness (Marti 1977 [1899]); and Vasconcelo’s ideas of la raza cosmica (1979, Stavans 2011). In communities and movements, in the arts and popular media, as well as universities and colleges (especially in ethnic studies) these frameworks of peoplehood play a vital part in maintaining a sense of racial solidarity, however uneven or partial.

Now, I don't know most of the references in the above quotation. But one gets the sense that Omi and Winant believe strongly that race contains an affective identification component. This may be what they were appealing to in a performative or demonstrative way earlier in the chapter. While they must be on to something, it is strange that they have this as the main takeaway of the history of race and nationalism. It is especially unconvincing that their conclusion after studying the history of racial nationalism is that ethnic studies departments in universities are what racial solidarity is really about, because under their own account the creation of ethnic studies departments was an accomplishment of racial political organization, not the precursor to it.

Omi and Winant deal in only the most summary terms with the ways in which nationalism is part of the operation of a nation state. They see racial nationalism as a factor in slavery and colonialism, and also in Jim Crow segregation, but deal only loosely with whether and how the state benefited from this kind of nationalism. In other words, they have a theory of racial nationalism that is weak on political economy. Their only mention of integration in military service, for example, is the mention that service in the American Civil War was how many Irish Americans “became white”. Compare this with Fred Turner‘s account of how increased racial liberalization was part of the United States strategy to mobilize its own army against fascism.

In my view, Omi and Winant's blind spot is their affective investment in their view of the United States as embroiled in perpetual racial conflict. While justified and largely well-informed, it prevents them from seeing a wide range of different centrist views as anything but an extension of white nationalism. For example, they see white nationalism in nationalist celebrations of 'the triumph of democracy' on a Western model. There is of course a lot of truth in this, but also, as is abundantly clear today, when there appears to be a conflict between those who celebrate a multicultural democracy with civil liberties and those who prefer overt racial authoritarianism, there is something else going on that Omi and Winant miss.

My suspicion is this: in their haste to target "colorblind neoliberalism" as an extension of racism-as-usual, they have missed how in the past forty years or so, and especially in the past eight, such neoliberalism has itself been a national project. Nancy Fraser can argue that progressive neoliberalism has been hegemonic and rejected by right-wing populists. A brief look at the center-left media will show how progressivism is at least as much of an affective identity in the United States as is whiteness, despite the fact that progressivism is not in and of itself a racial construct or "peoplehood". Omi and Winant believed that colorblind neoliberalism would be supported by white nationalists because it was neoliberal. But now it has been rejected by white nationalists because it is colorblind. This is a difference that makes a difference.

Omi and Winant on economic theories of race

Speaking of economics and race, Chapter 2 of Omi and Winant (2014), titled "Class", is about economic theories of race. These are my notes on it.

Throughout this chapter, Omi and Winant seem preoccupied with whether and to what extent economic theories of race fall on the left, center, or right within the political spectrum. This is despite their admission that there is no absolute connection between the variety of theories and political orientation, only general tendencies. One presumes when reading it that they are allowing the reader to find themselves within that political alignment and filter their analysis accordingly. I will as much as possible leave out these cues, because my intention in writing these blog posts is to encourage the reader to make an independent, informed judgment based on the complexity the theories reveal, as opposed to just finding ideological cannon fodder. I claim this idealistic stance as my privilege as an obscure blogger with no real intention of ever being read.

Omi and Winant devote this chapter to theories of race that attempt to more or less reduce the phenomenon of race to economic phenomena. They outline three varieties of class paradigms for race:

  • Market relations theories. These tend to presuppose some kind of theory of market efficiency as an ideal.
  • Stratification theories. These are vaguely Weberian, based on classes as ‘systems of distribution’.
  • Product/labor based theories. These are Marxist theories about conflicts over social relations of production.

For market relations theories, markets are efficient while racial discrimination and inequality are not, and so the theory's explanandum is what market problems are leading to the continuation of racial inequalities and discrimination. There are a few theories on the table:

  • Irrational prejudice. This theory says that people are racially prejudiced for some stubborn reason and so “limited and judicious state interventionism” is on the table. This was the theory of Chicago economist Gary Becker, who is not to be confused with the Chicago sociologist Howard Becker, whose intellectual contributions were totally different. Racial prejudice unnecessarily drives up labor costs and so eventually the smart money will become unprejudiced.
  • Monopolistic practices. The idea here is that society is structured in the interest of whites, who monopolize certain institutions and can collect rents from their control of resources. Jobs, union membership, favorably located housing, etc. are all tied up in this concept of race. Extra-market activity like violence is used to maintain these monopolies. This theory, Omi and Winant point out, is sympatico with white privilege theories, as well as nation-based analyses of race (cf. colonialism).
  • Disruptive state practices. This view sees class/race inequality as the result of state action of some kind. There’s a laissez-faire critique which argues that minimum wage and other labor laws, as well as affirmative action, entrench race and prevent the market from evening things out. Doing so would benefit both capital owners and people of color according to this theory. There’s a parallel neo-Marxist theory that says something similar, interestingly enough.
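Becker's "irrational prejudice" argument above can be illustrated with a toy calculation. This is my own sketch of the standard taste-for-discrimination model, with purely illustrative numbers; it is not anything from Omi and Winant's text.

```python
# Becker-style taste-based discrimination: a prejudiced employer acts as
# if a disfavored worker's wage w costs w * (1 + d), where d > 0 is the
# "taste for discrimination". If equally productive disfavored workers
# accept a lower wage, unprejudiced firms hire them and earn higher
# profit per worker, so competition should erode prejudice over time.

def perceived_cost(wage: float, d: float) -> float:
    """Wage as experienced by an employer with discrimination taste d."""
    return wage * (1 + d)

productivity = 100.0                 # output value per worker (illustrative)
w_majority, w_minority = 90.0, 80.0  # prevailing wages (illustrative)
d = 0.2                              # prejudiced employer's taste parameter

# The prejudiced employer: hiring the cheaper minority worker *feels*
# like paying 80 * 1.2 = 96, which exceeds 90, so they hire the
# majority worker instead...
prejudiced_hires_majority = perceived_cost(w_minority, d) > w_majority

# ...while the unprejudiced employer pockets the wage gap as extra profit.
profit_unprejudiced = productivity - w_minority  # 20.0 per worker
profit_prejudiced = productivity - w_majority    # 10.0 per worker
```

The gap between the two profit lines is the "smart money" in Becker's story: prejudiced firms pay a competitive penalty, which is why the theory predicts market pressure against discrimination.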

It must be noted that in the history of the United States, especially before the Civil Rights era, there absolutely was race-based state intervention on a massive scale and this was absolutely part of the social construction of race. So there hasn’t been a lot of time to test out the theory that market equilibrium without racialized state policies results in racial equality.

Omi and Winant begin to explicate their critique of “colorblind” theories in this chapter. They characterize “colorblind” theories as individualistic in principle, and opposed to the idea of “equality of result.” This is the familiar disparate treatment vs. disparate impact dichotomy from the interpretation of nondiscrimination law. I’m now concerned that this, which appears to be the crux of the problem of addressing contests over racial equality between the center and the left, will not be resolved even after O&W’s explication of it.

Stratification theory is about the distribution of resources, though understood in a broader sense than in a narrow market-based theory. Resources include social network ties, elite recruitment, and social mobility. This is the kind of theory of race a symbolic interactionist sociologist of class can get behind. Or a political scientist: the relationship between the elites and the masses, as well as the dynamics of authority systems, are all part of this theory, according to Omi and Winant. One gets the sense that of the class-based theories, this nuanced and nonreductivist one is favored by the authors … except for the fascinating critique that these theories position race vs. class as two dimensions of inequality, reifying them in their analysis, whereas "In experiential terms, of course, inequality is not differentiated by race or class."

The phenomenon that there is a measurable difference in "life chances" between races in the United States is explored by two theorists to whom O&W give ample credit: William J. Wilson and Douglas Massey.

Wilson’s major work in 1978, The Declining Significance of Race, tells a long story of race after the Civil War and urbanization that sounds basically correct to me. It culminates with the observation that there are now elite and middle-class black people in the United States due to the uneven topology of reforms but that ‘the massive black “underclass” was relegated to permanent marginality’. He argued that race was no longer a significant linkage between these two classes, though Omi and Winant criticize this view, arguing that there is fragility to the middle-class status for blacks because of public sector job losses. His view that class divides have superseded racial divides is his most controversial claim and so perhaps what he is known best for. He advocated for a transracial alliance within the Democratic party to contest the ‘racial reaction’ to Civil Rights, which at this point was well underway with Nixon’s “southern strategy”. The political cleavages along lines of partisan racial alliance are familiar to us in the United States today. Perhaps little has changed.
He called for state policies to counteract class cleavages, such as day care services to low-income single mothers. These calls “went nowhere” because Democrats were unwilling to face Republican arguments against “giveaways” to “welfare queens”. Despite this, Omi and Winant believe that Wilson’s views converge with neoconservative views because he doesn’t favor public sector jobs as a solution to racial inequality; more recently, he’s become a “culture of poverty” theorist (because globalization reduces the need for black labor in the U.S.) and believes in race neutral policies to overcome urban poverty. The relationship between poverty and race is incidental to Wilson, which I suppose makes him ‘colorblind” in O&W’s analysis.

Massey's work, which is also significantly reviewed in this chapter, deals with immigration and Latin@s. There's a lot there, so I'll cut to the critique of his recent book, Categorically Unequal (2008), in which Massey unites his theories of anti-black and anti-brown racism into a comprehensive theory of racial stratification based on ingrained, intrinsic, biological processes of prejudice. Naturally, to Omi and Winant, the view that there's something biological going on is "problematic". They (being quite mainstream, really) see this as tied to the implicit bias literature, but think that there's a big difference between implicit bias due to socialization and a more permanent hindbrain perversity. This is apparently taken up again in their Chapter 4.

Omi and Winant’s final comment is that these stratification theories deny agency and can’t explain how “egalitarian or social justice-oriented transformations could ever occur, in the past, present, or future.” Which is, I suppose, bleak to the anti-racist activists Omi and Winant are implicitly aligned with. Which does raise the possibility that what O&W are really up to in advocating a hard line on the looser social construction of race is to keep the hope of possibility of egalitarian transformation alive. It had not occurred to me until just now that their sensitivity to the idea that implicit bias may be socially trained vs. being a more basic and inescapable part of psychology, a sensitivity which is mirrored elsewhere in society, is due to this concern for the possibility and hope for equality.

The last set of economic theories considered in this chapter are class-conflict theories, which are rooted in a Marxist conception of history as reducible to labor-production relations and therefore class conflict. There are two different kinds of Marxist theory of race. There are labor market segmentation theories, led by Michael Reich, a labor economist at Berkeley. According to this research, when the working class unifies across racial lines, it increases its bargaining power and so can get better wages in its negotiations with capital. So the capitalist in this theory may want to encourage racial political divisions even if they harbor no racial prejudices themselves. “Workers of the world unite!” is the message of these theories. An alternative view is split labor market theory, which argues that under economic pressure the white working class would rather throw other races under the bus than compete with them economically. Political mobilization for a racially homogenous, higher paid working class is then contested by both capitalists and lower paid minority workers.

Reflections

Omi and Winant respect the contributions of these theories but think that trying to reduce race to economic relations ultimately fails. This is especially true for the market theorists, who always wind up introducing race as a non-economic, exogenous variable in order to avoid locating inequality in the market itself.

The stratification theories are perhaps more realistic and complex.

I’m most surprised at how the class-conflict theories are reflected in what are, for me, the major lenses into the zeitgeist of contemporary U.S. politics. This may be because I’m very disproportionately surrounded by Marxist-influenced intellectuals. But it is hard to miss the narrative that the white working class has rejected the alliance between neoliberal capital and low-wage immigrant and minority labor. Indeed, it is arguably this latter alliance that Nancy Fraser has called neoliberalism. This conflict accords with split labor market theory. Fraser and other hopeful socialist types argue that a triumph over identity differences is necessary because racial conflicts within the working class play into the hands of capitalists, not white workers. It is very odd that this ideological question is not more settled empirically. It may be that the whole framing is perniciously oversimplified, and that you really have to talk about things in a more nuanced way to make real headway.

Unless of course there isn’t any such real hope. This was an interesting part of the stratification theory: the explanation that included an absence of agency. I used to study lots and lots of philosophy, and in philosophy it’s a permissible form of argument to say, “This line of reasoning, if followed to its conclusion, leads to an appalling and untenable conclusion, one that could never be philosophically satisfying. For that reason, we reject it and consider a premise to be false.” In other words, in philosophy you are allowed to be motivated by the fact that a philosophical stance is life-negating or self-defeating in some way. I wonder if that is true of the sociology of race. I also wonder whether bleak conclusions are necessary even if you deny the agency of racial minorities in the United States to liberate themselves under their own steam. Now there’s globalization, and earlier patterns of race may well be altered by forces outside the nation. This is another theme in contemporary political discourse.

Once again Omi and Winant have raised the specter of “colorblind” policies without directly critiquing them. The question seems to boil down to whether the mechanisms that reproduce racial inequality are better mitigated by removing the explicitly racial mechanisms or by other means. If part of the mechanism is irrational prejudice due to some hindbrain tic, then there may be grounds for a systematic correction of that tic. But that would require a scientific conclusion about the psychology of race that identifies a systematic error. If the error is rather interpreting an empirical inequality due to racialized policies as an essentialized difference, then that can be partially corrected by reducing the empirical inequality in fact.

It is in fact because I’m interested in what kinds of algorithms would be beneficial interventions in the process of racial formation that I’m reading Omi and Winant so closely in the first place.

bodies and liberal publics in the 20th century and today

I finally figured something out, philosophically, that has escaped me for a long time. I feel a little ashamed that it’s taken me so long to get there, since it’s something I’ve been told in one way or another many times before.

Here is the setup: liberalism is justified by universal equivalence between people. This is based in the Enlightenment idea that all people have something in common that makes them part of the same moral order. Recognizing this commonality is an accomplishment of reason and education. Whether this shows up in Habermasian discourse ethics, according to which people may not reason about politics from their personal individual situation, or in the Rawlsian ‘veil of ignorance’, in which moral precepts are intuitively defended under the presumption that one does not know who or where one will be, liberal ideals always require that people leave something out, something that is particular to them. What gets left out is people’s bodies–meaning both their physical characteristics and, more broadly, their place in lived history. Liberalism was in many ways a challenge to a moral order explicitly based on the body, one that took ancestry and heredity very seriously. So much of the aristocratic regime was about birthright and, literally, “good breeding”. The bourgeois class, relatively self-made, used liberalism to level the moral playing field with the aristocrats.

The Enlightenment was followed by a period of severe theological and scientific racism obsessed with establishing differences between people based on their bodies. Institutions that were internally based on liberalism could then subjugate others by creating an Other that stood outside the moral order. Sexism worked equivalently.

Social Darwinism was a threat to liberalism because it threatened to bring back a much older notion of aristocracy. In WWII, the Nazis rallied behind such an ideology and were defeated in the West by a liberal alliance, which then established the liberal international order.

I’ve got to leave out the Cold War and Communism here for a minute, sorry.

Late modern challenges to the liberal ethos gained prominence in activist circles and the American academy during and following the Civil Rights Movement. These were and continue to be challenges because they were trying to bring bodies back into the conversation. The problem is that a rules-based order that is premised on the erasure of differences in bodies is going to be unable to deal with the political tensions that precisely do come from those bodily differences. Because the moral order of the rules was blind to those differences, the rules did not govern them. For many people, that’s an inadequate circumstance.

So here’s where things get murky for me. In recent years, there has been a tension between the liberal center and the progressive left. The progressive left reasserts the political importance of the body (“Black Lives Matter”), and assertions of liberal commonality (“All Lives Matter”) are first “pushed” to the right, but then bump into white supremacy, which is also a reassertion of the political importance of the body, on the far right. It’s worth mentioning Piketty here, I think, because his work also exposed how, under liberal regimes, the body has secretly been the organizing principle of wealth through the inheritance of private property.

So what has been undone is the sense, necessary for liberalism, that there is something that everybody has in common which is the basis for moral order. Now everybody is talking about their bodily differences.

That is on the one hand good, because people do have bodily differences and those differences are definitely important. But it is bad because, if everybody is questioning the moral order, it’s hard to say that there really is one. We have today, I submit, a crisis of political nihilism due to our inability to philosophically imagine a moral order that accounts for bodily difference.

This is about the Internet too!

Under liberalism, you had an idea that a public was a place people could come to agree on the rules. Some people thought that the Internet would become a gigantic public where everybody could get together and discuss the rules. Instead what happened was that the Internet became a place where everybody could discuss each other’s bodies. People with similar bodies could form counterpublics and realize their shared interests as body-classes. (This piece by David Weinberger critiquing the idea of an ‘echo chamber’ is inspiring.) These body-based counterpublics each form their own internal moral order, whose purpose is to mobilize their body-interests against other kinds of bodies. I’m talking about both Black Lives Matter and white supremacists here, radical feminists and MRAs. They are all buffeting liberalism with their body interests.

I can’t say whether this is “good” or “bad” because the moral order is in flux. There is apparently no such thing as neutrality in a world of pervasive body agonism. That may be its finest criticism: body agonism is politically unstable. Body agonism leads to body anarchy.

I’ll conclude with two points. The first is that the Enlightenment view of people having something in common (their personhood, their rationality, etc.) which put them in the same moral order was an intellectual and institutional accomplishment. People do not naturally get outside themselves and put themselves in other people’s shoes; they have to be educated to do it. Perhaps there is a kernel of truth here about what moral education is that transcends liberal education. We have to ask whether today’s body agonism is an enlightened state relative to moral liberalism, because it acknowledges a previously hidden descriptive reality of body difference and is no longer so naive, or whether body agonism is a kind of ethical regress, because it undoes moral education, reducing us to a more selfish state of nature, of body conflict, albeit in a world full of institutions based on something else entirely.

The second point is that there is an alternative to the liberal order which appears to be alive and well in many places. This is an order that does not base its legitimacy on individual attitudes, but rather on the endurance of institutions for their own sake. I’m referring, of course, to authoritarianism. Without the pretense of individual equality, authoritarian regimes can focus on maintaining power on their own terms. Authoritarian regimes do not need to govern through moral order. U.S. foreign policy used to be based on the idea that such amoral governance would be shunned. But if body agonism has replaced the U.S. international moral order, we no longer have an ideology to export or enforce abroad.