Digifesto

Tag: pragmatism

Notes on Omi and Winant, 2014, “Ethnicity”

I’m continuing to read Omi and Winant’s Racial Formation in the United States (2014). These are my notes on Chapter 1, “Ethnicity”.

There’s a long period during which the primary theory of race in the United States is a theological and/or “scientific” racism maintaining that the races are biologically distinct subspecies of humanity, or else the cursed descendants of some tribe mentioned in the Old Testament somewhere. In the 1800s, there was a lot of pseudoscience involving skull measurements trying to back up a biblical literalism that rationalized, e.g., slavery. It was terrible.

Darwinism and improved statistical methods started changing all that, though these theological/“scientific” ideas about race were prominent in the United States until World War II. What took them out of the mainstream was the fact that the Nazis used biological racism to rationalize their evilness, and the U.S. fought them in a war. Jewish intellectuals in the United States in particular (and by now there were a lot of them) forcefully advocated for a different understanding of race based on ethnicity. This theory was dominant as a replacement for theories of scientific racism between WWII and the mid-60s, when it lost its proponents on the left and morphed into a conservative ideology.

To understand why this happened, it’s important to point out how demographics were changing in the U.S. in the 20th century. The dominant group in the United States in the 1800s was White Anglo-Saxon Protestants, or WASPs. Around 1870-1920, the U.S. started to get a lot more immigrants from Southern and Eastern Europe, as well as Ireland. These were often economic refugees, though there were also people escaping religious persecution (Jews). Generally speaking these immigrants were not super welcome in the United States, but they came at what may be thought of as a good time, as there was a lot of economic growth and opportunity for upward mobility in the coming century.

Partly because of this new wave of immigration, there was a lot of interest in different ethnic groups and whether or not they would assimilate into the mainstream Anglo culture. American pragmatism, of the William James and John Dewey type, was an influential philosophical position in this whole scene. The early ethnicity theorists, who were part of the Chicago school of sociology that was pioneering grounded, qualitative sociological methods, were all pragmatists. Robert Park is a big figure here. All these guys apparently ripped off W.E.B. Du Bois, who was trained by William James and didn’t get enough credit because he was black.

Based on the observation of these European immigrants, the ethnicity theorists came to the conclusion that if you lower the structural barriers to participation in the economy, “ethnics” will assimilate to the mainstream culture (melt into the “melting pot”) and everything is fine. You can even tolerate some minor ethnic differences, resulting in the Italian-Americans, the Irish-Americans, and… the African-American. But that was a bigger leap for people.

What happened, as I’ve mentioned, is that scientific racism was discredited in the U.S. partly because the U.S. fought the Nazis and was home to so many Jewish intellectuals, who had been on the wrong end of scientific racism in Europe and who in the U.S. were eager to become “ethnics”. These became, in essence, the first “racial liberals”. At the time there was also a lot of displacement of African Americans, who were migrating around the U.S. in search of economic opportunities. So in the post-war period ethnicity theorists optimistically proposed that race problems could be solved by treating all minority groups as if they were Southern and Eastern European immigrant groups. Reduce enough barriers and they would assimilate and/or exist in a comfortable, equitable pluralism, they thought.

The radicalism of the Civil Rights movement broke the spell here, as racial minorities began to demand not just the kinds of liberties that European ethnics had taken advantage of, but also other changes to institutional racism and corrections to other racial injustices. The injustices persisted in part because racial differences are embodied differently than ethnic differences. This is an academic way of saying that the fact that (for example) black people often look different from white people matters for how society treats them. So treating race as a matter of voluntary cultural affiliation misses the point.

So ethnicity theory, which had been critical for dismantling scientific racism and opening the door for new policies on race, was ultimately rejected by the left. It was picked up by neoconservatives through their policies of “colorblindness”, which Omi and Winant describe in detail in the latter parts of their book.

There is a lot more detail in the chapter, which I found quite enlightening.

My main takeaways:

  • In today’s pitched media battles between “Enlightenment classical liberalism” and “postmodern identity politics”, we totally forget that a lot of American policy is based on American pragmatism, which is definitely neither an Enlightenment position nor postmodern. Everybody should shut up and read The Metaphysical Club.
  • There has been a social center, with views that are seen as center-left or center-right depending on the political winds, since WWII. The adoption of ethnicity theory into the center was a significant cultural accomplishment with a specific history, however ultimately disappointing its legacy has been for anti-racist activists. Any resurgence of scientific racism is a definite backslide.
  • Omi and Winant are convincing about the limits of ethnicity theory in terms of: its dependence on economic “engines of mobility” that allow minorities to take part in economic growth, its failure to recognize the corporeal and ocular aspects of race, and its assumption that assimilation is going to be as appealing to minorities as it is to the white majority.
  • Their arguments about colorblind racism, which are at the end of their book, are going to be doing a lot of work and the value of the new edition of their book, for me at least, really depends on the strength of that theory.

Appearance, deed, and thing: meta-theory of the politics of technology

Flammarion engraving

Much is written today about the political and social consequences of technology. This writing often maintains that this inquiry into politics and society is distinct from the scientific understanding that informs the technology itself. This essay argues that this distinction is an error. Truly, there is only one science of technology and its politics.

Appearance, deed, and thing

There are worthwhile distinctions made between how our experience of the world feels to us directly (appearance), how we can best act strategically in the world (deed), and how the world is “in itself”, in a sense despite ourselves individually (thing).

Appearance

The world as we experience it has been given the name “phenomenon” (late Latin from Greek phainomenon, ‘thing appearing to view’), and so “phenomenology” is the study of what we colloquially call today our “lived experience”. Some anthropological methods are a kind of social phenomenology, and some scholars will deny that there is anything beyond phenomenology. On this view, those who claim to have a more effective strategy or a truer picture of the world may have rhetorical power, a power that works on the lived experience of more oppressed people only because it has not been adequately debunked and shown to be situated and relative. The solution to social and political problems, for these scholars, is more phenomenology.*

Deed

There are others that see things differently. A perhaps more normal attitude is that the outcomes of one’s actions are more important than how the world feels. Things can feel one way now and another way tomorrow; does it much matter? If one holds beliefs that don’t work when practically applied, one can correct oneself. The name for this philosophical attitude is pragmatism (from Greek pragma, ‘deed’). There are many people, including some scholars, who find this approach entirely sufficient. The solution to social and political problems is more pragmatism. Sometimes this involves writing off impractical ideas and the people who hold them as either useless or mere pawns. It is their loss.

Thing

There are others that see things still differently. A perhaps diminishing portion of the population holds theories of how the world works that transcend both their own lived experience and individual practical applications. Scientific theories about the physical nature of the universe, though tested pragmatically and through the phenomena apparent to the scientists, are based in a higher claim about their value. As Bourdieu (2004) argues, the whole field of science depends on the accepted condition that scientists fairly contend for a “monopoly on the arbitration of the real”. Scientific theories are tested through contest, with a deliberate effort by all parties to prove their theory to be the greatest. These conditions of contest hold science to a more demanding standard than pragmatism, as results of applying a pragmatic attitude will depend on the local conditions of action. Scientific theories are, in principle, accountable to the real (from late Latin realis, from Latin res, ‘thing’); these scientists may be called ‘realists’ in general, though there are many flavors of realism as, appropriately, theories of what is real and how to discover reality have come and gone (see post-positivism and critical realism, for example).

Realists may or may not be concerned with social and political problems. Realists may ask: What is a social problem? What do solutions to these problems look like?

By this account, these three foci and their corresponding methodological approaches are not equivalent to each other. Phenomenology concerns itself with documenting the multiplicity of appearances. Pragmatism introduces something over and above this: a sorting or evaluation of appearances based on some goals or desired outcomes. Realism introduces something over and above pragmatism: an attempt at objectivity based on the contest of different theories across a wide range of goals. ‘Disinterested’ inquiry, or equivalently inquiry that is maximally inclusive of all interests, further refines the evaluation of which appearances are valid.

If this account sounds disparaging of phenomenology as merely a part of higher and more advanced forms of inquiry, that is truly how it is intended. However, it is equally notable that to live up to its own standard of disinterestedness, realism must include phenomenology fully within itself.

Nature and technology

It would be delightful if we could live forever in a world of appearances that takes the shape that we desire of it when we reason about it critically enough. But this is not how any but the luckiest live.

Rather, the world acts on us in ways that we do not anticipate. Things appear to us unbidden; they are born, and sometimes this is called ‘nature’ (from Latin natura ‘birth, nature, quality,’ from nat- ‘born’). The first snow of winter comes as a surprise after a long warm autumn. We did nothing to summon it; it was always there. For thousands of years humanity has worked to master nature through pragmatic deeds and realistic science. Now, very little of nature remains untouched by human hands. The stars are still things in themselves. Our planetary world is one we have made.

“Technology” (from Greek tekhnologia ‘systematic treatment,’ from tekhnē ‘art, craft’) is what we call those things that are made by skillful human deed. A glance out the window into a city, or at the device one uses to read this blog post, is all one needs to confirm that the world is full of technology. Sitting in the interior of an apartment now, I find that literally everything in my field of vision except perhaps my own two hands and the potted plant is a technological artifact.

Science and technology studies: political appearances

According to one narrative, Winner (1980) famously asked the galling question “Do artifacts have politics?” and spawned a field of study** that questions the social consequences of technology. Science and Technology Studies (STS) is, purportedly, this field. The insight this field claims as its own is that technology has politically interesting social impacts, that the specifics of a technology’s design determine these impacts, and that the social context of design therefore influences the consequences of the technology. At its most ambitious, STS attempts to take the specifics of the technology out of the explanatory loop, showing instead how politics drives design and implementation to further political ends.

Anthropological methods are popular among STS scholars, who often commit themselves to revealing appearances that demonstrate the political origins and impacts of technology. The STS researcher might ask, rhetorically, “Did you know that this interactive console is designed and used for surveillance?”

We can nod sagely at these observations. Indeed, things appear to people in myriad ways, and critical analysis of those appearances does expose that there is a multiplicity of ways of looking at things. But what does one do with this picture?

The pragmatic turn back to realism

When one starts to ask the pragmatic question “What is to be done?”, one leaves the domain of mere appearances and begins to question the consequences of one’s deeds. This leads one to take actions and observe the unanticipated results. Suddenly, one is engaging in experimentation, and new kinds of knowledge are necessary. One needs to study organizational theory to understand the role of technology within a firm, and economics to understand how it interacts with the economy. One quickly leaves the field of study known as “science and technology studies” as soon as one begins to consider one’s practical effects.

Worse (!), the pragmatist quickly learns that discovering the impact of one’s deeds requires an analysis of probabilities and the difficult techniques of sampling data and correcting for bias. These techniques have been proven through the vigorous contest of the realists, and the pragmatist discovers that many tools–technologies–have been invented and provisioned to make it easier to use these robust strategies. The pragmatist begins to use, without understanding them, all the fruits of science. Their successes are alienated from their narrow lived experience, which is not enough to account for the miracles the world–one others have invented for them–performs for them every day.
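To make this concrete, here is a minimal toy sketch of one such technique–inverse-probability weighting to correct for a biased sample. All of the data is simulated, and the scenario (a survey where richer people respond more often) is invented purely for illustration:

```python
# Toy illustration: correcting sampling bias with inverse-probability
# weighting. Everything here is simulated; the scenario is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# True population: incomes with mean ~50.
population = rng.normal(50, 10, 100_000)

# Biased sampling: higher incomes are more likely to respond.
p_respond = 1 / (1 + np.exp(-(population - 50) / 10))
responded = rng.random(100_000) < p_respond
sample = population[responded]

# Weight each respondent by the inverse of their response probability.
weights = 1 / p_respond[responded]

print("naive sample mean:   ", round(sample.mean(), 2))                        # biased upward
print("weighted sample mean:", round(np.average(sample, weights=weights), 2))  # close to truth
print("true population mean:", round(population.mean(), 2))
```

None of this is deep statistics, but it is exactly the kind of machinery, built and vetted through the realists’ contests, that the pragmatist’s “what works” question drags in.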

The pragmatist must draw the following conclusions. The world is full of technology, is constituted by it. The world is also full of politics. Indeed, the world is both politics and technology; politics is a technology; technology is a form of politics. The world that must be mastered, for pragmatic purposes, is this politico-technical*** world.

What is technical about the world is that it is a world of things created through deed. These things manifest themselves in appearances in myriad and often unpredictable ways.

What is political about the world is that it is a contest of interests. To the most naive student, it may be a shock that technology is part of this contest of interests, but truly this is the most extreme naivete. What adolescent is not exposed to some form of arms race, whether it be in sports equipment, cosmetics, transportation, recreation, etc.? What adult does not encounter the reality of technology’s role in their own business or home, and the choice of what to procure and use?

The pragmatist must be struck by the sheer obviousness of the observation that artifacts “have” politics, though they must also acknowledge that “things” are different from the deeds that create them and the appearances they create. There are, after all, many mistakes in design. The effects of technology may as often be due to incompetence as they are to political intent. And to determine the difference, one must contest the designer of the technology on their own terms, in the engineering discourse that has attempted to prove which qualities of a thing survive scrutiny across all interests. The pragmatist engaging the politico-technical world has to ask: “What is real?”

The real thing

“What is real?” This is the scientific question. It has been asked again and again for thousands of years for reasons not unlike those traced in this essay. The scientific struggle is the political struggle for mastery over our own politico-technical world, over the reality that is being constantly reinvented as things through human deeds.

There are no shortcuts to answering this question. There are only many ways to cop out. These steps take one backward into striving for one’s local interest or, further, into mere appearance, with its potential for indulgence and delusion. This is the darkness of ignorance. Forward, far ahead, is a horizon, an opening, a strange new light.

* This narrow view of the ‘privilege of subjectivity’ is perhaps a cause of recent confusion over free speech on college campuses. Realism, as proposed in this essay, is a possible alternative to that.

** It has been claimed that this field of study does not exist, much to the annoyance of those working within it.

*** I believe this term is no uglier than the now commonly used “sociotechnical”.

References

Bourdieu, Pierre. Science of science and reflexivity. Polity, 2004.

Winner, Langdon. “Do artifacts have politics?” Daedalus (1980): 121-136.

algorithmic law and pragmatist legal theory: Oliver Wendell Holmes Jr. “The Path of the Law”

Several months ago I was taken by the idea that in the future (and maybe, depending on how you think about it, already in the present) laws should be written as computer algorithms. While the idea that “code is law” and that technology regulates is by no means original, what I thought perhaps provocative was the positive case for the (re-)implementation of the fundamental laws of the city or state in software code.

The argument went roughly like this:

  • Effective law must control a complex society
  • Effective control requires social and political prediction.
  • Unassisted humans are not good at social and political prediction. For this conclusion I drew heavily on Philip Tetlock’s work in Expert Political Judgment.
  • Therefore laws, in order to keep pace with the complexity of society, should be implemented as technical systems capable of bringing data and machine learning to bear on social control.

Science fiction is full of both dystopias and utopias in which society is literally controlled by a giant, intelligent machine. Avoiding either extreme, I just want to make the modest point that there may be scalability problems with law and regulation based on discourse in natural language. To some extent the failure of the state to provide sophisticated, personalized regulation in society has created myriad opportunities for businesses to fill these roles. Now there’s anxiety about the relationship between these businesses and the state as they compete for social regulation. To the extent that businesses are less legitimate rulers of society than the state, it seems a practical, technical necessity that the state adopt the same efficient technologies for regulation that businesses have. To do otherwise is to become obsolete.
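To give a flavor of what I mean, here is a deliberately toy sketch of my own–not a proposal for any actual statute–of what a legal rule implemented as an algorithm rather than as natural-language text might look like:

```python
# A hypothetical "computable law": a speed limit recomputed from observed
# road conditions instead of being fixed in natural-language text.
# The rule, thresholds, and statutory floor are all invented for illustration.
from dataclasses import dataclass

@dataclass
class RoadConditions:
    visibility_m: float      # observed visibility in meters
    surface_friction: float  # 0.0 (ice) to 1.0 (dry pavement)

def speed_limit_kph(c: RoadConditions, base_limit: float = 100.0) -> float:
    """Scale the base limit down as visibility and surface friction degrade."""
    visibility_factor = min(1.0, c.visibility_m / 200.0)
    limit = base_limit * c.surface_friction * visibility_factor
    return max(30.0, round(limit))  # hypothetical statutory floor

print(speed_limit_kph(RoadConditions(visibility_m=250, surface_friction=1.0)))  # 100
print(speed_limit_kph(RoadConditions(visibility_m=80, surface_friction=0.6)))   # 30.0
```

The point is not this particular rule but that a rule like this takes observed data as input and produces situation-specific regulation at machine speed–exactly the kind of personalized regulation the state currently cedes to businesses.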

There are lots of reasons to object to this position. I’m interested in hearing yours and hope you will comment on this and future blog posts or otherwise contact me with your considered thoughts on the matter. To me the strongest objection is that the whole point of the law is that it is based on precedent, and so any claim about the future trajectory of the law has to be based on past thinking about the law. Since I am not a lawyer and I know precious little about the law, you shouldn’t listen to my argument because I don’t know what I’m talking about. Q.E.D.

My counterargument to this is that there are lots of academics who opine about things they don’t have particular expertise in. One way to get away with this is by deferring to somebody else who has credibility in the field of interest. This is just one of several reasons why I’ve been reading “The Path of the Law”, a classic essay about pragmatist legal theory written by Supreme Court Justice Oliver Wendell Holmes Jr. in 1897.

One of the key points of this essay is that it is a mistake to consider the study of law the study of morality per se. Rather, the study of law is the attempt to predict the decisions that courts will make in the future, based on the decisions courts have made in the past. What courts actually decide is based in part on legal precedent but also on the unconscious inclinations of judges and juries. In ambiguous cases, different legal framings of the same facts will be in competition, and the judgment will give weight to one interpretation or another. Perhaps the judge will attempt to reconcile these differences into a single, logically consistent code.

I’d like to take up the arguments of this essay again in later blog posts, but for now I want to focus on the concept of legal study as prediction. I think this demands focus because while Holmes, like most American pragmatists, had a thorough and nuanced understanding of what prediction is, our mathematical understanding of prediction has come a long way since 1897. Indeed, it is a direct consequence of these formalizations and implementations of predictive systems that we today see so much tacit social regulation performed by algorithms. We know now that effective prediction depends on access to data and the computational power to process it according to well-known algorithms. These algorithms can optimize themselves to such a degree that their specific operations are seemingly beyond the comprehension of the people affected by them. Some lawyers have argued that this complexity should not be allowed to exist.
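As a toy illustration of how “legal study as prediction” reads in modern terms, consider a statistical model fit to past case outcomes. The features, the data, and the model here are all fabricated by me for the sake of the sketch; real legal prediction is vastly harder:

```python
# Toy sketch of the prediction theory of law as a machine learning problem.
# All data is simulated and the features are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical features: [precedent_support, judge_leaning, evidence_strength]
X = rng.random((500, 3))

# Simulated "ground truth": outcomes driven mostly by precedent and evidence,
# with some pull from the judge's unconscious inclinations, plus noise.
logits = 3 * X[:, 0] + 1 * X[:, 1] + 2 * X[:, 2] - 3 + rng.normal(0, 0.5, 500)
y = (logits > 0).astype(int)

# "The study of law" on this toy picture: fit the past, predict the future.
model = LogisticRegression().fit(X, y)

new_case = np.array([[0.8, 0.4, 0.7]])
print("predicted probability of prevailing:", model.predict_proba(new_case)[0, 1])
```

Holmes had nothing like this in mind, of course; the sketch is only meant to show how naturally his framing maps onto modern predictive machinery.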

What I am pointing to is a fundamental tension between the requirement that practitioners of the law be able to predict legal outcomes, and the fact that the logic of the most powerful predictive engines today is written in software code, not words. This is because of physical properties of computation and prediction that are not likely to ever change. And since a powerful predictive engine can just as easily use its power to be strategically unpredictable, this presents an existential challenge to the law. It may simply be impossible for lawyers acting as human lawyers have for hundreds of years to effectively predict and therefore regulate powerful computational systems.

One could argue that this means that such powerful computational systems should simply be outlawed. Indeed this is the thrust of certain lawyers’ arguments. But if we believe that these systems are not going to go away, perhaps because they won’t allow us to regulate them out of existence, then our only viable alternative to suffering under their lawless control is to develop a competing system of computational legalism with the legitimacy of the state.

Horkheimer, pragmatism, and cognitive ecology

In Eclipse of Reason, Horkheimer rips into the American pragmatists Peirce, James, and Dewey like nobody I’ve ever read. These figures are normally seen as reasonable and benign, but Horkheimer paints them as ignorant and as undermining the whole social order.

The reason is that he believes they reduce epistemology to a kind of instrumentalism. But that’s selling their position a bit short. Dewey’s moral epistemology is pragmatist in that it is driven by particular, situated interests and concerns, but these are ingredients to moral inquiry and not conclusions in themselves.

So to the extent that Horkheimer is looking to dialectical reason as the grounds for uncovering objective truths, Dewey’s emphasis on establishing institutions that allow for meaningful moral inquiry seems consistent with Horkheimer’s view. The difference is in whether the dialectics are transcendental (as for Kant) or immanent (as for Hegel?).

The tension around objectivity in epistemology in the present academic environment is that all claims to objectivity are necessarily situated, and this situatedness is raised as a challenge to their objective status. If the claims or their justification depend on conditions that exclude some subjects (as they no doubt do; settling whether dialectical reason is transcendental or immanent requires opportunities for reflection that are rare–privileged), can these conclusions be said to be true for all subjects?

The Friendly AI research program more or less assumes that yes, this is the case. Yudkowsky’s notion of Coherent Extrapolated Volition–the position arrived at by simulated, idealized reasoners–is a 21st century remake of Peirce’s limiting consensus of the rational. And yet the cry from standpoint theorists and certain anthropologically inspired disciplines is a recognition of the validity of partial perspectives. Haraway, for example, calls for an alliance of partial perspectives. Critical and adversarial design folks appear to have picked up this baton. Their vision is of a future of constantly vying (“agonistic”) partiality, with no perspective presuming to be settled, objective, or complete.

If we make cognitivist assumptions about the computationality of all epistemic agents, then we are forced to acknowledge the finiteness of all actually existing reasoning. Finite capacity and situatedness become two sides of the same coin. Partiality, then, becomes a function both of one’s place in the network (eccentricity vs. centrality) and of one’s capacity to integrate information from the periphery. Those locations in the network most able to valuably integrate information, whether they be Google’s data centers or the conversational hubs of research universities, are more impartial, more objective. But they can never be the complete system. Because of their finite capacity, their representations can at best be lossy compressions of the whole.

A Hegelian might dream of an objective truth obtainable by a single subject through transcendental dialectic. Perhaps this is unattainable. But if there’s any hope at all in this direction, it seems to me it must come from one of two possibilities:

  • The fortuitously fractal structure of the sociotechnical world such that an adequate representation of it can be maintained in its epistemic hubs through quining, or
  • A generative grammar or modeling language of cognitive ecology such that we can get insights into the larger interactive system from toy models, and apply these simplified models pragmatically in specific cases. For this to work and not suffer the same failures as theoretical economics, these models need to have empirical content. Something like Wolpert, Lee, and Bono’s Predictive Game Theory (for which I just discovered they’ve released a Python package…cool!) may be critical here; a minimal sketch in this bounded-rationality spirit follows below.
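Here is that sketch. It is not the package just mentioned (whose API I haven’t explored), just a numpy toy in a similar spirit: players who respond to expected payoffs with a softmax (“logit”) rule rather than exact optimization, iterated to a fixed point.

```python
# Minimal logit-response toy for a symmetric 2x2 game. This is a sketch of
# the bounded-rationality flavor of model, not Predictive Game Theory itself.
import numpy as np

# Payoffs from each player's own perspective (rows: my action, cols: theirs).
# Action 1 ("defect") dominates action 0 ("cooperate"), prisoner's-dilemma style.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

def logit_response(expected_payoffs, temperature=1.0):
    """Softmax response: better actions are more likely, but not certain."""
    z = np.exp(expected_payoffs / temperature)
    return z / z.sum()

p = np.array([0.5, 0.5])  # player 1's mixed strategy
q = np.array([0.5, 0.5])  # player 2's mixed strategy
for _ in range(1000):
    # Each player responds to the other's current mixed strategy.
    p, q = logit_response(A @ q), logit_response(A @ p)

print("player 1:", p.round(3))
print("player 2:", q.round(3))
```

The empirical content would come from fitting the temperature (the players’ degree of rationality) to observed behavior, which is roughly what distinguishes this family of models from classical equilibrium analysis.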

several words, all in a row, some about numbers

I am getting increasingly bewildered by the number of different paradigms available in academic research. Naively, I had thought I had a pretty good handle on this sort of thing coming into it. After trying to tackle the subject head-on this semester, I feel like my head will explode.

I’m going to try to break down the options.

  • Nobody likes positivism, which went out of style when Wittgenstein refuted his own Tractatus.
  • Postpositivists say, “Sure, there isn’t really observer-independent inquiry, but we can still approximate that through rigorous methods.” The goal is an accurate description of the subject matter. I suppose this fits into a vision of science being about prediction and control of the environment, so generalizability of results would be considered important. I’d argue that this is also consistent with American pragmatism. I think “postpositivist” is a terrible name and would rather talk/think about pragmatism.
  • Interpretivism, which seems to be a more fashionable term than antipositivism, is associated with Weber and Frankfurt school thinkers, as well as a feminist critique. The goal is for one reader (or scholarly community?) to understand another. “Understanding” here is understood intersubjectively–“I get you”. Interpretivists are skeptical of prediction and control as provided by a causal understanding. At times, this skepticism is expressed as a belief that causal understanding (of people) is impossible; other times it is expressed as a belief that causal understanding is nefarious.

Both teams share a common intellectual ancestor in Immanuel Kant, whom few people bother to read.

Habermas has room in his overarching theory for multiple kinds of inquiry–technical, intersubjective, and emancipatory/dramaturgical–but winds up getting mobilized by the interpretivists. I suspect this is the case because research aimed at prediction and control is better funded, because it is more instrumental to power. And if you’ve got funding there’s little incentive to look to Habermas for validation.

It’s worth noting that mathematicians still basically run their own game. You can’t beat pure reason at the research game. Much computer science research falls into this category. Pragmatists will take advantage of mathematical reasoning. I think interpretivists find mathematics a bit threatening because it seems like the only way to “interpret” mathematicians is by learning the math that they are talking about. When intersubjective understanding requires understanding verbatim, that suggests the subject matter is more objectively true than not.

The gradual expansion of computer science towards the social sciences through “big data” analysis can be seen as a gradual expansion of what can be considered under mathematical closure.

Physicists still want to mathematize their descriptions of the universe. Some psychologists want to mathematize their descriptions. Some political scientists, sociologists, etc. want to mathematize their descriptions. Anthropologists don’t want to mathematize their descriptions. Mathematization is at the heart of the quantitative/qualitative dispute.

It’s worth noting that there are non-mathematized predictive theories, as well as mathematized theories that pretty much fail to predict anything.