Category: economics

Is competition good for cybersecurity?

A question that keeps coming up in various forms, most recently in response to the ‘trade war’ between the U.S. and China and its impact on technology companies, is whether market competition is good or bad for cybersecurity.

Here is a simple argument for why competition could be good for cybersecurity: the security of a technical product is a quality that consumers value. Market competition is what gets producers to make higher-quality products at lower cost. Therefore, competition is good for security.

Here is an argument for why competition could be bad for cybersecurity: security is hard for any consumer to evaluate. Since most consumers cannot, there is an information asymmetry, and therefore a ‘market for lemons’ kind of market failure. Therefore, competition is bad for security; it would be better to have a well-regulated monopoly.
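The ‘market for lemons’ dynamic can be made concrete with a minimal simulation (all numbers invented for illustration): sellers know quality, buyers only know the average quality of what is on offer at the going price, and the market unravels.

```python
# Akerlof-style "market for lemons" unraveling, with illustrative numbers.
# A seller values a product of quality q at q; a buyer values it at 1.5*q,
# so under full information every trade would create surplus.
# Buyers cannot observe q, so they bid 1.5 times the average quality
# of whatever sellers are willing to sell at the current price.

def unravel(price: float, rounds: int = 20) -> float:
    for _ in range(rounds):
        # With quality uniform on [0, 1], sellers with q <= price stay in,
        # so the average quality on offer is price / 2.
        avg_quality_on_offer = min(price, 1.0) / 2
        # Buyers bid their valuation of that average quality.
        price = 1.5 * avg_quality_on_offer
    return price

print(unravel(1.0))  # shrinks toward zero: the market unravels
```

Each round multiplies the price by 0.75, so good products are driven out even though buyers would happily pay for them if they could verify quality.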

This argument echoes, though it doesn’t exactly parallel, some of the arguments in Pasquale’s work on Hamiltonians and Jeffersonians in technology platform regulation.


“the privatization of public functions”

An emerging theme from the conference on Trade Secrets and Algorithmic Systems was that legal scholars have become concerned about the privatization of public functions. For example, the use of proprietary risk assessment tools in place of the discretion of judges, who are supposed to be publicly accountable, is a problem. More generally, the use of “trade secrecy” in court settings to prevent inquiry into software systems is bogus and moves more societal control into the realm of private ordering.

Many remedies were proposed. Most involved some kind of disclosure and audit to experts. The most extreme form of disclosure is making the software and, where it’s a matter of public record, training data publicly available.

It is striking to me to encounter the call for government use of open source systems, because this is not a new issue. The conversation about federal use of open source software was alive and well over five years ago. Then, the arguments were about vendor lock-in; now, they are about accountability of AI. But the essential problem, whether core governing logic should be available to public scrutiny, and the effects of its privatization, has been the same.

If we are concerned with the reliability of a closed, large-scale decision-making process of any kind, we are dealing with problems of credibility, opacity, and complexity. The prospects for an efficient market in these kinds of systems are dim. These market conditions are also the conditions of sustainability of open source infrastructure. Failures in sustainability manifest as software vulnerabilities, which are one of the key reasons governments are now warned against OSS, though measuring and evaluating OSS vulnerabilities against proprietary ones is methodologically fraught.

The paradox of ‘data markets’

We often hear that companies are “selling our data”, or that we are “paying for services” with our data. Data brokers literally buy and sell data about people. There are also other kinds of expensive data sources and data sets. There is, undoubtedly, one or more data markets.

We know that classically, perfect competition in markets depends on perfect information. Buyers and sellers on the market need to have equal and instantaneous access to information about utility curves and prices in order for the market to price things efficiently.

Since the bread and butter of the data market is information asymmetry, we know that data markets can never be perfectly competitive. If they were, the data market would cease to exist, because the perfect-information condition would entail that there is nothing left to buy and sell.

Data markets therefore have to be imperfectly competitive. But since these are the markets that perfect information in other markets might depend on, this imperfection is viral. The vicissitudes of the data market are the vicissitudes of the economy in general.

The upshot is that the challenges of information economics are not only those that appear in special sectors like insurance markets. They are at the heart of all economic activity, and there are no equilibrium guarantees.

The Crevasse: a meditation on accountability of firms in the face of opacity as the complexity of scale

To recap:

(A1) Beneath corporate secrecy and user technical illiteracy, a fundamental source of opacity in “algorithms” and “machine learning” is the complexity of scale, especially scale of data inputs. (Burrell, 2016)

(A2) The opacity of the operation of companies using consumer data makes those consumers unable to engage with them as informed market actors. The consequence has been a “free fall” of market failure (Strandburg, 2013).

(A3) Ironically, this “free” fall has been “free” (zero price) for consumers; they appear to get something for nothing without knowing what has been given up or changed as a consequence (Hoofnagle and Whittington, 2013).


(B1) The above line of argument conflates “algorithms”, “machine learning”, “data”, and “tech companies”, as is common in the broad discourse. That this conflation is possible speaks to the ignorance of the scholarly position on these topics, an ignorance that is implied by corporate secrecy, technical illiteracy, and complexity of scale simultaneously. We can, if we choose, distinguish between these factors analytically. But because, from the standpoint of the discourse, the internals are unknown, the general indication of a ‘black box’ organization is intuitively compelling.

(B1a) Giving in to the lazy conflation is an error because it prevents informed and effective praxis. If we do not distinguish between a corporate entity and its multiple internal human departments and technical subsystems, then we may confuse ourselves into thinking that a fair and interpretable algorithm can give us a fair and interpretable tech company. Nothing about the former guarantees the latter because tech companies operate in a larger operational field.

(B2) The opacity as the complexity of scale, a property of the functioning of machine learning algorithms, is also a property of the functioning of sociotechnical organizations more broadly. Universities, for example, are often opaque to themselves, because of their own internal complexity and scale. This is because the mathematics governing opacity as a function of complexity and scale are the same in both technical and sociotechnical systems (Benthall, 2016).

(B3) If we discuss the complexity of firms, as opposed to the complexity of algorithms, we should conclude that firms that are complex due to the scale of their operations and data inputs (including number of customers) will be opaque and will therefore have a strategic advantage in the market over less complex market actors (consumers) with stiffer bounds on rationality.

(B4) In other words, big, complex, data rich firms will be smarter than individual consumers and outmaneuver them in the market. That’s not just “tech companies”. It’s part of the MO of every firm to do this. Corporate entities are “artificial general intelligences” and they compete in a complex ecosystem in which consumers are a small and vulnerable part.


(C1) Another source of opacity in data is that the meaning of data comes from the causal context that generates it. (Benthall, 2018)

(C2) Learning causal structure from observational data is hard, both in being data-intensive and in being computationally complex (NP-hard). (c.f. Friedman et al., 1998)
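One way to see the computational burden in (C2): the number of candidate causal structures (directed acyclic graphs) on n variables grows super-exponentially, following Robinson’s recurrence, so exhaustive search over structures is hopeless even for modest n. A small sketch:

```python
# Counting directed acyclic graphs (DAGs) on n labeled nodes via
# Robinson's recurrence. The blow-up is why causal structure search
# must rely on heuristics and strong assumptions rather than enumeration.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_dags(n: int) -> int:
    if n == 0:
        return 1
    # Inclusion-exclusion over the k nodes with no incoming edges.
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
               for k in range(1, n + 1))

for n in [2, 3, 5, 10]:
    print(n, num_dags(n))  # 3, 25, 29281, then ~4.2e18 for n = 10
```

Ten variables already admit more than 10^18 candidate structures, before even considering the data needed to distinguish among them.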

(C3) Internal complexity, for a firm, is not sufficient to be “all-knowing” about the data that is coming into it; the firm has epistemic challenges of secrecy, illiteracy, and scale with respect to external complexity.

(C4) This is why many applications of machine learning are overrated and so many “AI” products kind of suck.

(C5) There is, in fact, an epistemic crevasse between all autonomous entities, each containing its own complexity and constituting a larger ecological field that is the external/being/environment for any other autonomy.

To do:

The most promising direction based on this analysis is a deeper read into transaction cost economics as a ‘theory of the firm’. That is where we should find a formalization of the idea that what the Internet changed most was search costs (a kind of transaction cost).

It would be nice if those insights could be expressed in the mathematics of “AI”.

There’s still a deep idea in here that I haven’t yet found the articulation for, something to do with autopoiesis.


Benthall, Sebastian. (2016) The Human is the Data Science. Workshop on Developing a Research Agenda for Human-Centered Data Science. Computer Supported Cooperative Work 2016.

Sebastian Benthall. Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics. Ph.D. dissertation. Advisors: John Chuang and Deirdre Mulligan. University of California, Berkeley. 2018.

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.

Friedman, Nir, Kevin Murphy, and Stuart Russell. “Learning the structure of dynamic probabilistic networks.” Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., 1998.

Hoofnagle, Chris Jay, and Jan Whittington. “Free: accounting for the costs of the internet’s most popular price.” UCLA L. Rev. 61 (2013): 606.

Strandburg, Katherine J. “Free fall: The online market’s consumer preference disconnect.” U. Chi. Legal F. (2013): 95.

the resilience of agonistic control centers of global trade

This post is merely notes; I’m fairly confident that I don’t know what I’m writing about. However, I want to learn more. Please recommend anything that could fill me in about this! I owe most of this to discussion with a colleague who I’m not sure would like to be acknowledged.

Following the logic of James Beniger, an increasingly integrated global economy requires more points of information integration and control.

Bourgeois (in the sense of ‘capitalist’) legal institutions exist precisely for the purpose of arbitrating between merchants.

Hence, on the one hand we would expect international trade law to be Habermasian. However, international trade need not rest on a foundation of German idealism (which increasingly strikes me as the core of European law). Rather, it is an evolved mechanism.

A key part of this mechanism, as I’ve heard, is that it is decentered. Multiple countries compete to be the sites of transnational arbitration, much like multiple nations compete to be tax havens. Sovereignty and discretion are factors of production in the economy of control.

This means, effectively, that one cannot defeat capitalism by chopping off its head. It is rather much more like a hydra: the “heads” are the creation of two-sided markets. These heads have no internalized sense of the public good. Rather, they are optimized to be attractive to the transnational corporations in bilateral negotiation. The plaintiffs and defendants in these cases are corporations and states–social forms and institutions of complexity far beyond that of any individual person. This is where, so to speak, the AI’s clash.

For a more ethical Silicon Valley, we need a wiser economics of data

Kara Swisher’s NYT op-ed about the dubious ethics of Silicon Valley and Nitasha Tiku’s WIRED article reviewing books with alternative (and perhaps more cynical than otherwise stated) stories about the rise of Silicon Valley have generated discussion and buzz among the tech commentariat.

One point of debate is whether the focus should be on “ethics” or on something more substantively defined, such as human rights. Another point is whether the emphasis should be on “ethics” or on something more substantively enforced, like laws which impose penalties of up to 4% of global annual turnover, referring of course to the GDPR.

While I’m sympathetic to the European approach (laws enforcing human rights with real teeth), I think there is something naive about it. We have not yet seen whether it’s ever really possible to comply with the GDPR; the regulation could wind up being a kind of heavy tax on Big Tech companies operating in the EU, but one that doesn’t truly change how people’s data are used. In any case, the broad principles of European privacy are based on individual human dignity, and so they do not take into account the ways that corporations are social structures, i.e. sociotechnical organizations that transcend individual people. The European regulations address the problem of individual privacy while leaving mystified the question of why the current corporate organization of the world’s personal information is what it is. This sets up the fight over ‘technology ethics’ to be a political conflict between different kinds of actors whose positions are defined as much by their social habitus as by their intellectual reasons.

My own (unpopular!) view is that the solution to our problems of technology ethics are going to have to rely on a better adapted technology economics. We often forget today that economics was originally a branch of moral philosophy. Adam Smith wrote The Theory of Moral Sentiments (1759) before An Inquiry into the Nature and Causes of the Wealth of Nations (1776). Since then the main purpose of economics has been to intellectually grasp the major changes to society due to production, trade, markets, and so on in order to better steer policy and business strategy towards more fruitful equilibria. The discipline has a bad reputation among many “critical” scholars due to its role in supporting neoliberal ideology and policies, but it must be noted that this ideology and policy work is not entirely cynical; it was a successful centrist hegemony for some time. Now that it is under threat, partly due to the successes of the big tech companies that benefited under its regime, it’s worth considering what new lessons we have to learn to steer the economy in an improved direction.

The difference between an economic approach to the problems of the tech economy and either an ‘ethics’ or a ‘law’ based approach is that it inherently acknowledges that there are a wide variety of strategic actors co-creating social outcomes. Individual “ethics” will not be able to settle the outcomes of the economy because the outcomes depend on collective and uncoordinated actions. A fundamentally decent person may still do harm to others due to their own bounded rationality; “the road to hell is paved with good intentions”. Meanwhile, regulatory law is not the same as command; it is at best a way of setting the rules of a game that will be played, faithfully or not, by many others. Putting regulations in place without a good sense of how the game will play out differently because of them is just as irresponsible as implementing a sweeping business practice without thinking through the results, if not more so because the relationship between the state and citizens is coercive, not voluntary as the relationship between businesses and customers is.

Perhaps the biggest obstacle to shifting the debate about technology ethics to one about technology economics is that it requires a change in register. It drains the conversation of the pathos which is so instrumental in surfacing it as an important political topic. Sound analysis often ruins parties like this. Nevertheless, it must be done if we are to progress towards a more just solution to the crises technology gives us today.

How trade protection can increase labor wages (the Stolper-Samuelson theorem)

I’m continuing a look into trade policy using Corden’s (1997) book on the topic.

Picking up where the last post left off, I’m operating on the assumption that the reader is familiar with the arguments for free trade that are an extension of the arguments for laissez-faire markets. I will assume that these arguments are true as far as they go: that the economy grows with free trade, that tariffs create a deadweight loss, that subsidies are expensive, but that both tariffs and subsidies do shift the market away from imports.

The question raised by Corden is why, despite its deleterious effects on the economy as a whole, protectionism enjoys political support by some sectors of the economy. He hints, earlier in Chapter 5, that this may be due to income distribution effects. He clarifies this with reference to an answer to this question that was given as early as 1941 by Stolper and Samuelson; their result is now celebrated as the Stolper-Samuelson theorem.

The mathematics of the theorem can be read in many places. Like any economic model, it depends on some assumptions that may or may not be the case. Its main advantage is that it articulates how it is possible for protectionism to benefit a class of the population, and not just in relative but in absolute terms. It does this by modeling the returns to different factors of production, which classically have been labor, land, and capital.

Roughly, the argument goes like this. Suppose an economy has two commodities, one for import and one for export. Suppose that the imported good is produced with a higher labor-to-land ratio than the export good. Suppose a protectionist policy increases the amount of the import good produced relative to the export good. Then the return on labor will increase (because more labor is used in supply), and the return on land will decrease (because less land is used in supply). Wages will rise and rents on land will fall.
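The argument can be sketched with the standard ‘hat algebra’ of the two-good, two-factor model, using invented cost shares (not Stolper and Samuelson’s or Corden’s own numbers):

```python
# Stolper-Samuelson in log-change ("hat") form. Zero-profit conditions:
#   p_hat[j] = theta_L[j] * w_hat_L + theta_T[j] * w_hat_T
# where theta are factor cost shares in each good.

theta_L = [0.7, 0.3]   # labor cost shares in (import good, export good)
theta_T = [0.3, 0.7]   # land cost shares: the import good is labor-intensive
p_hat = [10.0, 0.0]    # tariff raises the import good's price 10%; export unchanged

# Solve the 2x2 system by Cramer's rule.
det = theta_L[0] * theta_T[1] - theta_L[1] * theta_T[0]  # 0.4
w_hat_L = (p_hat[0] * theta_T[1] - p_hat[1] * theta_T[0]) / det
w_hat_T = (theta_L[0] * p_hat[1] - theta_L[1] * p_hat[0]) / det

print(w_hat_L, w_hat_T)  # wages rise 17.5%, land rents fall 7.5%
```

Note the magnification effect: wages rise by more than the 10% price increase, so labor gains in real terms no matter which good workers consume, while landowners lose absolutely.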

These breakdowns of the economy into “factors of production” feel very old school. You rarely read economists discussing the economy in these terms now, which is itself interesting. One reason why (and I am only speculating here) is that these models clarify how laborers, land-owners, and capital-owners have different political interests in economic intervention, and that can lead to the kind of thinking that was flushed out of the American academy during the McCarthy era. Another reason may be that “capital” has changed meaning, from ownership of machine goods to having liquid funds available for financial investment.

I’m interested in these kinds of models today partly because I’m interested in the political interests in various policies, and also because I’m interested in particular in the economics of supply chain logistics. The “factors of production” approach is a crude way to model the ‘supply chain’ in a broad sense, but one that has proven to be an effective source of insights in the past.


Corden, W. Max. “Trade policy and economic welfare.” OUP Catalogue (1997).

Stolper, Wolfgang F., and Paul A. Samuelson. “Protection and real wages.” The Review of Economic Studies 9.1 (1941): 58-73.

trade policy and income distribution effects

And now for something completely different

I am going to start researching trade policy, meaning policies around trade between different countries; imports and exports. Why?

  • It is politically relevant in the U.S. today.
  • It is a key component to national cybersecurity strategy, both defensive and offensive, which hinges in many cases on supply chain issues.
  • It maybe ought to be a component of national tech regulation and privacy policy, if e-commerce is seen as a trade activity. (This could be seen as ‘cybersecurity’ policy, more broadly writ.)
  • Formal models from trade policy may be informative in other domains as well.

In general, years of life experience and study have taught me that economics, however much it is maligned, is a wise and fundamental social science without which any other understanding of politics and society is incomplete, especially when considering the role of technology in society.

Plenty of good reasons! Onward!

As a starting point, I’m working through Max Corden’s Trade Policy and Economic Welfare (1997), which appears to be a well-regarded text on the subject. In it, he sets out to describe a normative theory of trade policy. Here are two notable points based on a first perusal.

1. (from Chapter 1, “Introduction”) Corden identifies three “stages of thought” about trade policy. The first is the discovery of the benefits of free trade with the great original economists Adam Smith and David Ricardo. Here, the new appreciation of free trade was simultaneous with the new appreciation of the free market in general. “Indeed, the case for free trade was really a special case of the argument for laissez-faire.”

In the second phase, laissez-faire policies came into question. These policies may not lead to full employment, and the income distribution effects (which Corden takes seriously throughout the book, by the way) may not be desirable. Parallel to this, the argument for free trade was challenged. Some of these challenges were endorsed by John Stuart Mill. One argument is that tariffs might be necessary to protect “infant industries”.

As time went on, the favorability of free trade more or less tracked the favorability of laissez-faire. Both were popular in Western Europe and failed to get traction in most other countries (almost all of which were ‘developing’).

Corden traces the third stage of thought to Meade’s (1955) Trade and Welfare. “In the third stage the link between the case for free trade and the case for laissez-faire was broken.” The normative case for free trade, in this stage, did not depend on a normative case for laissez-faire, but existed despite normative reasons for government intervention in the economy. The point made in this approach, called the theory of domestic distortions, is that it is generally better for the kinds of government intervention made to solve domestic problems to be domestic interventions, not trade interventions.

This third stage came with a much more sophisticated toolkit for comparing the effects of different kinds of policies, which is the subject of exposition for a large part of Corden’s book.

2. (from Chapter 5, “Protection and Income Distribution”) Corden devotes at least one whole chapter to an aspect of the trade policy discussion that is very rarely addressed in, say, the mainstream business press: trade policy can affect internal income distribution, and this has throughout history been a major source of the political momentum for protectionist policies. This explains why the domestic politics of protectionism and free trade can be so heated, and are really often independent of arguments about the effect of trade policy on the economy as a whole, which, it must be said, few people realize they have a real stake in.

Corden’s examples involve the creation of fledgling industries under the conditions of war, which often cut off foreign supplies. When the war ends, those businesses that flourished during war exert political pressure to protect themselves from erosion from market forces. “Thus the Napoleonic Wars cut off supplies of corn (wheat) to Britain from the Continent and led to expansion of acreage and higher prices of corn. When the war was over, the Corn Law of 1815 was designed to maintain prices, with an import prohibition as long as the domestic price was below a certain level.” It goes almost without saying that this served the interests of a section of the community, the domestic corn farmers, and not of others. This is what Corden means by an “income distribution effect”.

“Any history book will show that these income distribution effects are the very stuff of politics. The great free trade versus protection controversies of the nineteenth century in Great Britain and in the United States brought out the conflicting interests of different sections of the community. It was the debate about the effects of the Corn Laws which really stimulated the beginnings of the modern theory of international trade.”

Extending this argument a bit, one might say that a major reason why economics gets such a bad rap as a social science is that nobody really cares about Pareto optimality except for those sections of the economy that are well served by a policy that can be justified as being Pareto optimal (in practice, this would seem to be correlated with how much somebody has invested in mutual funds, as these track economic growth). The “stuff of politics” is people using political institutions to change their income outcomes, and the potential for this makes trade policy a very divisive topic.

Implication for future research:

The two key takeaways for trade policy in cybersecurity are:

1) The trade policy discussion need not remain within the narrow frame of free trade versus protectionism, but rather a more nuanced set of policy analysis tools should be brought to bear on the problem, and

2) An outcome of these policy analyses should be the identification not just of total effects on the economy, or security posture, or what have you, but on the particular effects on different sections of the economy and population.


Corden, W. Max. “Trade policy and economic welfare.” OUP Catalogue (1997).

Meade, James Edward. Trade and welfare. Vol. 2. Oxford University Press, 1955.

How the Internet changed everything: a grand theory of AI, etc.

I have read many a think piece and critical take about AI, the Internet, and so on. I offer a new theory of What Happened, the best I can come up with based on my research and observations to date.

Consider this article, “The death of Don Draper”, as a story that represents the changes that occur more broadly. In this story, advertising was once a creative field that any company with capital could hire out to increase their chances of getting noticed and purchased, albeit in a noisy way. Because everything was very uncertain, those that could afford it blew a lot of money on it (“Half of advertising is useless; the problem is knowing which half”).

A similar story could be told about access to the news–dominated by big budgets that hid quality–and political candidates–whose activities were largely not exposed to scrutiny and could follow a similarly noisy pattern of hype and success.

Then along came the Internet and targeted advertising, which did a number of things:

  • It reduced search costs for people looking for particular products, because Google searches the web and Amazon indexes all the products (and because of lots of smaller versions of Google and Amazon).
  • It reduced the uncertainty of advertising effectiveness because it allowed for fine-grained measurement of conversion metrics. This reduced the search costs of producers to advertisers, and from advertisers to audiences.
  • It reduced the search costs of people finding alternative media and political interest groups, leading to a reorganization of culture. The media and cultural landscape could more precisely reflect the exogenous factors of social difference.
  • It reduced the cost of finding people based on their wealth, social influence, and so on, implicitly creating a kind of ‘social credit system’ distributed across various web services. (Gandy, 1993; Fourcade and Healy, 2016)

What happens when you reduce search costs in markets? Robert Jensen’s (2007) study of the introduction of mobile phones to fish markets in Kerala is illustrative here. Fish prices were very noisy due to bad communication until mobile phones were introduced. After that, the prices stabilized, owing to swifter communication between fishermen and markets. Suddenly able to anticipate prices rather than being subject to their vagaries, fishermen could choose to go to the market that would give them the best price.

Reducing search costs makes markets more efficient and larger. In doing so, it increases inequality: a lot of lower-quality goods and services can survive in a noisy economy, but when consumers are better informed and more efficient at searching, they cut out the less useful services. They can then standardize on “the best” option available, which can be produced with economies of scale. So inefficient, noisy parts of the economy were squeezed out and the surplus amassed in the hands of a few big intermediaries, whom we now see as Big Tech leveraging AI.
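A toy model of this squeeze (an assumed, illustrative setup, not Jensen’s design): let each consumer sample k sellers at random and buy from the best one sampled. Raising k stands in for cheaper search; the top sellers’ share of sales grows.

```python
# How cheaper search concentrates a market: consumers who can compare
# more sellers converge on the best ones, squeezing out the rest.
import random
from collections import Counter

random.seed(0)
sellers = list(range(100))  # seller i has quality i; seller 99 is the best

def top10_share(k: int, consumers: int = 10_000) -> float:
    """Share of sales captured by the 10 best sellers when each
    consumer samples k sellers and buys from the best sampled."""
    sales = Counter(max(random.sample(sellers, k)) for _ in range(consumers))
    return sum(sales[s] for s in range(90, 100)) / consumers

for k in [1, 3, 10, 30]:
    print(k, top10_share(k))  # rises steadily with k
```

With k = 1 (search is prohibitively costly) the top decile gets roughly its proportional 10% of sales; by k = 30 it captures nearly everything, even though seller quality never changed.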

Is AI an appropriate term? I have always liked this definition of AI: “Anything that humans still do better than computers.” Most recently I’ve seen this restated in an interview with Andrew Moore, quoted by Zachary Lipton:

Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.

Tech platforms use computation to dramatically reduce search costs. “Searching” for people, products, and information is something that used to require human intelligence. Now it is assisted by computers. And whether or not the average user knows what they are doing when they search (Mulligan and Griffin, 2018), as a commercial function, the panoply of search engines, recommendation systems, and auctions that occupy the central places in the information economy outperform human intelligence largely by virtue of having access to more data–a broader perspective–than any individual human could ever have.

The comparison between the Google search engine and a human’s intelligence is therefore ill-posed. The kinds of functions tech platforms are performing are things that have only ever been solved by human organizations, especially bureaucratic ones. And while the digital user interfaces of these services hide the people “inside” the machines, we know that of course there’s an enormous amount of ongoing human labor involved in the creation and maintenance of any successful “AI” that’s in production.

In conclusion, the Internet changed everything for a mundane reason that could have been predicted from neoclassical economic theory. It reduced search costs, creating economic efficiency and inequality, by allowing for new kinds of organizations based on broad digital connectivity. “AI” is a distraction from these accomplishments, as is most “critical” reaction to these developments, which does not do justice to the facts of the matter: by taking up a humanistic lens, it tends not to address how changes to individual humans’ decisions and experience are due to large-scale aggregate processes and strategic behaviors by businesses.


Gandy Jr, Oscar H. The Panoptic Sort: A Political Economy of Personal Information. Westview Press, 1993.

Fourcade, Marion, and Kieran Healy. “Seeing like a market.” Socio-Economic Review 15.1 (2016): 9-29.

Jensen, Robert. “The digital provide: Information (technology), market performance, and welfare in the South Indian fisheries sector.” The quarterly journal of economics 122.3 (2007): 879-924.

Mulligan, Deirdre K. and Griffin, Daniel S. “Rescripting Search to Respect the Right to Truth.” 2 GEO. L. TECH. REV. 557 (2018)

Omi and Winant on economic theories of race

Speaking of economics and race, Chapter 2 of Omi and Winant (2014), titled “Class”, is about economic theories of race. These are my notes on it.

Throughout this chapter, Omi and Winant seem preoccupied with whether and to what extent economic theories of race fall on the left, center, or right within the political spectrum. This is despite their admission that there is no absolute connection between the variety of theories and political orientation, only general tendencies. One presumes when reading it that they are allowing the reader to find themselves within that political alignment and filter their analysis accordingly. I will as much as possible leave out these cues, because my intention in writing these blog posts is to encourage the reader to make an independent, informed judgment based on the complexity the theories reveal, as opposed to just finding ideological cannon fodder. I claim this idealistic stance as my privilege as an obscure blogger with no real intention of ever being read.

Omi and Winant devote this chapter to theories of race that attempt to more or less reduce the phenomenon of race to economic phenomena. They outline three varieties of class paradigms for race:

  • Market relations theories. These tend to presuppose some kind of theory of market efficiency as an ideal.
  • Stratification theories. These are vaguely Weberian, based on classes as ‘systems of distribution’.
  • Product/labor based theories. These are Marxist theories about conflicts over social relations of production.

For market relations theories, markets are efficient and racial discrimination and inequality are not, so the theory’s explanandum is which market problems are leading to the continuation of racial inequalities and discrimination. There are a few theories on the table:

  • Irrational prejudice. This theory says that people are racially prejudiced for some stubborn reason and so “limited and judicious state interventionism” is on the table. This was the theory of Chicago economist Gary Becker, who is not to be confused with the Chicago sociologist Howard Becker, whose intellectual contributions were totally different. Racial prejudice unnecessarily drives up labor costs and so eventually the smart money will become unprejudiced.
  • Monopolistic practices. The idea here is that society is structured in the interest of whites, who monopolize certain institutions and can collect rents from their control of resources. Jobs, union membership, favorably located housing, etc. are all tied up in this concept of race. Extra-market activity like violence is used to maintain these monopolies. This theory, Omi and Winant point out, is simpatico with white privilege theories, as well as nation-based analyses of race (cf. colonialism).
  • Disruptive state practices. This view sees class/race inequality as the result of state action of some kind. There’s a laissez-faire critique which argues that minimum wage and other labor laws, as well as affirmative action, entrench race and prevent the market from evening things out. Removing these interventions would benefit both capital owners and people of color, according to this theory. There’s a parallel neo-Marxist theory that says something similar, interestingly enough.
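
Becker’s “irrational prejudice” argument can be made concrete with a toy calculation (the numbers, the `perceived_cost` function, and the two-group setup are all my own illustrative assumptions, not drawn from Becker or from Omi and Winant): an employer with a “taste for discrimination” acts as if disfavored workers cost more than their wage, hires from the favored group at a higher real wage, and so earns less per worker than an unprejudiced competitor.

```python
# Toy sketch of Becker's taste-based discrimination argument.
# All numbers are made up for illustration; workers in both groups
# are assumed equally productive.
productivity = 100.0     # value each worker produces
market_wage_a = 90.0     # wage for the favored group A
market_wage_b = 80.0     # discrimination depresses group B's wage

def perceived_cost(wage, taste):
    """A prejudiced employer acts as if labor costs wage * (1 + taste)."""
    return wage * (1 + taste)

# A prejudiced firm with taste d = 0.2 perceives group-B labor as
# costing 96, more than group A's wage of 90, so it hires from group A.
assert perceived_cost(market_wage_b, 0.2) > market_wage_a
prejudiced_profit = productivity - market_wage_a      # 10 per worker

# An unprejudiced firm (d = 0) hires group B at the lower market wage.
unprejudiced_profit = productivity - market_wage_b    # 20 per worker

# This profit gap is what drives the prediction that "the smart money
# will become unprejudiced": unprejudiced firms outcompete the rest,
# bidding group-B wages up toward parity.
assert unprejudiced_profit > prejudiced_profit
```

The sketch also shows the theory’s weak point that Omi and Winant press on: if the profit gap is real, something extra-economic has to explain why discrimination persists anyway.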

It must be noted that in the history of the United States, especially before the Civil Rights era, there absolutely was race-based state intervention on a massive scale and this was absolutely part of the social construction of race. So there hasn’t been a lot of time to test out the theory that market equilibrium without racialized state policies results in racial equality.

Omi and Winant begin to explicate their critique of “colorblind” theories in this chapter. They characterize “colorblind” theories as individualistic in principle, and opposed to the idea of “equality of result.” This is the familiar disparate treatment vs. disparate impact dichotomy from the interpretation of nondiscrimination law. I’m now concerned that this, which appears to be the crux of the problem of addressing contests over racial equality between the center and the left, will not be resolved even after O&W’s explication of it.

Stratification theory is about the distribution of resources, though understood in a broader sense than in a narrow market-based theory. Resources include social network ties, elite recruitment, and social mobility. This is the kind of theory of race a symbolic interactionist sociologist of class can get behind. Or a political scientist: the relationship between the elites and the masses, as well as the dynamics of authority systems, are all part of this theory, according to Omi and Winant. One gets the sense that of the class-based theories, this nuanced and nonreductivist one is favored by the authors … except for the fascinating critique that these theories will position race vs. class as two dimensions of inequality, reifying them in their analysis, whereas “In experiential terms, of course, inequality is not differentiated by race or class.”

The phenomenon that there is a measurable difference in “life chances” between races in the United States is explored by two theorists to whom O&W give ample credit: William J. Wilson and Douglas Massey.

Wilson’s major work in 1978, The Declining Significance of Race, tells a long story of race after the Civil War and urbanization that sounds basically correct to me. It culminates with the observation that there are now elite and middle-class black people in the United States due to the uneven topology of reforms but that ‘the massive black “underclass” was relegated to permanent marginality’. He argued that race was no longer a significant linkage between these two classes, though Omi and Winant criticize this view, arguing that there is fragility to the middle-class status for blacks because of public sector job losses. His view that class divides have superseded racial divides is his most controversial claim and so perhaps what he is known best for. He advocated for a transracial alliance within the Democratic party to contest the ‘racial reaction’ to Civil Rights, which at this point was well underway with Nixon’s “southern strategy”. The political cleavages along lines of partisan racial alliance are familiar to us in the United States today. Perhaps little has changed.

He called for state policies to counteract class cleavages, such as day care services for low-income single mothers. These calls “went nowhere” because Democrats were unwilling to face Republican arguments against “giveaways” to “welfare queens”. Despite this, Omi and Winant believe that Wilson’s views converge with neoconservative ones because he doesn’t favor public sector jobs as a solution to racial inequality; more recently, he has become a “culture of poverty” theorist (because globalization reduces the need for black labor in the U.S.) and believes in race-neutral policies to overcome urban poverty. The relationship between poverty and race is incidental to Wilson, which I suppose makes him “colorblind” in O&W’s analysis.

Massey’s work, which is also significantly reviewed in this chapter, deals with immigration and Latin@s. There’s a lot there, so I’ll cut to the critique of his recent book, Categorically Unequal (2008), in which Massey unites his theories of anti-black and anti-brown racism into a comprehensive theory of racial stratification based on ingrained, intrinsic, biological processes of prejudice. Naturally, to Omi and Winant, the view that there’s something biological going on is “problematic”. They (being quite mainstream, really) see this as tied to the implicit bias literature, but think there’s a big difference between implicit bias due to socialization and a more permanent hindbrain perversity. This is apparently taken up again in their Chapter 4.

Omi and Winant’s final comment is that these stratification theories deny agency and can’t explain how “egalitarian or social justice-oriented transformations could ever occur, in the past, present, or future.” Which is, I suppose, bleak to the anti-racist activists Omi and Winant are implicitly aligned with. Which does raise the possibility that what O&W are really up to in advocating a hard line on the social construction of race is to keep the possibility of egalitarian transformation alive. It had not occurred to me until just now that their sensitivity to the idea that implicit bias may be socially trained vs. being a more basic and inescapable part of psychology, a sensitivity which is mirrored elsewhere in society, is due to this concern for the possibility of, and hope for, equality.

The last set of economic theories considered in this chapter are class-conflict theories, which are rooted in a Marxist conception of history as reducible to labor-production relations and therefore class conflict. There are two different kinds of Marxist theory of race. There are labor market segmentation theories, led by Michael Reich, a labor economist at Berkeley. According to this research, when the working class unifies across racial lines, it increases its bargaining power and so can get better wages in its negotiations with capital. So the capitalist in this theory may want to encourage racial political divisions even if they harbor no racial prejudices themselves. “Workers of the world unite!” is the message of these theories. An alternative view is split labor market theory, which argues that under economic pressure the white working class would rather throw other races under the bus than compete with them economically. Political mobilization for a racially homogenous, higher paid working class is then contested by both capitalists and lower paid minority workers.
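
The segmentation argument can be put as toy arithmetic (a sketch under my own assumptions: the numbers and the linear `wage` rule are invented for illustration, not taken from Reich’s actual models): when the employer can bargain with racial fractions of the workforce separately, each fraction can be played against the other and wages fall toward the workers’ outside option; bargaining as one unit, the workforce captures a larger share of the surplus.

```python
# Toy illustration of the labor-market-segmentation argument.
# The bargaining-power shares and all dollar figures are assumptions.
surplus_per_worker = 100.0   # value produced above the outside option
outside_option = 40.0        # what a worker could earn elsewhere

def wage(bargaining_power):
    """Workers capture a share of the surplus proportional to their power."""
    return outside_option + bargaining_power * surplus_per_worker

# Divided along racial lines, each group is replaceable by the other,
# so each bargains with little leverage.
divided_wage = wage(bargaining_power=0.1)

# United, the workforce cannot be played against itself and wins
# a much larger share of the surplus.
united_wage = wage(bargaining_power=0.5)

# The employer's incentive to foster racial division is the wage bill saved:
employer_gain_from_division = united_wage - divided_wage
assert united_wage > divided_wage
```

On this arithmetic the capitalist profits from division even with no racial prejudice at all, which is exactly the segmentation theorists’ point; the split labor market theory instead locates the gain with the higher-paid white workers.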


Omi and Winant respect the contributions of these theories but think that trying to reduce race to economic relations ultimately fails. This is especially true for the market theorists, who always wind up introducing race as a non-economic, exogenous variable to account for inequalities in the market.

The stratification theories are perhaps more realistic and complex.

I’m most surprised at how the class-conflict based theories are reflected in what for me are the major lenses into the zeitgeist of contemporary U.S. politics. This may be because I’m very disproportionately surrounded by Marxist-influenced intellectuals. But it is hard to miss the narrative that the white working class has rejected the alliance between neoliberal capital and low-wage immigrant and minority labor. Indeed, it is arguably this latter alliance that Nancy Fraser has called progressive neoliberalism. This conflict accords with the split labor market theory. Fraser and other hopeful socialist types argue that a triumph over identity differences is necessary, because racial conflicts within the working class play into the hands of capitalists, not white workers. It is very odd that this ideological question is not more settled empirically. It may be that the whole framing is perniciously oversimplified, and that really you have to talk about things in a more nuanced way to make real headway.

Unless of course there isn’t any such real hope. This was an interesting part of the stratification theory: the explanation that included an absence of agency. I used to study lots and lots of philosophy, and in philosophy it’s a permissible form of argument to say, “This line of reasoning, if followed to its conclusion, leads to an appalling and untenable conclusion, one that could never be philosophically satisfying. For that reason, we reject it and consider a premise to be false.” In other words, in philosophy you are allowed to be motivated by the fact that a philosophical stance is life-negating or self-defeating in some way. I wonder if that is true of sociology of race. I also wonder whether bleak conclusions are necessary even if you deny the agency of racial minorities in the United States to liberate themselves under their own steam. Now there’s globalization, and earlier patterns of race may well be altered by forces outside the country. This is another theme in contemporary political discourse.

Once again Omi and Winant have raised the specter of “colorblind” policies without directly critiquing them. The question seems to boil down to whether racial inequality is better mitigated by removing only those mechanisms that are explicitly racial, or by doing something more. If part of the mechanism is irrational prejudice due to some hindbrain tic, then there may be grounds for a systematic correction of that tic. But that would require a scientific conclusion about the psychology of race that identifies a systematic error. If the error is rather interpreting an empirical inequality due to racialized policies as an essentialized difference, then that can be partially corrected by reducing the empirical inequality in fact.

It is in fact because I’m interested in what kinds of algorithms would be beneficial interventions in the process of racial formation that I’m reading Omi and Winant so closely in the first place.