Category: politics

Reading O’Neil’s Weapons of Math Destruction

I probably should have already read Cathy O’Neil’s Weapons of Math Destruction. It was a blockbuster of the tech/algorithmic ethics discussion. It’s written by an accomplished mathematician, which I admire. I’ve also now seen O’Neil perform bluegrass music twice in New York City and think her band is great. At last I’ve found a copy and have started to dig in.

On the other hand, as is probably clear from other blog posts, I have a hard time swallowing a lot of the gloomy political work that puts the role of algorithms in society in such a negative light. I encounter it very frequently, and every time I feel that some misunderstanding must have happened; something seems off.

It’s very clear that O’Neil can’t be accused of mathophobia or of not understanding the complexity of the algorithms at play, which is an easy way to throw doubt on the arguments of some technology critics. Yet perhaps because it’s a popular book and not an academic work of Science and Technology Studies, I haven’t seen its arguments parsed through and analyzed in much depth.

This is a start. These are my notes on the introduction.

O’Neil describes the turning point in her career where she soured on math. After being an academic mathematician for some time, O’Neil went to work as a quantitative analyst for D.E. Shaw. She saw it as an opportunity to work in a global laboratory. But then the 2008 financial crisis made her see things differently.

The crash made it all too clear that mathematics, once my refuge, was not only deeply entangled in the world’s problems but also fueling many of them. The housing crisis, the collapse of major financial institutions, the rise of unemployment–all had been aided and abetted by mathematicians wielding magic formulas. What’s more, thanks to the extraordinary powers that I loved so much, math was able to combine with technology to multiply the chaos and misfortune, adding efficiency and scale to systems I now recognized as flawed.

O’Neil, Weapons of Math Destruction, p.2

As an independent reference on the causes of the 2008 financial crisis, which of course has been a hotly debated and disputed topic, I point to Sassen’s 2017 “Predatory Formations” article. Indeed, the systems that developed the sub-prime mortgage market were complex, opaque, and hard to regulate. Something went seriously wrong there.

But was it mathematics that was the problem? This is where I get hung up. I don’t understand the mindset that would attribute a crisis in the financial system to the use of abstract, logical, rigorous thinking. Consider the fact that there would not have been a financial crisis if there had not been a functional financial services system in the first place. Getting a mortgage and paying it off, and the systems that allow this to happen, all require mathematics to function. When these systems operate normally, they are taken for granted. When they suffer a crisis, when the system fails, the mathematics takes the blame. But a system can’t suffer a crisis if it didn’t start working rather well in the first place–otherwise, nobody would depend on it. Meanwhile, the regulatory reaction to the 2008 financial crisis required, of course, more mathematicians working to prevent the same thing from happening again.

So in this case (and I believe others) the question can’t be whether mathematics, but rather which mathematics. It is so sad to me that these two questions get conflated.

O’Neil goes on to describe a case where an algorithm results in a teacher losing her job for not adding enough value to her students one year. An analysis makes a good case that the reason her students’ scores did not go up is that in the previous year, the students’ scores were inflated by teachers cheating the system. This argument was not considered conclusive enough to change the administrative decision.

Do you see the paradox? An algorithm processes a slew of statistics and comes up with a probability that a certain person might be a bad hire, a risky borrower, a terrorist, or a miserable teacher. That probability is distilled into a score, which can turn someone’s life upside down. And yet when the person fights back, “suggestive” countervailing evidence simply won’t cut it. The case must be ironclad. The human victims of WMDs, we’ll see time and again, are held to a far higher standard of evidence than the algorithms themselves.

O’Neil, WMD, p.10

Now this is a fascinating point, and one that I don’t think has been taken up enough in the critical algorithms literature. It resonates with a point that came up earlier, that traditional collective human decision making is often driven by agreement on narratives, whereas automated decisions can be a qualitatively different kind of collective action because they can act on probabilistic judgments.

I have to wonder what O’Neil would argue the solution to this problem is. From her rhetoric, it seems like her recommendation must be to prevent automated systems from making probabilistic judgments. In other words, one could raise the evidentiary standard for algorithms so that it was equal to the standards that people use with each other.

That’s an interesting proposal. I’m not sure what the effects of it would be. I expect that the result would be lower expected values of whatever target was being optimized for, since the system would not be able to “take bets” below a certain level of confidence. One wonders if this would be a more or less arbitrary system.
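To make my hunch concrete, here is a toy simulation. It is entirely my own construction, with made-up ±1 payoffs and a uniform distribution of case probabilities, not anything from the book: a system allowed to act on any better-than-even bet accumulates more of its target than one held to a near-certainty evidentiary standard.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each case has a true probability p that acting on it
# pays off (+1 if it does, -1 if it doesn't); p is what the algorithm's
# score is estimating.
p = rng.uniform(0.0, 1.0, size=100_000)
payoff = np.where(rng.random(p.size) < p, 1.0, -1.0)

def average_payoff(threshold):
    """Act only on cases whose probability clears the evidentiary threshold."""
    act = p >= threshold
    return payoff[act].sum() / p.size  # averaged over all cases seen

low_bar = average_payoff(0.5)   # act on any better-than-even bet
high_bar = average_payoff(0.9)  # demand near-certainty
# low_bar comes out around 0.25 per case, high_bar around 0.09: the
# stricter standard forgoes all the moderately confident bets.
```

Whether forgoing those bets is good or bad depends, of course, on who bears the cost of the errors, which is precisely what is at issue.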

Sadly, in order to evaluate this proposal seriously, one would have to employ mathematics. Which is, in O’Neil’s rhetoric, a form of evil magic. So, perhaps it’s best not to try.

O’Neil attributes the problems of WMDs to the incentives of the data scientists building the systems. Maybe they know that their work affects people, especially the poor, in negative ways. But they don’t care.

But as a rule, the people running the WMDs don’t dwell on these errors. Their feedback is money, which is also their incentive. Their systems are engineered to gobble up more data and fine-tune their analytics so that more money will pour in. Investors, of course, feast on these returns and shower WMD companies with more money.

O’Neil, WMD, p.13

Calling out greed as the problem is effective and true in a lot of cases. I’ve argued myself that the real root of the technology ethics problem is capitalism: the way investors drive what products get made and deployed. This is a worthwhile point to make and one that doesn’t get made enough.

But the logical implications of this argument are off. Suppose it is true that “as a rule”, algorithms that do harm are made by people responding to the incentives of private capital. (IF harmful algorithm, THEN private capital created it.) That does not mean that there can’t be good algorithms as well, such as those created in the public sector. In other words, there are algorithms that are not WMDs.

So the insight here has to be that private capital investment corrupts the process of designing algorithms, making them harmful. One could easily make the case that private capital investment corrupts and makes harmful many things that are not algorithmic as well. For example, the historic trans-Atlantic slave trade was a terribly evil manifestation of capitalism. It did not, as far as I know, depend on modern day computer science.

Capitalism here looks to be the root of all evil. The fact that companies are using mathematics is merely incidental. And O’Neil should know that!

Here’s what I find so frustrating about this line of argument. Mathematical literacy is critical for understanding what’s going on with these systems and how to improve society. O’Neil certainly has this literacy. But there are many people who don’t have it. There is a power disparity there which is uncomfortable for everybody. But while O’Neil is admirably raising awareness about how these kinds of technical systems can and do go wrong, the single-minded focus and framing risk giving people the wrong idea that these intellectual tools are always bad or dangerous. That is not a solution to anything, in my view. Ignorance is never more ethical than education. But there is an enormous appetite among ignorant people for being told that it is so.


O’Neil, Cathy. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books, 2017.

Sassen, Saskia. “Predatory Formations Dressed in Wall Street Suits and Algorithmic Math.” Science, Technology and Society 22.1 (2017): 6-20.


State regulation and/or corporate self-regulation

The dust from the recent debates over state regulation versus industrial self-regulation in the data/tech/AI industry appears to be settling. The smart money is on regulation and self-regulation being complementary for attaining the goal of an industry dominated by responsible actors. This trajectory leads to centralized corporate power that is led from the top; it is a Hamiltonian, not Jeffersonian, solution, in Pasquale’s terms.

I am personally not inclined towards this solution. But I have been convinced to see it differently after a conversation today about environmentally sustainable supply chains in food manufacturing. Nestle, for example, has been internally changing its sourcing practices toward more sustainable chocolate. It’s able to finance this change from its profits, and when it does change its internal policy, it operates on a scale that’s meaningful. It is able to make this transition in part because non-profits, NGOs, and farmers’ cooperatives laid the groundwork for sustainable sourcing external to the company. This lowers the barriers to having Nestle switch over to new sources–they have already been subsidized through philanthropy and international aid investments.

Supply chain decisions, ‘make-or-buy’ decisions, are the heart of transaction cost economics (TCE) and critical to the constitution of institutions in general. What this story about sustainable sourcing tells us is that the configuration of private, public, and civil society institutions is complex, and that there are prospects for agency and change in the reconfiguration of those relationships. This is no different in the ‘tech sector’.

However, this theory of economic and political change is not popular; it does not have broad intellectual or media appeal. Why?

One reason may be because while it is a critical part of social structure, much of the supply chain is in the private sector, and hence is opaque. This is not a matter of transparency or interpretability of algorithms. This is about the fact that private institutions, by virtue of being ‘private’, do not have to report everything that they do and, probably, shouldn’t. But since so much of what is done by the massive private sector is of public import, there’s a danger of the privatization of public functions.

Another reason why this view of political change through the internal policy-making of enormous private corporations is unpopular is that it leaves decision-making up to a very small number of people–the elite managers of those corporations. The real disparity of power involved in private corporate governance means that the popular attitude towards that governance is, more often than not, irrelevant. Even more than political elites, corporate elites are not accountable to a constituency. They are accountable, I suppose, to their shareholders, who have material interests disconnected from political will.

This disconnected shareholder will is one of the main reasons why I’m skeptical about the idea that large corporations and their internal policies are where we should place our hopes for moral leadership. But perhaps what I’m missing is the appropriate intellectual framework for how this will is shaped and what drives these kinds of corporate decisions. I still think TCE might provide insights that I’ve been missing. But I am on the lookout for other sources.

“the privatization of public functions”

An emerging theme from the conference on Trade Secrets and Algorithmic Systems was that legal scholars have become concerned about the privatization of public functions. For example, the use of proprietary risk assessment tools instead of the discretion of judges who are supposed to be publicly accountable is a problem. More generally, use of “trade secrecy” in court settings to prevent inquiry into software systems is bogus and moves more societal control into the realm of private ordering.

Many remedies were proposed. Most involved some kind of disclosure and audit to experts. The most extreme form of disclosure is making the software and, where it’s a matter of public record, training data publicly available.

It is striking to me to be encountering the call for government use of open source systems because…this is not a new issue. The conversation about federal use of open source software was alive and well over five years ago. Then, the arguments were about vendor lock-in; now, they are about accountability of AI. But the essential problem of whether core governing logic should be available to public scrutiny, and the effects of its privatization, have been the same.

If we are concerned with the reliability of a closed and large-scale decision-making process of any kind, we are dealing with problems of credibility, opacity, and complexity. The prospects of an efficient market for these kinds of systems are dim. These market conditions are the conditions of sustainability of open source infrastructure. Failures in sustainability are manifest as software vulnerabilities, which are one of the key reasons why governments are warned against OSS now, though the process of measurement and evaluation of OSS software vulnerability versus proprietary vulnerabilities is methodologically highly fraught.

bodies and liberal publics in the 20th century and today

I finally figured something out, philosophically, that has escaped me for a long time. I feel a little ashamed that it’s taken me so long to get there, since it’s something I’ve been told in one way or another many times before.

Here is the setup: liberalism is justified by universal equivalence between people. This is based in the Enlightenment idea that all people have something in common that makes them part of the same moral order. Recognizing this commonality is an accomplishment of reason and education. Whether this shows up in Habermasian discourse ethics, according to which people may not reason about politics from their personal individual situation, or in the Rawlsian ‘veil of ignorance’, in which moral precepts are intuitively defended under the presumption that one does not know who or where one will be, liberal ideals always require that people leave something out, something that is particular to them. What gets left out is people’s bodies–meaning both their physical characteristics and more broadly their place in lived history. Liberalism was in many ways a challenge to a moral order explicitly based on the body, one that took ancestry and heredity very seriously. So much of the aristocratic regime was about birthright and, literally, “good breeding”. The bourgeois class, relatively self-made, used liberalism to level the moral playing field with the aristocrats.

The Enlightenment was followed by a period of severe theological and scientific racism that was obsessed with establishing differences between people based on their bodies. Institutions that were internally based on liberalism could then subjugate others, by creating an Other that was outside the moral order. Equivalently, sexism too.
Social Darwinism was a threat to liberalism because it threatened to bring back a much older notion of aristocracy. In WWII, the Nazis rallied behind such an ideology and were defeated in the West by a liberal alliance, which then established the liberal international order.

I’ve got to leave out the Cold War and Communism here for a minute, sorry.

Late modern challenges to the liberal ethos gained prominence in activist circles and the American academy during and following the Civil Rights Movement. These were and continue to be challenges because they were trying to bring bodies back into the conversation. The problem is that a rules-based order that is premised on the erasure of differences in bodies is going to be unable to deal with the political tensions that precisely do come from those bodily differences. Because the moral order of the rules was blind to those differences, the rules did not govern them. For many people, that’s an inadequate circumstance.

So here’s where things get murky for me. In recent years, you have had a tension between the liberal center and the progressive left. The progressive left reasserts the political importance of the body (“Black Lives Matter”), and assertions of liberal commonality (“All Lives Matter”) are first “pushed” to the right, but then bump into white supremacy, which is also a reassertion of the political importance of the body, on the far right. It’s worth mentioning Piketty here, I think, because to some extent he also exposed how, under liberal regimes, the body has secretly been the organizing principle of wealth through the inheritance of private property.

So what has been undone is the sense, necessary for liberalism, that there is something that everybody has in common which is the basis for moral order. Now everybody is talking about their bodily differences.

That is on the one hand good because people do have bodily differences and those differences are definitely important. But it is bad because if everybody is questioning the moral order it’s hard to say that there really is one. We have today, I submit, a political nihilism crisis due to our inability to philosophically imagine a moral order that accounts for bodily difference.

This is about the Internet too!

Under liberalism, you had an idea that a public was a place people could come to agree on the rules. Some people thought that the Internet would become a gigantic public where everybody could get together and discuss the rules. Instead what happened was that the Internet became a place where everybody could discuss each other’s bodies. People with similar bodies could form counterpublics and realize their shared interests as body-classes. (This piece by David Weinberger critiquing the idea of an ‘echo chamber’ is inspiring.) These body-based counterpublics each form their own internal moral order, whose purpose is to mobilize their body-interests against other kinds of bodies. I’m talking about both black lives matter and white supremacists here, radical feminists and MRA’s. They are all buffeting liberalism with their body interests.

I can’t say whether this is “good” or “bad” because the moral order is in flux. There is apparently no such thing as neutrality in a world of pervasive body agonism. That may be its finest criticism: body agonism is politically unstable. Body agonism leads to body anarchy.

I’ll conclude with two points. The first is that the Enlightenment view of people having something in common (their personhood, their rationality, etc.) which put them in the same moral order was an intellectual and institutional accomplishment. People do not naturally get outside themselves and put themselves in other people’s shoes; they have to be educated to do it. Perhaps there is a kernel of truth here about what moral education is that transcends liberal education. We have to ask whether today’s body agonism is an enlightened state relative to moral liberalism because it acknowledges a previously hidden descriptive reality of body difference and is no longer so naive, or if body agonism is a kind of ethical regress because it undoes moral education, reducing us to a more selfish state of nature, of body conflict, albeit in a world full of institutions based on something else entirely.

The second point is that there is an alternative to liberal order which appears to be alive and well in many places. This is an order that is not based on individual attitudes for legitimacy, but rather is more about the endurance of institutions for their own sake. I’m referring of course to authoritarianism. Without the pretense of individual equality, authoritarian regimes can focus on maintaining power on their own terms. Authoritarian regimes do not need to govern through moral order. U.S. foreign policy used to be based on the idea that such amoral governance would be shunned. But if body agonism has replaced the U.S. international moral order, we no longer have an ideology to export or enforce abroad.

General intelligence, social privilege, and causal inference from factor analysis

I came upon this excellent essay by Cosma Shalizi about how factor analysis has been spuriously used to support the scientific theory of General Intelligence (i.e., IQ). Shalizi, if you don’t know, is one of the best statisticians around. He writes really well and isn’t afraid to point out major blunders in things. He’s one of my favorite academics, and I don’t think I’m alone in this assessment.

First, a motive: Shalizi writes this essay because he thinks the scientific theory of General Intelligence, or a g factor that is some real property of the mind, is wrong. This theory is famous because (a) a lot of people DO believe in IQ as a real feature of the mind, and (b) a significant percentage of these people believe that IQ is hereditary and correlated with race, and (c) the ideas in (b) are used to justify pernicious and unjust social policy. Shalizi, being a principled statistician, appears to take scientific objection to (a) independently of his objection to (c), and argues persuasively that we can reject (a). How?

Shalizi’s point is that the general intelligence factor g is a latent variable that was supposedly discovered using a factor analysis of several different intelligence tests that were supposed to be independent of each other. You can take the data from these data sets and do a dimensionality reduction (that’s what factor analysis is) and get something that looks like a single factor, just as you can take a set of cars and do a dimensionality reduction and get something that looks like a single factor, “size”. The problem is that “intelligence”, just like “size”, can also be a combination of many other factors that are only indirectly associated with each other (height, length, mass, mass of specific components independent of each other, etc.). Once you have many different independent factors combining into one single reduced “dimension” of analysis, you no longer have a coherent causal story of how your general latent variable caused the phenomenon. You have, effectively, correlation without demonstrated causation and, moreover, the correlation is a construct of your data analysis method, and so isn’t really even telling you what correlations normally tell you.

To put it another way: the fact that some people seem to be generally smarter than other people can be due to thousands of independent factors that happen to combine when people apply themselves to different kinds of tasks. If some people were NOT seeming generally smarter than others, that would allow you to reject the hypothesis that there was general intelligence. But the mere presence of the aggregate phenomenon does not prove the existence of a real latent variable. In fact, Shalizi goes on to say, when you do the right kinds of tests to see if there really is a latent factor of ‘general intelligence’, you find that there isn’t any. And so it’s just the persistent and possibly motivated interpretation of the observational data that allows the stubborn myth of general intelligence to continue.
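The argument can be illustrated with a small simulation in the spirit of Godfrey Thomson’s sampling model, which Shalizi discusses: give everyone many independent abilities, let each test draw on a random subset of them, and a dominant first factor appears even though no general factor exists by construction. (This sketch is my own, not Shalizi’s code.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests = 2000, 500, 8

# Each person has 500 *independent* abilities -- by construction,
# there is no general intelligence factor here.
abilities = rng.normal(size=(n_people, n_abilities))

# Each test samples a random ~half of the abilities (Thomson's model).
scores = np.empty((n_people, n_tests))
for j in range(n_tests):
    drawn = rng.random(n_abilities) < 0.5
    scores[:, j] = abilities[:, drawn].sum(axis=1)

# The tests all correlate positively, because their ability subsets overlap...
corr = np.corrcoef(scores, rowvar=False)

# ...so the leading eigenvalue (the apparent "g") dominates anyway.
eigvals = np.linalg.eigvalsh(corr)[::-1]
share = eigvals[0] / eigvals.sum()
# The pairwise correlations come out around 0.5, and the first factor
# accounts for roughly half the variance -- with no latent g in sight.
```

The positive correlations are real; it is the inference from them to a single causal variable that is spurious.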

Are you following so far? If you are, it’s likely because you were already skeptical of IQ and its racial correlates to begin with. Now I’m going to switch it up though…

It is fairly common for educated people in the United States (for example) to talk about the “privilege” of social groups. White privilege, male privilege–don’t tell me you haven’t at least heard of this stuff before; it is literally everywhere in the center-left news. Privilege here is considered to be a general factor that adheres in certain social groups. It is reinforced by all manner of social conditioning, especially through implicit bias in individual decision-making. This bias is so powerful it extends not just to cases of direct discrimination but also to cases where discrimination happens in a mediated way, for example through technical design. The evidence for these kinds of social privileging effects is obvious: we see inequality everywhere, and we can see who is more powerful and benefited by the status quo and who isn’t.

You see where this is going now. I have the momentum. I can’t stop. Here it goes: Maybe this whole story about social privilege is as spuriously supported as the story about general intelligence? What if both narratives were over-interpretations of data that serve a political purpose, but which are not in fact based on sound causal inference techniques?

How could this be? Well, we might gather a lot of data about people: wealth, status, neighborhood, lifespan, etc. And then we could run a dimensionality reduction/factor analysis and get a significant factor that we could name “privilege” or “power”. Potentially that’s a single, real, latent variable. But also potentially it’s hundreds of independent factors spuriously combined into one. It would probably, if I had to bet on it, wind up looking a lot like the factor for “general intelligence”, which plays into the whole controversy about whether and how privilege and intelligence get confused. You must have heard the debates about, say, representation in the technical (or other high-status, high-paying) work force? One side says the smart people get hired; the other side says it’s the privileged (white male) people that get hired. Some jerk suggests that maybe the white males are smarter, and he gets fired. It’s a mess.

I’m offering you a pill right now. It’s not the red pill. It’s not the blue pill. It’s some other colored pill. Green?

There is no such thing as either general intelligence or group based social privilege. Each of these are the results of sloppy data compression over thousands of factors with a loose and subtle correlational structure. The reason why patterns of social behavior that we see are so robust against interventions is that each intervention can work against only one or two of these thousands of factors at a time. Discovering the real causal structure here is hard partly because the effect sizes are very small. Anybody with a simple explanation, especially a politically convenient explanation, is lying to you but also probably lying to themselves. We live in a complex world that resists our understanding and our actions to change it, though it can be better understood and changed through sound statistics. Most people aren’t bothering to do this, and that’s why the world is so dumb right now.

politics of business

This post is an attempt to articulate something that’s on the tip of my tongue, so bear with me.

Fraser has made the point that the politics of recognition and the politics of distribution are not the same. In her view, the conflict in the U.S. over recognition (i.e., of women, racial minorities, LGBTQ people, etc. on the progressive side, and of the straight white male ‘majority’ on the reactionary side) has overshadowed the politics of distribution, which has been at a steady neoliberal status quo for some time.

First, it’s worth pointing out that in between these two political contests is a politics of representation, which may be more to the point. The claim here is that if a particular group is represented within a powerful organization–say, the government, or within a company with a lot of power such as a major financial institution or tech company–then that organization will use its power in a way that is responsive to the needs of the represented group.

Politics of representation are the link between recognition and distribution: the idea is that if “we” recognize a certain group, then through democratic or social processes members of that group will be lifted into positions of representative power, which then will lead to (re)distribution towards that group in the longer run.

I believe this is the implicit theory of social change at the heart of a lot of democratish movements today. It’s an interesting theory in part because it doesn’t seem to have any room for “good governance”, or broadly beneficial governance, or technocracy. There’s nothing deliberative about this form of democracy; it’s a tribal war-by-other-means. It is also not clear that this theory of social change based on demographic representation is any more effective at changing distributional outcomes than a pure politics of recognition, which we have reason to believe is ineffectual.

Who do we expect to have power over distributional outcomes in our (and probably other) democracies? Realistically, it’s corporations. Businesses comprise most of the economic activity; businesses have the profits needed to reinvest in lobbying power for the sake of economic capture. So maybe if what we’re interested in is politics of distribution, we should stop trying to parse out the politics of recognition, with its deep dark rabbit hole of identity politics and the historical injustice and Jungian archetypal conflicts over the implications of the long arc of sexual maturity. These conversations do not seem to be getting anyone anywhere! It is, perhaps, fake news: not because the contents are fake, but because the idea that these issues are new is fake. They are perhaps just a lot of old issues stirred to conflagration by the feedback loops between social and traditional media.

If we are interested in the politics of distribution, let’s talk about something else, something that we all know must be more relevant, when it comes down to it, than the politics of recognition. I’m talking about the politics of business.

We have a rather complex economy with many competing business interests. Let’s assume that one of the things these businesses compete over is regulatory capture–their ability to influence economic policy in their favor.

When academics talk about neoliberal economic policy, they are often talking about those policies that benefit the financial sector and big businesses. But these big businesses are not always in agreement.

Take, for example, the steel tariff proposed by the Trump administration. There is no blunter example of a policy that benefits some business interests–U.S. steelmakers–and not others–U.S. manufacturers of steel-based products.

It’s important from the perspective of electoral politics to recognize that the U.S. steelmakers are a particular set of people who live in particular voting districts with certain demographics. That’s because, probably, if I am a U.S. steelworker, I will vote in the interest of my industry. Just as if I am a U.S. based urban information worker at an Internet company, I will vote in the interest of my company, which in my case would mean supporting net neutrality. If I worked for AT&T, I would vote against net neutrality, which today means I would vote Republican.

It’s an interesting fact that AT&T employs a lot more people than Google and (I believe this is the case, though I don’t know where to look up the data) that they are much more geographically distributed than Google because, you know, wires and towers and such. Which means that AT&T employees will be drawn from more rural, less diverse areas, giving them an additional allegiance to Republican identity politics.

You must see what I’m getting at. Assume that the main driver of U.S. politics is not popular will (which nobody really believes, right?) and is in fact corporate interests (which basically everybody admits, right?). In that case the politics of recognition will not be determining anything; rather it will be a symptom, an epiphenomenon, of an underlying politics of business. Immigration of high-talent foreigners then becomes a proxy issue for the economic battle between coastal tech companies and, say, old energy companies which have a much less geographically mobile labor base. Nationalism, or multinationalism, becomes a function of trade relations rather than a driving economic force in its own right. (Hence, Russia remains an enemy of the U.S. largely because Putin paid off all its debt to the U.S. and doesn’t owe it any money, unlike many of its other allies around the world.)

I would very much like to devote myself to a better understanding of the politics of business because, as I’ve indicated, I think the politics of recognition have become a huge distraction.

Moral individualism and race (Barabas, Gilman, Deneen)

One of my favorite articles presented at the recent FAT* 2018 conference was Barabas et al.’s “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment” (link). To me, this was the correct response to the recent academic debate about the use of actuarial risk assessment in determining criminal bail and parole decisions. I had a position on this before the conference, which I drafted up here; my main frustration with the debate had been that it had gone unquestioned why bail and parole decisions are based on actuarial prediction of recidivism in the first place, given that rearrest rates are so contingent on social structural factors such as whether or not police are racist.

Barabas et al. point out that there’s an implicit theory of crime behind the use of actuarial risk assessments. In that theory of crime, there are individual “bad people” and “good people”. “Bad people” are more likely to commit crimes because of their individual nature, and the goal of the criminal policing system is to keep bad people from committing crimes by putting them in prison. This is the sort of theory that, even if it is a little bit true, is also deeply wrong, and so we should probably reassess the whole criminal justice system as a result. Even leaving aside the important issue of whether “recidivism” is interpreted as reoffense or rearrest rate, it is socially quite dangerous to see probability of offense as due to the specific individual moral character of a person. One reason why this is dangerous is that if the conditions for offense are correlated with the conditions for some sort of unjust desperation, then we risk falsely justifying an injustice with the idea that the bad things are only happening to bad people.

I’d like to juxtapose this position with a couple others that may on the surface appear to be in tension with it.

Nils Gilman’s new piece on “The Collapse of Racial Liberalism” is a helpful account of how we got where we are as an American polity. True to the title, Gilman’s point is that there was a centrist consensus on ‘racial liberalism’ that reached its apotheosis in the election of Obama and then collapsed under its own contradictions, getting us where we are today.

By racial liberalism, I mean the basic consensus that existed across the mainstream of both political parties since the 1970s, to the effect that, first, bigotry of any overt sort would not be tolerated, but second, that what was intolerable was only overt bigotry—in other words, white people’s definition of racism. Institutional or “structural” racism—that is, race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on—were not to be addressed. The core ethic of the racial liberal consensus was colorblind individualism.

Bill Clinton was good at toeing the line of racial liberalism, and Obama, as a black meritocratic elected president, was its culmination. But:

“Obama’s election marked at once the high point and the end of a particular historical cycle: a moment when the realization of a particular ideal reveals the limits of that ideal.”

The limit of the ideal is, of course, that all the things not addressed–“race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on”–matter, and result in, for example, innocent black guys getting shot disproportionately by police even when there is a black meritocratic sitting as president.

An interesting juxtaposition here is that in both cases discussed so far, we have a system that is reaching its obsolescence due to the contradictions of individualism. In the case of actuarial policing (as it is done today; I think a properly sociological version of actuarial policing could be great), there’s the problem of considering criminals as individuals whose crimes are symptoms of their individual moral character. The solution to crime is to ostracize and contain the criminals by, e.g., putting them in prison. In the case of racial liberalism, there’s the problem of considering bigotry a symptom of individual moral character. The solution to the bigotry is to ostracize and contain the bigots by teaching them that it is socially unacceptable to express bigotry and keeping the worst bigots out of respectable organizations.

Could it be that our broken theories of both crime and bigotry both have the same problem, which is the commitment to moral individualism, by which I mean the theory that it’s individual moral character that is the cause of and solution to these problems? If a case of individual crime and individual bigotry is the result of, instead of an individual moral failing, a collective action problem, what then?

I still haven’t looked carefully into Deneen’s argument (see notes here), but I’m intrigued that his point may be that the crisis of liberalism is, at its root, a crisis of individualism. Indeed, Kantian views of individual autonomy are really nice, but they have not stood the test of time; I’d say the combined works of Habermas, Foucault, and Bourdieu have each, from very different directions, developed Kantian ideas into a more sociological frame. And that’s just on the continental grand theory side of the equation. I have not followed up on what Anglophone liberal theory has been doing, but I suspect that it has been going the same way.

I am wary, as I always am, of giving too much credit to theory. I know, as somebody who has read altogether too much of it, what little use it actually is. However, the notion of political and social consensus is one that tangibly affects my life these days. For this reason, it’s a topic of great personal interest.

One last point, that’s intended as constructive. It’s been argued that the appeal of individualism is due in part to the methodological individualism of rational choice theory and neoclassical economic theory. Because we can’t model economic interactions on anything but an individualistic level, we can’t design mechanisms or institutions that treat individual activity as a function of social form. This is another good reason to take seriously computational modeling of social forms.
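To make the last point concrete, here is a toy sketch (my own illustration, not drawn from any of the works discussed): a Granovetter-style threshold model, in which whether an individual “acts” depends on the aggregate state of the group rather than on a fixed individual disposition. Two populations with nearly identical distributions of individual character can produce wildly different collective outcomes, which is exactly the kind of phenomenon an individualistic model misses.

```python
def cascade(thresholds):
    """Iterate until no new agents act. Each agent acts once the fraction
    of the group already acting meets its personal threshold, so individual
    behavior is a function of social form, not of character alone."""
    n = len(thresholds)
    acting = [t == 0 for t in thresholds]  # zero-threshold agents start acting
    changed = True
    while changed:
        changed = False
        frac = sum(acting) / n  # current social state
        for i, t in enumerate(thresholds):
            if not acting[i] and frac >= t:
                acting[i] = True
                changed = True
    return sum(acting)

# A uniform spread of thresholds tips over, one agent at a time,
# into a full cascade of all 100 agents...
print(cascade([i / 100 for i in range(100)]))  # 100
# ...while removing a single pivotal agent stalls the cascade at 1,
# even though the population is almost identical "person by person".
print(cascade([0.0] + [i / 100 for i in range(2, 101)]))  # 1
```

The design point is that nothing about any individual predicts the outcome; the collective result hinges on the shape of the whole distribution, which is the sense in which modeling “social forms” differs from methodological individualism.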


Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

Notes on Deneen, “Why Liberalism Failed”, Foreword

I’ve begun reading the recently published book Why Liberalism Failed (2018), by Patrick Deneen. It appears to be making some waves in the political theory commentary. The author claims that it was ten years in the making but was finished three weeks before the 2016 presidential election, which suggests that the argument within it is prescient.

I’m not far in yet.

There is an intriguing foreword from James Davison Hunter and John M. Owen IV, the editors. Their framing of the book is surprisingly continental:

  • They declare that liberalism has arrived at its “legitimacy crisis”, a Habermasian term.
  • They claim that the core contention of the book is a critique of the contradictions within Immanuel Kant’s view of individual autonomy.
  • They compare Deneen with other “radical” critics of liberalism, of which they name: Marx, the Frankfurt School, Foucault, Nietzsche, Schmitt, and the Catholic Church.

In search of a litmus-test-like clue as to where on the political spectrum the book falls, I’ve found this passage in the Foreword:

Deneen’s book is disruptive not only for the way it links social maladies to liberalism’s first principles, but also because it is difficult to categorize along our conventional left-right spectrum. Much of what he writes will cheer social democrats and anger free-market advocates; much else will hearten traditionalists and alienate social progressives.

Well, well, well. If we are to fit Deneen’s book into the conceptual 2-by-2 provided in Fraser’s recent work, it appears that Deneen’s political theory is a form of reactionary populism, rejecting progressive neoliberalism. In other words, the Foreword evinces that Deneen’s book is a high-brow political theory contribution that weighs in favor of the kind of politics that has heretofore been articulated only by intellectual pariahs.

The therapeutic ethos in progressive neoliberalism (Fraser and Furedi)

I’ve read two pieces recently that I found helpful in understanding today’s politics, especially today’s identity politics, in a larger context.

The first is Nancy Fraser’s “From Progressive Neoliberalism to Trump–and Beyond” (link). It portrays the present (American but also global) political moment as a “crisis of hegemony”, using Gramscian terms, for which the presidency of Donald Trump is a poster child. Its main contribution is to point out that the hegemony that’s been in crisis is a hegemony of progressive neoliberalism, which sounds like an oxymoron but, Fraser argues, isn’t.

Rather, Fraser explains a two-dimensional political spectrum: there are politics of distribution, and there are politics of recognition.

To these ideas of Gramsci, we must add one more. Every hegemonic bloc embodies a set of assumptions about what is just and right and what is not. Since at least the mid-twentieth century in the United States and Europe, capitalist hegemony has been forged by combining two different aspects of right and justice—one focused on distribution, the other on recognition. The distributive aspect conveys a view about how society should allocate divisible goods, especially income. This aspect speaks to the economic structure of society and, however obliquely, to its class divisions. The recognition aspect expresses a sense of how society should apportion respect and esteem, the moral marks of membership and belonging. Focused on the status order of society, this aspect refers to its status hierarchies.

Fraser’s argument is that neoliberalism is a politics of distribution–it’s about using the market to distribute goods. I’m just going to assume that anybody reading this has a working knowledge of what neoliberalism means; if you don’t, I recommend reading Fraser’s article about it. Progressivism is a politics of recognition that was advanced by the New Democrats. Part of its political potency has been its consistency with neoliberalism:

At the core of this ethos were ideals of “diversity,” women’s “empowerment,” and LGBTQ rights; post-racialism, multiculturalism, and environmentalism. These ideals were interpreted in a specific, limited way that was fully compatible with the Goldman Sachsification of the U.S. economy…. The progressive-neoliberal program for a just status order did not aim to abolish social hierarchy but to “diversify” it, “empowering” “talented” women, people of color, and sexual minorities to rise to the top. And that ideal was inherently class specific: geared to ensuring that “deserving” individuals from “underrepresented groups” could attain positions and pay on a par with the straight white men of their own class.

A less academic, more Wall Street Journal-reading member of the commentariat might be more comfortable with the terms “fiscal conservatism” and “social liberalism”. And indeed, Fraser’s argument seems mainly to be that the hegemony of the Obama era was fiscally conservative but socially liberal. In a sense, it was the true libertarians who were winning, which is an interesting take I hadn’t heard before.

The problem, from Fraser’s perspective, is that neoliberalism concentrates wealth and carries the seeds of its own revolution, allowing Trump to run on a combination of a reactionary politics of recognition (social conservatism) with a populist politics of distribution (economic liberalism: big spending and protectionism). He won, and then sold out to neoliberalism, giving us the currently prevailing combination of neoliberalism and reactionary social policy. Which, by the way, we would be calling neoconservatism if it were 15 years ago. Maybe it’s time to resuscitate this term.

Fraser thinks the world would be a better place if progressive populists could establish themselves as an effective counterhegemonic bloc.

The second piece I’ve read on this recently is Frank Furedi’s “The hidden history of identity politics” (link). Pairing Fraser with Furedi is perhaps unlikely because, to put it bluntly, Fraser is a feminist and Furedi, as far as I can tell from this one piece, isn’t. However, both are serious social historians and there’s a lot of overlap in the stories they tell. That is in itself interesting from a scholarly perspective of one trying to triangulate an accurate account of political history.

Furedi’s piece is about “identity politics” broadly, including both its right-wing and left-wing incarnations. So, we’re talking about what Fraser calls the politics of recognition here. On a first pass, Furedi’s point is that Enlightenment universalist values have been challenged by both right- and left-wing identity politics since the late-18th-century Romantic nationalist movements in Europe, which led to the World Wars and the Holocaust. Maybe, Furedi’s piece suggests, abandoning Enlightenment universalist values was a bad idea.

Although expressed through a radical rhetoric of liberation and empowerment, the shift towards identity politics was conservative in impulse. It was a sensibility that celebrated the particular and which regarded the aspiration for universal values with suspicion. Hence the politics of identity focused on the consciousness of the self and on how the self was perceived. Identity politics was, and continues to be, the politics of ‘it’s all about me’.

Strikingly, Furedi’s argument is that the left took the “cultural turn” into recognition politics essentially because of its inability to maintain a left-wing politics of redistribution, and that this happened in the 70’s. But this in turn undermined the cause of the economic left. Why? Because economic populism requires social solidarity, while identity politics is necessarily a politics of difference. Solidarity within an identity group can cause gains for that identity group, but at the expense of political gains that could be won with an even more unified popular political force.

The emergence of different identity-based groups during the 1970s mirrored the lowering of expectations on the part of the left. This new sensibility was most strikingly expressed by the so-called ‘cultural turn’ of the left. The focus on the politics of culture, on image and representation, distracted the left from its traditional interest in social solidarity. And the most significant feature of the cultural turn was its sacralisation of identity. The ideals of difference and diversity had displaced those of human solidarity.

So far, Furedi is in agreement with Fraser that hegemonic neoliberalism has been the status quo since the 70’s, and that the main political battles have been over identity recognition. Furedi’s point, which I find interesting, is that these battles over identity recognition undermine the cause of economic populism. In short, neoliberals and neocons can use identity to divide and conquer their shared political opponents and keep things as neo- as possible.

This is all rather old news, though it’s presented here in a nice schematic form.

Where Furedi’s piece gets interesting is where it draws out the subsequent movements in identity politics, which he describes as the shift from a politics of political and economic conditions to a politics first of victimhood and then of a specific therapeutic ethos.

The victimhood move grounded the politics of recognition in the authoritative status of the victim. While originally used for progressive purposes, this move was adopted outside the progressive movement as early as the 1980s.

A pervasive sense of victimisation was probably the most distinct cultural legacy of this era. The authority of the victim was ascendant. Sections of both the left and the right endorsed the legitimacy of the victim’s authoritative status. This meant that victimhood became an important cultural resource for identity construction. At times it seemed that everyone wanted to embrace the victim label. Competitive victimhood quickly led to attempts to create a hierarchy of victims. According to a study by an American sociologist, the different movements joined in an informal way to ‘generate a common mood of victimisation, moral indignation, and a self-righteous hostility against the common enemy – the white male’ (5). Not that the white male was excluded from the ambit of victimhood for long. In the 1980s, a new men’s movement emerged insisting that men, too, were an unrecognised and marginalised group of victims.

This is interesting in part because there’s a tendency today to see the “alt-right” of reactionary recognition politics as a very recent phenomenon. According to Furedi, it isn’t; it’s part of the history of identity politics in general. We just thought it was dead because, as Fraser argues, progressive neoliberalism had attained hegemony.

Buried deep in the piece is arguably Furedi’s most controversial and pointed argument, which is about the “therapeutic ethos” of identity politics since the 1970s, an ethos that resonates quite deeply today. The idea here is that principles from psychotherapy have become part of the repertoire of left-wing activism. A prescription against “blaming the victim” transformed into a prescription toward “believing the victim”, which in turn creates a culture where only those with lived experience of a human condition may speak with authority on it. This authority is ambiguous, because it is at once the moral authority of the victim and the authority one must give a therapeutic patient in describing their own experiences for the sake of their mental health.

The obligation to believe and not criticise individuals claiming victim identity is justified on therapeutic grounds. Criticism is said to constitute a form of psychological re-victimisation and therefore causes psychic wounding and mental harm. This therapeutically informed argument against the exercise of critical judgement and free speech regards criticism as an attack not just on views and opinions, but also on the person holding them. The result is censorious and illiberal. That is why in society, and especially on university campuses, it is often impossible to debate certain issues.

Furedi is concerned with how the therapeutic ethos in identity politics shuts down liberal discourse, which further erodes the social solidarity that would advance political populism. In therapy, your own individual self-satisfaction and validation is the most important thing. In the politics of solidarity, this is absolutely not the case. This is a subtle critique of Fraser’s argument that progressive populism is a potentially viable counterhegemonic bloc. We could imagine a synthetic point of view: progressive populism is viable, but only if progressives drop the therapeutic ethos. Or, to put it another way, if “[f]rom their standpoint, any criticism of the causes promoted by identitarians is a cultural crime”, then that criminalizes the kind of discourse that’s necessary for political solidarity. That serves to advantage the neoliberal or neoconservative agenda.

This is, Furedi points out, easier to see in light of history:

Outwardly, the latest version of identity politics – which is distinguished by a synthesis of victim consciousness and concern with therapeutic validation – appears to have little in common with its 19th-century predecessor. However, in one important respect it represents a continuation of the particularist outlook and epistemology of 19th-century identitarians. Both versions insist that only those who lived in and experienced the particular culture that underpins their identity can understand their reality. In this sense, identity provides a patent on who can have a say or a voice about matters pertaining to a particular culture.

While I think they do a lot to frame the present political conditions, I don’t agree with everything in either of these articles. There are a few points of tension which I wish I knew more about.

The first is the connection made in some media today between the therapeutic needs of society’s victims and economic distributional justice. Perhaps it’s the nexus of these two political flows that makes the topic of workplace harassment and culture, in its most symbolic forms, such a hot topic today. It is, in a sense, the quintessential progressive neoliberal problem, in that it aligns the politics of distribution with the politics of recognition while employing the therapeutic ethos. The argument goes: since market logic is fair (the neoliberal position), if there is unfair distribution it must be because the politics of recognition are unfair (progressivism). That’s because if there is inadequate recognition, then society’s victims will feel invalidated, preventing them from asserting themselves effectively in the workplace (therapeutic ethos). To put it another way, distributional inequality is represented as the consequence of a market externality: the psychological difficulty imposed by social and economic inequality. A progressive politics of recognition is then a therapeutic intervention designed to alleviate this psychological difficulty, thereby correcting the meritocratic market logic.

One valid reaction to this is: so what? Furedi and Fraser are both essentially card-carrying socialists. If you’re a card-carrying socialist (maybe because you have a universalist sense of distributional justice), then you might see the emphasis on workplace harassment as a distraction from a broader socialist agenda. But most people aren’t card-carrying socialist academics; most people go to work and would prefer not to be harassed.

The other thing I would like to know more about is to what extent the demands of the therapeutic ethos are a political rhetorical convenience and to what extent they are a matter of ground truth. The sweeping therapeutic progressive narrative outlined by Furedi, wherein vast swathes of society (i.e., all women, all people of color, maybe all conservatives in liberal-dominant institutions, etc.) are so structurally victimized that therapy-grade levels of validation are necessary for them to function unharmed in universities and workplaces, is truly a tough pill to swallow. On the other hand, a theory of justice that discounts the genuine therapeutic needs of half the population can hardly be described as a “universalist” one.

Is there a resolution to this epistemic and political crisis? If I had to drop everything and look for one, it would be in the clinical psychological literature. What I want to know is how grounded the therapeutic ethos is in (a) scientific clinical psychology, and (b) the epidemiology of mental illness. Is it the case that structural inequality is so traumatizing (either directly or indirectly) that the fragmentation of epistemic culture is necessary as a salve for it? Or is this a political fiction? I don’t know the answer.

managerialism, continued

I’ve begun preliminary skimmings of Enteman’s Managerialism. It is a dense work of analytic philosophy, thick with argument. Sporadic summaries may not do it justice. That said, the principle of this blog is that the bar for ‘publication’ is low.

According to its introduction, Enteman’s Managerialism is written by a philosophy professor (Willard Enteman) who kept finding that the “great thinkers”–Adam Smith, Karl Marx–and the theories espoused in their writing kept getting debunked by his students. Contemporary examples showed that, contrary to conventional wisdom, the United States was not a capitalist country whose only alternative was socialism. In his observation, the United States in 1993 was neither strictly speaking capitalist, nor was it socialist. There was a theoretical gap that needed to be filled.

One of the concepts reintroduced by Enteman is Robert Dahl’s concept of polyarchy, or “rule by many”. A polyarchy is neither a dictatorship nor a democracy, but rather a form of government in which many different people with different interests (though probably not everybody) are in charge. It represents some necessary but probably insufficient conditions for democracy.

This view of power seems evidently correct in most political units within the United States. Now I am wondering if I should be reading Dahl instead of Enteman. It appears that Dahl was mainly offering this political theory in contrast to a view that posited that political power was mainly held by a single dominant elite. In a polyarchy, power is held by many different kinds of elites in contest with each other. At its democratic best, these elites are responsive to citizen interests in a pluralistic way, and this works out despite the inability of most people to participate in government.

I certainly recommend the Wikipedia articles linked above. I find I’m sympathetic to this view, having come around to something like it myself but through the perhaps unlikely path of Bourdieu.

This still limits the discussion of political power in terms of the powers of particular people. Managerialism, if I’m reading it right, makes the case that individual power is not atomic but is due to organizational power. This makes sense; we can look at powerful individuals having an influence on government, but a more useful lens could look to powerful companies and civil society organizations, because these shape the incentives of the powerful people within them.

I should make a shift I’ve made just now explicit. When we talk about democracy, we are often talking about a formal government, like a sovereign nation or municipal government. But when we talk about powerful organizations in society, we are no longer just talking about elected officials and their appointees. We are talking about several different classes of organizations–businesses, civil society organizations, and governments among them–interacting with each other.

It may be that that’s all there is to it. Maybe Capitalism is an ideology that argues for more power to businesses, Socialism is an ideology that argues for more power to formal government, and Democracy is an ideology that argues for more power to civil society institutions. These are zero-sum ideologies. Managerialism would be a theory that acknowledges the tussle between these sectors at the organizational level, as opposed to at the atomic individual level.

The reason why this is a relevant perspective to engage with today is that there has probably in recent years been a transfer of power (I might say ‘control’) from government to corporations–especially Big Tech (Google, Amazon, Facebook, Apple). Frank Pasquale makes the argument for this in a recent piece. He writes and speaks with a particular policy agenda that is far better researched than this blog post. But a good deal of the work is framed around the surprise that ‘governance’ might shift to a private company in the first place. This is a framing that will always be striking to those who are invested in the politics of the state; the very word “govern” is unmarkedly used for formal government and then surprising when used to refer to something else.

Managerialism, then, may be a way of pointing to an option where more power is held by non-state actors. Crucially, though, managerialism is not the same thing as neoliberalism, because neoliberalism is based on laissez-faire market ideology, and contemporary information infrastructure oligopolies look nothing like laissez-faire markets! Calling today’s transfer of power from government to corporation neoliberalism is quite anachronistic and misleading, really!

Perhaps managerialism, like polyarchy, is a descriptive term of a set of political conditions that does not represent an ideal, but a reality with potential to become an ideal. In that case, it’s worth investigating managerialism more carefully and determining what it is and isn’t, and why it is on the rise.