Digifesto

thinking about meritocracy in open source communities

There has been a trend in open source development culture over the past ten years or so. It is the rejection of ‘meritocracy’. Just now, I saw this Post-Meritocracy Manifesto, originally created by Coraline Ada Ehmke. It is exactly what it sounds like: an explicit rejection of meritocracy, specifically in open source development. It captures a recent progressive wing of software development culture. It is attracting signatories.

I believe this is a “trend” because I noticed a more subtle expression of similar ideas a few months ago, when we were drafting a Code of Conduct for BigBang. We wound up picking the Contributor Covenant Code of Conduct, though there are still some open questions about how to integrate it with our Governance policy.

The Contributor Covenant is widely adopted and its language seems good to me. I was surprised, though, to find that its rationale specifically mentions meritocracy as a problem the code of conduct is trying to avoid:

Marginalized people also suffer some of the unintended consequences of dogmatic insistence on meritocratic principles of governance. Studies have shown that organizational cultures that value meritocracy often result in greater inequality. People with “merit” are often excused for their bad behavior in public spaces based on the value of their technical contributions. Meritocracy also naively assumes a level playing field, in which everyone has access to the same resources, free time, and common life experiences to draw upon. These factors and more make contributing to open source a daunting prospect for many people, especially women and other underrepresented people.

If it looks familiar, it may be because it was written by the same author, Coraline Ada Ehmke.

I have to admit that though I’m quite glad that we have a Code of Conduct now in BigBang, I’m uncomfortable with the ideological presumptions of its rationale and the rejection of ‘meritocracy’. There is a lot packed into this paragraph that is open to productive disagreement and which is not necessary for a commitment to the general point that harassment is bad for an open source community.

Perhaps this would be easier for me to ignore if this political framing did not mirror so many other political tensions today, and if open source governance were not something I’ve been so invested in understanding. I’ve taught a course on open source management, and BigBang spun out of that effort as an experiment in scientific analysis of open source communities. I am, I believe, deep in on this topic.

So what’s the problem? The problem is that I think there’s something painfully misaligned about criticism of meritocracy in culture at large and in open source development, which is a very particular kind of organizational form. There is also perhaps a misalignment between the progressive politics of inclusion expressed in these manifestos and what many open source communities are really trying to accomplish. Surely there must be some kind of merit that is not in scare quotes, or else there would not be any good open source software to use and raise a fuss about.

Though it does not directly address the issue, I’m reminded of an old email discussion on the Numpy mailing list that I found when I was trying to do ethnographic work on the Scientific Python community. It was written by John Hunter, the creator of Matplotlib, in response to concerns about corporate control over NumPy raised when Travis Oliphant, the leader of NumPy, started Continuum Analytics. Hunter quite thoughtfully, in my opinion, debunked the idea that open source governance should be a ‘democracy’, like many people assume institutions ought to be by default. After a long discussion about how Travis had great merit as a leader, he argued:

Democracy is something that many of us have grown up by default to consider as the right solution to many, if not most, problems of governance. I believe it is a solution to a specific problem of governance. I do not believe democracy is a panacea or an ideal solution for most problems: rather it is the right solution to problems for which the consequences of failure are too high. In a state (by which I mean a government with a power to subject its people to its will by force of arms) where the consequences of failure to submit include the death, dismemberment, or imprisonment of dissenters, democracy is a safeguard against the excesses of the powerful. Generally, there is no reason to believe that the simple majority of people polled is the “best” or “right” answer, but there is also no reason to believe that those who hold power will rule beneficently. The democratic ability of the people to check the rule of the few and powerful is essential to ensure the survival of the minority.

In open source software development, we face none of these problems. Our power to fork is precisely the power the minority in a tyrannical democracy lacks: no one will kill us for going off the reservation. We are free to use the product or not, to modify it or not, to enhance it or not.

The power to fork is not abstract: it is essential. matplotlib, and chaco, both rely *heavily* on agg, the Antigrain C++ rendering library. At some point many years ago, Maxim, the author of Agg, decided to change the license of Agg (circa version 2.5) to GPL rather than BSD. Obviously, this was a non-starter for projects like mpl, scipy and chaco which assumed BSD licensing terms. Unfortunately, Maxim had a new employer which appeared to us to be dictating the terms and our best arguments fell on deaf ears. No matter: mpl and Enthought chaco have continued to ship agg 2.4, pre-GPL, and I think that less than 1% of our users have even noticed. Yes, we forked the project, and yes, no one has noticed. To me this is the ultimate reason why governance of open source, free projects does not need to be democratic. As painful as a fork may be, it is the ultimate antidote to a leader who may not have your interests in mind. It is an antidote that we citizens in a state government may not have.

It is true that numpy exists in a privileged position in a way that matplotlib or scipy does not. Numpy is the core. Yes, Continuum is different than STScI because Travis is both the lead of Numpy and the lead of the company sponsoring numpy. These are important differences. In the worst cases, we might imagine that these differences will negatively impact numpy and associated tools. But these worst case scenarios that we imagine will most likely simply distract us from what is going on: Travis, one of the most prolific and valuable contributors to the scientific python community, has decided to refocus his efforts to do more. And that is a very happy moment for all of us.

This is a nice articulation of how forking, not voting, is the most powerful governance mechanism in open source development, and how it changes what our default assumptions about leadership ought to be. A critical but, I think, unacknowledged question is how the possibility of forking interacts with the critique of meritocracy in organizations in general, and specifically what that means for community inclusiveness as a goal in open source communities. I don’t think it’s straightforward.

Inequality perceived through implicit factor analysis and its implications for emergent social forms

Vox published an interview with Keith Payne, author of The Broken Ladder.

My understanding is that the thesis of the book is that income inequality has a measurable effect on public health, especially certain kinds of chronic illnesses. The proposed mechanism for this effect is the psychological state of those perceiving themselves to be relatively worse off. This is a hardwired mechanism, it would seem, and one that is being turned on more and more by socioeconomic conditions today.

I’m happy to take this argument for granted until I hear otherwise. I’m interested in (and am jotting notes down here, not having read the book) the physics of this mechanism. It’s part of a larger puzzle about social forms, emergent social properties, and factor analysis that I’ve written about in some other posts.

Here’s the idea: income inequality is a very specific kind of social metric and not one that is easy to directly perceive. Measuring it from tax records, which should be straightforward, is fraught with technicalities. Therefore, it is highly implausible that direct perception of this metric is what causes the psychological impact of inequality.
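To make this concrete, here is a minimal sketch (my own illustration, not something from Payne or the interview; the distributions and numbers are invented) of inequality as a computed statistic: a Gini coefficient has to be calculated over an entire income distribution, which is nothing like the local comparisons any individual actually perceives.

```python
# A toy sketch of income inequality as a computed statistic. All numbers and
# distributions here are invented for illustration.
import numpy as np

def gini(incomes):
    """Gini coefficient via the rank-weighted formula over the sorted incomes."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    return (2 * np.sum(np.arange(1, n + 1) * x)) / (n * np.sum(x)) - (n + 1) / n

rng = np.random.default_rng(0)
heavy_tailed = rng.lognormal(mean=10, sigma=1.0, size=10_000)    # unequal incomes
nearly_equal = rng.normal(loc=50_000, scale=1_000, size=10_000)  # similar incomes

print(round(gini(heavy_tailed), 2))  # roughly 0.5
print(round(gini(nearly_equal), 2))  # close to 0
```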

Therefore, there must be one or more mediating factors between income inequality as an economic fact and psychological inequality as a mental phenomenon. Let’s suppose–because it’s actually what we should see as a ‘null hypothesis’–that there are many, many factors linking these phenomena. Some may be common causes of income inequality and psychological inequality, such as entrenched forms of social inequality that prevent equal access to resources and are internalized somehow. Others may be direct perception of the impact of inequality, such as seeing other people flying in higher class seats, or (ahem) hearing other people talk about flying at all. And yet we seem comfortable deriving from this very complex mess a generalized sense of inequality and its impact, and now that’s one of the most pressing political topics today.

I want to argue that when a person perceives inequality in a general way, they are in effect performing a kind of factor analysis on their perceptions of other people. When we compare ourselves with others, we can do so on a large number of dimensions. Cognitively, we can’t grok all of it–we have to reduce the feature space, and so we come to understand the world through a few blunt indicators that combine many other correlated data points into one.
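As a rough analogy for this cognitive compression, here is a small sketch (my own, with an invented correlation structure and invented cue names) of reducing many loosely correlated cues about other people to a single blunt axis with PCA:

```python
# A minimal sketch of compressing many dimensions of social comparison into one
# blunt indicator. The cues and their correlation structure are made up.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_people, n_cues = 1_000, 30

# Many loosely correlated cues (housing, travel, clothes, ...) generated from a
# few shared influences plus idiosyncratic noise.
shared_influences = rng.normal(size=(n_people, 3))
loadings = rng.normal(size=(3, n_cues))
cues = shared_influences @ loadings + rng.normal(size=(n_people, n_cues))

# Reduce the feature space to a single "better/worse off" score per person.
pca = PCA(n_components=1)
blunt_score = pca.fit_transform(cues)
print("variance explained by one axis:", round(pca.explained_variance_ratio_[0], 2))
```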

These blunt categories can suggest that there is structure in the world that isn’t really there, but rather is an artifact of constraints on human perception and cognition. In other words, downward causation would happen in part through a dimensionality reduction of social perception.

On the other hand, if those constraints are regular enough, they may in turn impose a kind of structure on the social world (upward causation). If downward causation and upward causation reinforced each other, then that would create some stable social conditions. But there’s also no guarantee that stable social perceptions en masse track the real conditions. There may be systematic biases.

I’m not sure where this line of inquiry goes, to be honest. It needs more work.

General intelligence, social privilege, and causal inference from factor analysis

I came upon this excellent essay by Cosma Shalizi about how factor analysis has been spuriously used to support the scientific theory of General Intelligence (i.e., IQ). Shalizi, if you don’t know, is one of the best statisticians around. He writes really well and isn’t afraid to point out major blunders in things. He’s one of my favorite academics, and I don’t think I’m alone in this assessment.

First, a motive: Shalizi writes this essay because he thinks the scientific theory of General Intelligence, or a g factor that is some real property of the mind, is wrong. This theory is famous because (a) a lot of people DO believe in IQ as a real feature of the mind, and (b) a significant percentage of these people believe that IQ is hereditary and correlated with race, and (c) the ideas in (b) are used to justify pernicious and unjust social policy. Shalizi, being a principled statistician, appears to take scientific objection to (a) independently of his objection to (c), and argues persuasively that we can reject (a). How?

Shalizi’s point is that the general intelligence factor g is a latent variable that was supposedly discovered using a factor analysis of several different intelligence tests that were supposed to be independent of each other. You can take the data from these data sets and do a dimensionality reduction (that’s what factor analysis is) and get something that looks like a single factor, just as you can take a set of cars and do a dimensionality reduction and get something that looks like a single factor, “size”. The problem is that “intelligence”, just like “size”, can also be a combination of many other factors that are only indirectly associated with each other (height, length, mass, mass of specific components independent of each other, etc.). Once you have many different independent factors combining into one single reduced “dimension” of analysis, you no longer have a coherent causal story of how your general latent variable caused the phenomenon. You have, effectively, correlation without demonstrated causation and, moreover, the correlation is a construct of your data analysis method, and so isn’t really even telling you what correlations normally tell you.

To put it another way: the fact that some people seem to be generally smarter than other people can be due to thousands of independent factors that happen to combine when people apply themselves to different kinds of tasks. If some people were NOT seeming generally smarter than others, that would allow you to reject the hypothesis that there was general intelligence. But the mere presence of the aggregate phenomenon does not prove the existence of a real latent variable. In fact, Shalizi goes on to say, when you do the right kinds of tests to see if there really is a latent factor of ‘general intelligence’, you find that there isn’t any. And so it’s just the persistent and possibly motivated interpretation of the observational data that allows the stubborn myth of general intelligence to continue.
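One way to see the force of this argument is with a small simulation (my own sketch, loosely in the spirit of the sampling models Shalizi discusses; all parameters are invented): generate test scores from many independent “abilities,” and a factor analysis will still report a large general factor.

```python
# A toy simulation: no single latent "g" exists in the generating process, yet
# the test correlations are all positive and one factor dominates.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests = 2_000, 500, 10

# Many independent, uncorrelated abilities.
abilities = rng.normal(size=(n_people, n_abilities))

# Each test taps a random half of the abilities, plus noise.
scores = np.empty((n_people, n_tests))
for j in range(n_tests):
    sampled = rng.choice(n_abilities, size=n_abilities // 2, replace=False)
    scores[:, j] = abilities[:, sampled].sum(axis=1) + rng.normal(size=n_people)

corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_tests, dtype=bool)]
print("mean inter-test correlation:", round(off_diag.mean(), 2))  # all positive

eigvals = np.linalg.eigvalsh(corr)[::-1]
print("share of variance in first factor:", round(eigvals[0] / eigvals.sum(), 2))
```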

Are you following so far? If you are, it’s likely because you were already skeptical of IQ and its racial correlates to begin with. Now I’m going to switch it up though…

It is fairly common for educated people in the United States (for example) to talk about the “privilege” of social groups. White privilege, male privilege–don’t tell me you haven’t at least heard of this stuff before; it is literally everywhere on the center-left news. Privilege here is considered to be a general factor that adheres in certain social groups. It is reinforced by all manner of social conditioning, especially through implicit bias in individual decision-making. This bias is so powerful it extends not just to cases of direct discrimination but also to cases where discrimination happens in a mediated way, for example through technical design. The evidence for these kinds of social privileging effects is obvious: we see inequality everywhere, and we can see who is more powerful and benefited by the status quo and who isn’t.

You see where this is going now. I have the momentum. I can’t stop. Here it goes: Maybe this whole story about social privilege is as spuriously supported as the story about general intelligence? What if both narratives were over-interpretations of data that serve a political purpose, but which are not in fact based on sound causal inference techniques?

How could this be? Well, we might gather a lot of data about people: wealth, status, neighborhood, lifespan, etc. And then we could run a dimensionality reduction/factor analysis and get a significant factor that we could name “privilege” or “power”. Potentially that’s a single, real, latent variable. But also potentially it’s hundreds of independent factors spuriously combined into one. It would probably, if I had to bet on it, wind up looking a lot like the factor for “general intelligence”, which plays into the whole controversy about whether and how privilege and intelligence get confused. You must have heard the debates about, say, representation in the technical (or other high-status, high-paying) work force? One side says the smart people get hired; the other side says it’s the privileged (white male) people that get hired. Some jerk suggests that maybe the white males are smarter, and he gets fired. It’s a mess.

I’m offering you a pill right now. It’s not the red pill. It’s not the blue pill. It’s some other colored pill. Green?

There is no such thing as either general intelligence or group-based social privilege. Each of these is the result of sloppy data compression over thousands of factors with a loose and subtle correlational structure. The reason why patterns of social behavior that we see are so robust against interventions is that each intervention can work against only one or two of these thousands of factors at a time. Discovering the real causal structure here is hard partly because the effect sizes are very small. Anybody with a simple explanation, especially a politically convenient explanation, is lying to you but also probably lying to themselves. We live in a complex world that resists our understanding and our actions to change it, though it can be better understood and changed through sound statistics. Most people aren’t bothering to do this, and that’s why the world is so dumb right now.

Goodbye, TheListserve!

Today I got an email I never thought I’d get: a message from the creators of TheListserve saying they were closing down the service after over 6 years.

TheListserve was a fantastic idea: it was a mailing list that allowed one person, randomly selected from the subscribers each day, to email everyone else.

It was an experiment in creating a different kind of conversational space on-line. And it worked great! Tens of thousands of subscribers, really interesting content–a space unlike most others in social media. You really did get a daily email with what some random person thought was the most interesting thing they had to say.

I was inspired enough by TheListserve to write a Twitter bot based on similar principles, TheTweetserve. Maybe the Twitter bot was also inspired by Habermas. It was not nearly as successful or interesting as TheListserve, for reasons that you could deduce if you thought about it.

Six years ago, “The Internet” was a very different imaginary. There was this idea that a lightweight intervention could capture some of the magic of serendipity that scale and connection had to offer, and that this was going to be really, really big.

It was, I guess, but then the charm wore off.

What’s happened now, I think, is that we’ve been so exposed to connection and scale that novelty has worn off. We now find ourselves exposed on-line mainly to the imposing weight of statistical aggregates and regressions to the mean. After years of messages to TheListserve, it started, somehow, to seem formulaic. You would get honest, encouraging advice, or a self-promotion. It became, after thousands of emails, a genre in itself.

I wonder if people who are younger and less jaded than I am are still finding and creating cool corners of the Internet. What I hear about more and more now are the ugly parts; they make the news. The Internet used to be full of creative chaos. Now it is so heavily instrumented and commercialized I get the sense that the next generation will see it much like I saw radio or television when I was growing up: as a medium dominated by companies, large and small. Something you had to work hard to break into as a professional choice or otherwise not at all.

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics” <– My dissertation

In the last two weeks, I’ve completed, presented, and filed my dissertation, and commenced as a doctor of philosophy. In a word, I’ve PhinisheD!

The title of my dissertation is attention-grabbing, inviting, provocative, and impressive:

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics”

If you’re reading this, you are probably wondering, “How can I drop everything and start reading that hot dissertation right now?”

Look no further: here is a link to the PDF.

You can also check out this slide deck from my “defense”. It covers the highlights.

I’ll be blogging about this material as I break it out into more digestible forms over time. For now, I’m obviously honored by any interest anybody takes in this work and happy to answer questions about it.

Artisanal production, productivity and automation, economic engines

I’m continuing to read Moretti’s The new geography of jobs (2012). Except for the occasional gushing over the revolutionary-ness of some new payments startup, a symptom no doubt of being so close to Silicon Valley, it continues to be an enlightening and measured read on economic change.

There are a number of useful arguments and ideas from the book, which are probably sourced more generally from economics, which I’ll outline here, with my comments:

Local, artisanal production can never substitute for large-scale manufacturing. Moretti argues that while local artisanal production has cropped up in many places in the United States, it will never replace the work done by large-scale production. Why? Because by definition, local artisanal production is (a) geographically local, and therefore unable to scale beyond a certain region, and (b) defined in part by its uniqueness, differentiating it from mainstream products. In other words, if your local small-batch shop grows to the point where it competes with large-scale production, it is no longer local and small-batch.

Interestingly, this argument about production scaling echoes work on empirical heavy tail distributions in social and economic phenomena. A world where small-scale production constituted most of production would have an exponentially bounded distribution of firm productivity. The world doesn’t look that way, and so we have very very big companies, and many many small companies, and they coexist.
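A crude numerical illustration of the difference (my own sketch; the distributions and parameters are arbitrary) compares an exponentially bounded world of firm output with a heavy-tailed one:

```python
# Compare an exponentially bounded distribution of firm output with a
# heavy-tailed one; only in the latter do a few giant firms dominate.
import numpy as np

rng = np.random.default_rng(2)
n_firms = 100_000

light_tail = rng.exponential(scale=50, size=n_firms)
heavy_tail = 10 * (1 + rng.pareto(a=1.2, size=n_firms))

for name, output in (("exponential", light_tail), ("heavy-tailed", heavy_tail)):
    top_share = np.sort(output)[-100:].sum() / output.sum()
    print(f"{name}: top 100 firms account for {top_share:.1%} of total output")
```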

Higher labor productivity in a sector results in both a richer society and fewer jobs in that sector. Productivity is how much a person’s labor produces. The idea here is that when labor productivity increases, the firm that hires those laborers needs fewer people working to satisfy its demand. But those people will be paid more, because their labor is worth more to the firm.

I think Moretti is hand-waving a bit when he argues that a society only gets richer through increased labor productivity. I don’t follow it exactly.

But I do find it interesting that Moretti calls “increases in productivity” what many others would call “automation”. Several related phenomena are viewed critically in the popular discourse on job automation: more automation causes people to lose jobs; more automation causes some people to get richer (they are higher paid); this means there is a perhaps pernicious link between automation and inequality. One aspect of this is that automation is good for capitalists. But another aspect of this is that automation is good for lucky laborers whose productivity and earnings increase as a result of automation. It’s a more nuanced story than one that is only about job loss.

The economic engine of an economy is what brings in money; it need not be the largest sector of the economy. The idea here is that for a particular (local) economy, the economic engine of that economy will be what pulls in money from outside. Moretti argues that the economic engine must be a “trade sector”, meaning a sector that trades (sells) its goods beyond its borders. It is the workers in this trade-sector economic engine that then spend their income on the “non-trade” sector of local services, which includes schoolteachers, hairdressers, personal trainers, doctors, lawyers, etc. Moretti’s book is largely about how the innovation sector is the new economic engine of many American economies.

One thing that comes to mind reading this point is that not all economic engines are engaged in commercial trade. I’m thinking about Washington, DC, and the surrounding area; the economic engine there is obviously the federal government. Another strange kind of economic engine is the top-tier research university, like Carnegie Mellon or UC Berkeley. Unlike many other kinds of educational institutions, top-tier research universities are constantly selling their degrees to foreign students. This means that they can serve as an economic engine.

Overall, Moretti’s book is a useful guide to economic geography, one that clarifies the economic causes of a number of political tensions that are often discussed in a more heated and, to me, less useful way.

References

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

the economic construction of knowledge

We’ve all heard about the social construction of knowledge.

Here’s the story: Knowledge isn’t just in the head. Knowledge is a social construct. What we call “knowledge” is what it is because of social institutions and human interactions that sustain, communicate, and define it. Therefore all claims to absolute and unsituated knowledge are suspect.

There are many different social constructivist theories. One of the best, in my opinion, is Bourdieu’s, because he has one of the best social theories. For Bourdieu, social fields get their structure in part through the distribution of various kinds of social capital. Economic capital (money!) is one kind of social capital. Symbolic capital (the fact of having published in a peer-reviewed journal) is a different form of capital. What makes the sciences special, for Bourdieu, is that they are built around a particular mechanism for awarding symbolic capital that makes it (science) get the truth (the real truth). Bourdieu thereby harmonizes social constructivism with scientific realism, which is a huge relief for anybody trying to maintain their sanity in these trying times.

This is all super. What I’m beginning to appreciate more as I age, develop, and in some sense I suppose ‘progress’, is that economic capital is truly the trump card of all the forms of social capital, and that this point is underrated in social constructivist theories in general. What I mean by this is that flows of economic capital are a condition for the existence of the social fields (institutions, professions, etc.) in which knowledge is constructed. This is not to say that everybody engaged in the creation of knowledge is thinking about monetization all the time–to make that leap would be to commit the ecological fallacy. But at the heart of almost every institution where knowledge is created, there is somebody fundraising or selling.

Why, then, don’t we talk more about the economic construction of knowledge? It is a straightforward idea. To understand an institution or social field, you “follow the money”, seeing where it comes from and where it goes, and that allows you to situate the practice in its economic context and thereby determine its economic meaning.

Appealing economic determinism (Moretti)

I’ve started reading Enrico Moretti’s The New Geography of Jobs and am finding it very clear and persuasive (though I’m not far in).

Moretti is taking up the major theme of What The Hell Is Happening To The United States, which is being addressed by so many from different angles. But whereas many writers seem to have an agenda–e.g., Noble advocating for political reform regulating algorithms; Deneen arguing for a return to traditional community values in some sense; etc.–or to focus on particularly scandalous or dramatic aspects of changing political winds–such as Gilman’s work on plutocratic insurgency and collapsing racial liberalism–Moretti is doing economic geography, showing how long-term economic trends are shaping the distribution of prosperity within the U.S.

From the introduction, it looks like there are a few notable points.

The first is about what Moretti calls the Great Divergence, which has been going on since the 1980’s. This is the decline of U.S. manufacturing as jobs moved from Detroit, Michigan to Shenzhen, Guangdong, paired with the rise of an innovation economy in which the U.S. takes the lead in high-tech and creative work. The needs of the high-tech industry–high-skilled workers, who may often be educated immigrants–change the demographics of the innovation hubs and result in the political polarization we’re seeing on the national stage. This is an account of the economic base determining the cultural superstructure which is so fraught right now, and exactly what I was getting at with my rant yesterday about the politics of business.

The second major point Moretti makes which is probably understated in more polemical accounts of the U.S. political economy is the multiplier effect of high-skilled jobs in innovation hubs. Moretti argues that every high-paid innovation job (like software engineer or scientist) results in four other jobs in the same city. These other jobs are in service sectors that are by their nature local and not able to be exported. The consequence is that the innovation economy does not, contrary to its greatest skeptics, only benefit the wealthy minority of innovators to the ruin of the working class. However, it does move the location of working class prosperity into the same urban centers where the innovating class is.
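As a back-of-the-envelope illustration of the multiplier claim (the numbers below are hypothetical, not Moretti’s data):

```python
# A toy calculation of the local multiplier effect for innovation jobs.
new_innovation_jobs = 10_000   # hypothetical influx into one city
multiplier = 4                 # Moretti's claim: ~4 local service jobs per innovation job
local_service_jobs = new_innovation_jobs * multiplier
print(new_innovation_jobs + local_service_jobs)  # 50,000 new jobs, most outside tech
```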

This gives one explanation for why the backlash against Obama-era economic policies was such a shock to the coastal elites. In the locations where the “winners” of the innovation economy were gathered, there was also growth in the service economy which by objective measures increased the prosperity of the working class in those cities. The problem was the neglected working class in those other locations, who felt left behind and struck back against the changes.

A consequence of this line of reasoning is that arguments about increasing political tribalism are really a red herring. Social tribes on the Internet are a consequence, not a cause, of divisions that come from material conditions of economy and geography.

Moretti even appears to have a constructive solution in mind. He argues that there are “three Americas”: the rich innovation hubs, the poor former manufacturing centers, and mid-sized cities that have not yet gone either way. His recipe for economic success in these middle cities is attracting high-skilled workers who are a kind of keystone species for prosperous economic ecosystems.

References

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The twin insurgency.” American Interest 15 (2014): 3-11.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

Noble, Safiya Umoja. Algorithms of Oppression: How search engines reinforce racism. NYU Press, 2018.

politics of business

This post is an attempt to articulate something that’s on the tip of my tongue, so bear with me.

Fraser has made the point that the politics of recognition and the politics of distribution are not the same. In her view, the conflict in the U.S. over recognition (i.e., of women, racial minorities, LGBTQ people, etc. on the progressive side, and of the straight white male ‘majority’ on the reactionary side) has overshadowed the politics of distribution, which has been at a steady neoliberal status quo for some time.

First, it’s worth pointing out that in between these two political contests is a politics of representation, which may be more to the point. The claim here is that if a particular group is represented within a powerful organization–say, the government, or within a company with a lot of power such as a major financial institution or tech company–then that organization will use its power in a way that is responsive to the needs of the represented group.

Politics of representation are the link between recognition and distribution: the idea is that if “we” recognize a certain group, then through democratic or social processes members of that group will be lifted into positions of representative power, which then will lead to (re)distribution towards that group in the longer run.

I believe this is the implicit theory of social change at the heart of a lot of democratish movements today. It’s an interesting theory in part because it doesn’t seem to have any room for “good governance”, or broadly beneficial governance, or technocracy. There’s nothing deliberative about this form of democracy; it’s a tribal war-by-other-means. It is also not clear that this theory of social change based on demographic representation is any more effective at changing distributional outcomes than a pure politics of recognition, which we have reason to believe is ineffectual.

Who do we expect to have power over distributional outcomes in our (and probably other) democracies? Realistically, it’s corporations. Businesses comprise most of the economic activity; businesses have the profits needed to reinvest in lobbying power for the sake of economic capture. So maybe if what we’re interested in is politics of distribution, we should stop trying to parse out the politics of recognition, with its deep dark rabbit hole of identity politics and the historical injustice and Jungian archetypal conflicts over the implications of the long arc of sexual maturity. These conversations do not seem to be getting anyone anywhere! It is, perhaps, fake news: not because the contents are fake, but because the idea that these issues are new is fake. They are perhaps just a lot of old issues stirred to conflagration by the feedback loops between social and traditional media.

If we are interested in the politics of distribution, let’s talk about something else, something that we all know must be more relevant, when it comes down to it, than the politics of recognition. I’m talking about the politics of business.

We have a rather complex economy with many competing business interests. Let’s assume that one of the things these businesses compete over is regulatory capture–their ability to influence economic policy in their favor.

When academics talk about neoliberal economic policy, they are often talking about those policies that benefit the financial sector and big businesses. But these big businesses are not always in agreement.

Take, for example, the steel tariff proposed by the Trump administration. There is no blunter example of a policy that benefits some business interests–U.S. steelmakers–and not others–U.S. manufacturers of steel-based products.

It’s important from the perspective of electoral politics to recognize that the U.S. steelmakers are a particular set of people who live in particular voting districts with certain demographics. That’s because, probably, if I am a U.S. steelworker, I will vote in the interest of my industry. Just as if I am a U.S. based urban information worker at an Internet company, I will vote in the interest of my company, which in my case would mean supporting net neutrality. If I worked for AT&T, I would vote against net neutrality, which today means I would vote Republican.

It’s an interesting fact that AT&T employs a lot more people than Google and (I believe this is the case, though I don’t know where to look up the data) that they are much more geographically distributed than Google because, you know, wires and towers and such. Which means that AT&T employees will be drawn from more rural, less diverse areas, giving them an additional allegiance to Republican identity politics.

You must see what I’m getting at. Assume that the main driver of U.S. politics is not popular will (which nobody really believes, right?) and is in fact corporate interests (which basically everybody admits, right?). In that case the politics of recognition will not be determining anything; rather it will be a symptom, an epiphenomenon, of an underlying politics of business. Immigration of high-talent foreigners then becomes a proxy issue for the economic battle between coastal tech companies and, say, old energy companies which have a much less geographically mobile labor base. Nationalism, or multinationalism, becomes a function of trade relations rather than a driving economic force in its own right. (Hence, Russia remains an enemy of the U.S. largely because Putin paid off all its debt to the U.S. and doesn’t owe it any money, unlike many of its other allies around the world.)

I would very much like to devote myself better to the understanding of politics of business because, as I’ve indicated, I think the politics of recognition have become a huge distraction.

Moral individualism and race (Barabas, Gilman, Deneen)

One of my favorite articles presented at the recent FAT* 2018 conference was Barabas et al. on “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment” (link). To me, this was the correct response to recent academic debate about the use of actuarial risk-assessment in determining criminal bail and parole rates. I had a position on this before the conference which I drafted up here; my main frustration with the debate had been that it had gone unquestioned why bail and parole rates are based on actuarial prediction of recidivism in the first place, given that rearrest rates are so contingent on social structural factors such as whether or not police are racist.

Barabas et al. point out that there’s an implicit theory of crime behind the use of actuarial risk assessments. In that theory of crime, there are individual “bad people” and “good people”. “Bad people” are more likely to commit crimes because of their individual nature, and the goal of the criminal policing system is to keep bad people from committing crimes by putting them in prison. This is the sort of theory that, even if it is a little bit true, is also deeply wrong, and so we should probably reassess the whole criminal justice system as a result. Even leaving aside the important issue of whether “recidivism” is interpreted as reoffense or rearrest rate, it is socially quite dangerous to see probability of offense as due to the specific individual moral character of a person. One reason why this is dangerous is that if the conditions for offense are correlated with the conditions for some sort of unjust desperation, then we risk falsely justifying an injustice with the idea that the bad things are only happening to bad people.
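To make the reoffense/rearrest distinction concrete, here is a toy simulation (my own, not from Barabas et al.; all rates are invented): two groups with identical true reoffense rates but different policing intensity end up with very different rearrest rates, which is what a tool trained on arrest records would learn to predict.

```python
# Two groups reoffend at the same true rate, but group A is policed more
# heavily, so its *rearrest* rate, the usual training label, is much higher.
import random

random.seed(0)
N = 100_000
TRUE_REOFFENSE_RATE = 0.3               # identical for both groups by construction
DETECTION_RATE = {"A": 0.8, "B": 0.3}   # heavier policing of group A

for group, detection in DETECTION_RATE.items():
    rearrests = sum(
        1 for _ in range(N)
        if random.random() < TRUE_REOFFENSE_RATE and random.random() < detection
    )
    print(group, round(rearrests / N, 3))   # ~0.24 vs ~0.09
```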

I’d like to juxtapose this position with a couple others that may on the surface appear to be in tension with it.

Nils Gilman’s new piece on “The Collapse of Racial Liberalism” is a helpful account of how we got where we are as an American polity. True to the title, Gilman’s point is that there was a centrist consensus on ‘racial liberalism’ that reached its apotheosis in the election of Obama and then collapsed under its own contradictions, getting us where we are today.

By racial liberalism, I mean the basic consensus that existed across the mainstream of both political parties since the 1970s, to the effect that, first, bigotry of any overt sort would not be tolerated, but second, that what was intolerable was only overt bigotry—in other words, white people’s definition of racism. Institutional or “structural” racism—that is, race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on—were not to be addressed. The core ethic of the racial liberal consensus was colorblind individualism.

Bill Clinton was good at toeing the line of racial liberalism, and Obama, as a black meritocrat elected president, was its culmination. But:

“Obama’s election marked at once the high point and the end of a particular historical cycle: a moment when the realization of a particular ideal reveals the limits of that ideal.”

The limit of the ideal is, of course, that all the things not addressed–“race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on”–matter, and result in, for example, innocent black guys getting shot disproportionately by police even when there is a black meritocrat sitting as president.

An interesting juxtaposition here is that in both cases discussed so far, we have a case of a system that is reaching its obsolescence due to the contradictions of individualism. In the case of actuarial policing (as it is done today; I think a properly sociological version of actuarial policing could be great), there’s the problem of considering criminals as individuals whose crimes are symptoms of their individual moral character. The solution to crime is to ostracize and contain the criminals by, e.g., putting them in prison. In the case of racial liberalism, there’s the problem of considering bigotry a symptom of individual moral character. The solution to the bigotry is to ostracize and contain the bigots by teaching them that it is socially unacceptable to express bigotry and keeping the worst bigots out of respectable organizations.

Could it be that our broken theories of both crime and bigotry both have the same problem, which is the commitment to moral individualism, by which I mean the theory that it’s individual moral character that is the cause of and solution to these problems? If a case of individual crime and individual bigotry is the result of, instead of an individual moral failing, a collective action problem, what then?

I still haven’t looked carefully into Deneen’s argument (see notes here), but I’m intrigued that his point may be that the crisis of liberalism may be, at its root, a crisis of individualism. Indeed, Kantian views of individual autonomy are really nice but they have not stood the test of time; I’d say the combined works of Habermas, Foucault, and Bourdieu have each, from very different directions, developed Kantian ideas into a more sociological frame. And that’s just on the continental grand theory side of the equation. I have not followed up on what Anglophone liberal theory has been doing, but I suspect that it has been going the same way.

I am wary, as I always am, of giving too much credit to theory. I know, as somebody who has read altogether too much of it, what little use it actually is. However, the notion of political and social consensus is one that tangibly affects my life these days. For this reason, it’s a topic of great personal interest.

One last point, that’s intended as constructive. It’s been argued that the appeal of individualism is due in part to the methodological individualism of rational choice theory and neoclassical economic theory. Because we can’t model economic interactions on anything but an individualistic level, we can’t design mechanisms or institutions that treat individual activity as a function of social form. This is another good reason to take seriously computational modeling of social forms.
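A minimal example of what such computational modeling can look like is a Schelling-style neighborhood model (a standard toy model, sketched here by me with arbitrary parameters), in which each agent’s decision depends on its local social context and a macro-level pattern emerges that no individual chose:

```python
# A Schelling-style toy model: individual moves depend on local social context,
# and an aggregate pattern (segregation) emerges. Parameters are arbitrary.
import random

random.seed(0)
SIZE, THRESHOLD, STEPS = 30, 0.4, 50

# 0 = empty cell; 1 and 2 are two groups of agents (45% each, 10% empty).
cells = [1] * int(SIZE * SIZE * 0.45) + [2] * int(SIZE * SIZE * 0.45)
cells += [0] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(r, c):
    return [grid[(r + dr) % SIZE][(c + dc) % SIZE]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def same_group_share(r, c):
    occupied = [n for n in neighbors(r, c) if n != 0]
    if not occupied:
        return 1.0
    return sum(1 for n in occupied if n == grid[r][c]) / len(occupied)

def segregation_index():
    shares = [same_group_share(r, c)
              for r in range(SIZE) for c in range(SIZE) if grid[r][c] != 0]
    return sum(shares) / len(shares)

print("initial like-neighbor share:", round(segregation_index(), 2))
for _ in range(STEPS):
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] == 0]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] != 0 and same_group_share(r, c) < THRESHOLD]
    random.shuffle(movers)
    for r, c in movers:   # each unhappy agent moves to a random empty cell
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], 0
        empties.append((r, c))
print("final like-neighbor share:", round(segregation_index(), 2))
```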

References

Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).