Digifesto

Category: politics

politics of business

This post is an attempt to articulate something that’s on the tip of my tongue, so bear with me.

Fraser has made the point that the politics of recognition and the politics of distribution are not the same. In her view, the conflict in the U.S. over recognition (i.e., of women, racial minorities, LGBTQ people, etc. on the progressive side, and of the straight white male ‘majority’ on the reactionary side) has overshadowed the politics of distribution, which has sat at a steady neoliberal status quo for some time.

First, it’s worth pointing out that in between these two political contests is a politics of representation, which may be more to the point. The claim here is that if a particular group is represented within a powerful organization–say, the government, or within a company with a lot of power such as a major financial institution or tech company–then that organization will use its power in a way that is responsive to the needs of the represented group.

Politics of representation are the link between recognition and distribution: the idea is that if “we” recognize a certain group, then through democratic or social processes members of that group will be lifted into positions of representative power, which then will lead to (re)distribution towards that group in the longer run.

I believe this is the implicit theory of social change at the heart of a lot of democratish movements today. It’s an interesting theory in part because it doesn’t seem to have any room for “good governance”, or broadly beneficial governance, or technocracy. There’s nothing deliberative about this form of democracy; it’s a tribal war-by-other-means. It is also not clear that this theory of social change based on demographic representation is any more effective at changing distributional outcomes than a pure politics of recognition, which we have reason to believe is ineffectual.

Who do we expect to have power over distributional outcomes in our (and probably other) democracies? Realistically, it’s corporations. Businesses comprise most of the economic activity; businesses have the profits needed to reinvest in lobbying power for the sake of economic capture. So maybe, if what we’re interested in is the politics of distribution, we should stop trying to parse out the politics of recognition, with its deep, dark rabbit hole of identity politics, historical injustice, and Jungian archetypal conflict over the implications of the long arc of sexual maturity. These conversations do not seem to be getting anyone anywhere! It is, perhaps, fake news: not because the contents are fake, but because the idea that these issues are new is fake. They are perhaps just a lot of old issues stirred to conflagration by the feedback loops between social and traditional media.

If we are interested in the politics of distribution, let’s talk about something else, something that we all know must be more relevant, when it comes down to it, than the politics of recognition. I’m talking about the politics of business.

We have a rather complex economy with many competing business interests. Let’s assume that one of the things these businesses compete over is regulatory capture–their ability to influence economic policy in their favor.

When academics talk about neoliberal economic policy, they are often talking about those policies that benefit the financial sector and big businesses. But these big businesses are not always in agreement.

Take, for example, the steel tariff proposed by the Trump administration. There is no blunter example of a policy that benefits some business interests–U.S. steelmakers–and not others–U.S. manufacturers of steel-based products.

It’s important from the perspective of electoral politics to recognize that the U.S. steelmakers are a particular set of people who live in particular voting districts with certain demographics. That’s because, probably, if I am a U.S. steelworker, I will vote in the interest of my industry. Just as if I am a U.S. based urban information worker at an Internet company, I will vote in the interest of my company, which in my case would mean supporting net neutrality. If I worked for AT&T, I would vote against net neutrality, which today means I would vote Republican.

It’s an interesting fact that AT&T employs a lot more people than Google and (I believe this is the case, though I don’t know where to look up the data) that they are much more geographically distributed than Google because, you know, wires and towers and such. Which means that AT&T employees will be drawn from more rural, less diverse areas, giving them an additional allegiance to Republican identity politics.

You must see what I’m getting at. Assume that the main driver of U.S. politics is not popular will (which nobody really believes, right?) but is in fact corporate interests (which basically everybody admits, right?). In that case the politics of recognition will not be determining anything; rather, it will be a symptom, an epiphenomenon, of an underlying politics of business. Immigration of high-talent foreigners then becomes a proxy issue for the economic battle between coastal tech companies and, say, old energy companies, which have a much less geographically mobile labor base. Nationalism, or multinationalism, becomes a function of trade relations rather than a driving economic force in its own right. (Hence, Russia remains an enemy of the U.S. largely because Putin paid off all of Russia’s debt to the U.S. and doesn’t owe it any money, unlike many of the U.S.’s other allies around the world.)

I would very much like to devote myself better to the understanding of politics of business because, as I’ve indicated, I think the politics of recognition have become a huge distraction.


Moral individualism and race (Barabas, Gilman, Deneen)

One of my favorite articles presented at the recent FAT* 2018 conference was Barabas et al. on “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment” (link). To me, this was the correct response to recent academic debate about the use of actuarial risk-assessment in determining criminal bail and parole rates. I had a position on this before the conference which I drafted up here; my main frustration with the debate had been that it had gone unquestioned why bail and parole rates are based on actuarial prediction of recidivism in the first place, given that rearrest rates are so contingent on social structural factors such as whether or not police are racist.

Barabas et al. point out that there’s an implicit theory of crime behind the use of actuarial risk assessments. In that theory of crime, there are individual “bad people” and “good people”. “Bad people” are more likely to commit crimes because of their individual nature, and the goal of the criminal policing system is to keep bad people from committing crimes by putting them in prison. This is the sort of theory that, even if it is a little bit true, is also deeply wrong, and so we should probably reassess the whole criminal justice system as a result. Even leaving aside the important issue of whether “recidivism” is interpreted as reoffense or rearrest rate, it is socially quite dangerous to see probability of offense as due to the specific individual moral character of a person. One reason why this is dangerous is that if the conditions for offense are correlated with the conditions for some sort of unjust desperation, then we risk falsely justifying an injustice with the idea that the bad things are only happening to bad people.
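To make the reoffense-versus-rearrest point concrete, here is a toy simulation of my own (not from Barabas et al., and with made-up parameters): two populations with identical underlying offense rates, but different policing intensities, produce very different rearrest rates. Since rearrest is what actuarial tools are typically trained on, the more heavily policed population would be scored as far “riskier” despite identical behavior.

```python
import random

def simulate_rearrest(n=100_000, offense_rate=0.3, policing=0.5, seed=1):
    """Toy model: every individual offends with the same probability,
    but an offense only shows up as a *rearrest* if it is detected,
    which depends on policing intensity."""
    rng = random.Random(seed)
    rearrests = sum(
        1 for _ in range(n)
        if rng.random() < offense_rate and rng.random() < policing
    )
    return rearrests / n

# Identical offense rates (0.3); only policing intensity differs.
lightly_policed = simulate_rearrest(policing=0.2)
heavily_policed = simulate_rearrest(policing=0.8)
print(lightly_policed, heavily_policed)  # roughly 0.06 vs 0.24
```

A risk score fit to this rearrest data would rate the heavily policed group about four times as risky, even though the simulated behavior is the same in both groups. The point is only that the measured quantity tracks policing as much as it tracks offense.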

I’d like to juxtapose this position with a couple others that may on the surface appear to be in tension with it.

Nils Gilman’s new piece on “The Collapse of Racial Liberalism” is a helpful account of how we got where we are as an American polity. True to the title, Gilman’s point is that there was a centrist consensus on ‘racial liberalism’ that reached its apotheosis in the election of Obama and then collapsed under its own contradictions, getting us where we are today.

By racial liberalism, I mean the basic consensus that existed across the mainstream of both political parties since the 1970s, to the effect that, first, bigotry of any overt sort would not be tolerated, but second, that what was intolerable was only overt bigotry—in other words, white people’s definition of racism. Institutional or “structural” racism—that is, race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on—were not to be addressed. The core ethic of the racial liberal consensus was colorblind individualism.

Bill Clinton was good at toeing the line of racial liberalism, and Obama, as a black meritocratic elected president, was its culmination. But:

“Obama’s election marked at once the high point and the end of a particular historical cycle: a moment when the realization of a particular ideal reveals the limits of that ideal.”

The limit of the ideal is, of course, that all the things not addressed–“race-based exclusions that result from deep social habits such as where people live, who they know socially, what private organizations they belong to, and so on”–matter, and result in, for example, innocent black guys getting shot disproportionately by police even when there is a black meritocratic sitting as president.

An interesting juxtaposition here is that in both cases discussed so far, we have a case of a system that is reaching its obsolescence due to the contradictions of individualism. In the case of actuarial policing (as it is done today; I think a properly sociological version of actuarial policing could be great), there’s the problem of considering criminals as individuals whose crimes are symptoms of their individual moral character. The solution to crime is to ostracize and contain the criminals by, e.g., putting them in prison. In the case of racial liberalism, there’s the problem of considering bigotry a symptom of individual moral character. The solution to the bigotry is to ostracize and contain the bigots by teaching them that it is socially unacceptable to express bigotry and keeping the worst bigots out of respectable organizations.

Could it be that our broken theories of both crime and bigotry both have the same problem, which is the commitment to moral individualism, by which I mean the theory that it’s individual moral character that is the cause of and solution to these problems? If a case of individual crime and individual bigotry is the result of, instead of an individual moral failing, a collective action problem, what then?

I still haven’t looked carefully into Deneen’s argument (see notes here), but I’m intrigued that his point may be that the crisis of liberalism is, at its root, a crisis of individualism. Indeed, Kantian views of individual autonomy are really nice, but they have not stood the test of time; I’d say the combined works of Habermas, Foucault, and Bourdieu have each, from very different directions, developed Kantian ideas into a more sociological frame. And that’s just on the continental grand theory side of the equation. I have not followed up on what Anglophone liberal theory has been doing, but I suspect that it has been going the same way.

I am wary, as I always am, of giving too much credit to theory. I know, as somebody who has read altogether too much of it, what little use it actually is. However, the notion of political and social consensus is one that tangibly affects my life these days. For this reason, it’s a topic of great personal interest.

One last point, that’s intended as constructive. It’s been argued that the appeal of individualism is due in part to the methodological individualism of rational choice theory and neoclassical economic theory. Because we can’t model economic interactions on anything but an individualistic level, we can’t design mechanisms or institutions that treat individual activity as a function of social form. This is another good reason to take seriously computational modeling of social forms.
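To illustrate what a computational model of social forms can show that an atomistic model cannot, here is a minimal sketch (my own illustration, not from the post) of Granovetter’s classic threshold model of collective behavior: two populations with almost identical individual dispositions produce radically different collective outcomes, because the outcome is a function of the social cascade, not of any individual’s character.

```python
def simulate(thresholds):
    """Granovetter-style threshold model: an agent acts once the share
    of the population already acting meets its personal threshold.
    Returns the final number of agents acting."""
    n = len(thresholds)
    active = [t == 0 for t in thresholds]  # zero-threshold agents start
    for _ in range(n):  # monotone process: converges within n rounds
        share = sum(active) / n
        new_active = [a or share >= t for a, t in zip(active, thresholds)]
        if new_active == active:
            break
        active = new_active
    return sum(active)

# Population A: thresholds 0.00, 0.01, ..., 0.99 -> full cascade.
cascade = [i / 100 for i in range(100)]
# Population B: identical except one agent's threshold nudged slightly up.
blocked = [i / 100 for i in range(100)]
blocked[1] = 0.02

print(simulate(cascade))  # 100: everyone ends up acting
print(simulate(blocked))  # 1: the cascade stalls immediately
```

The two populations are indistinguishable at the level of average individual disposition, yet one produces universal participation and the other almost none. A mechanism designed on individualistic assumptions would treat the two as equivalent.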

References

Barabas, Chelsea, et al. “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment.” arXiv preprint arXiv:1712.08238 (2017).

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

Notes on Deneen, “Why Liberalism Failed”, Foreword

I’ve begun reading the recently published book Why Liberalism Failed (2018), by Patrick Deneen. It appears to be making some waves in political theory commentary. The author claims that it was ten years in the making but was finished three weeks before the 2016 presidential election, which suggests that the argument within it is prescient.

I’m not far in yet.

There is an intriguing foreword from James Davison Hunter and John M. Owen IV, the editors. Their framing of the book is surprisingly continental:

  • They declare that liberalism has arrived at its “legitimacy crisis”, a Habermasian term.
  • They claim that the core contention of the book is a critique of the contradictions within Immanuel Kant’s view of individual autonomy.
  • They compare Deneen with other “radical” critics of liberalism, of which they name: Marx, the Frankfurt School, Foucault, Nietzsche, Schmitt, and the Catholic Church.

In search of a litmus-test-like clue as to where on the political spectrum the book falls, I’ve found this passage in the Foreword:

Deneen’s book is disruptive not only for the way it links social maladies to liberalism’s first principles, but also because it is difficult to categorize along our conventional left-right spectrum. Much of what he writes will cheer social democrats and anger free-market advocates; much else will hearten traditionalists and alienate social progressives.

Well, well, well. If we are to fit Deneen’s book into the conceptual 2-by-2 provided in Fraser’s recent work, it appears that Deneen’s political theory is a form of reactionary populism, rejecting progressive neoliberalism. In other words, the Foreword evinces that Deneen’s book is a high-brow political theory contribution that weighs in favor of the kind of politics that has heretofore been articulated only by intellectual pariahs.

The therapeutic ethos in progressive neoliberalism (Fraser and Furedi)

I’ve read two pieces recently that I found helpful in understanding today’s politics, especially today’s identity politics, in a larger context.

The first is Nancy Fraser’s “From Progressive Neoliberalism to Trump–and Beyond” (link). It portrays the present (American, but also global) political moment as a “crisis of hegemony”, using Gramscian terms, for which the presidency of Donald Trump is a poster child. Its main contribution is to point out that the hegemony in crisis is a hegemony of progressive neoliberalism, which sounds like an oxymoron but, Fraser argues, isn’t.

Rather, Fraser explains a two-dimensional political spectrum: there are politics of distribution, and there are politics of recognition.

To these ideas of Gramsci, we must add one more. Every hegemonic bloc embodies a set of assumptions about what is just and right and what is not. Since at least the mid-twentieth century in the United States and Europe, capitalist hegemony has been forged by combining two different aspects of right and justice—one focused on distribution, the other on recognition. The distributive aspect conveys a view about how society should allocate divisible goods, especially income. This aspect speaks to the economic structure of society and, however obliquely, to its class divisions. The recognition aspect expresses a sense of how society should apportion respect and esteem, the moral marks of membership and belonging. Focused on the status order of society, this aspect refers to its status hierarchies.

Fraser’s argument is that neoliberalism is a politics of distribution–it’s about using the market to distribute goods. I’m just going to assume that anybody reading this has a working knowledge of what neoliberalism means; if you don’t, I recommend reading Fraser’s article about it. Progressivism is a politics of recognition that was advanced by the New Democrats. Part of its political potency has been its consistency with neoliberalism:

At the core of this ethos were ideals of “diversity,” women’s “empowerment,” and LGBTQ rights; post-racialism, multiculturalism, and environmentalism. These ideals were interpreted in a specific, limited way that was fully compatible with the Goldman Sachsification of the U.S. economy…. The progressive-neoliberal program for a just status order did not aim to abolish social hierarchy but to “diversify” it, “empowering” “talented” women, people of color, and sexual minorities to rise to the top. And that ideal was inherently class specific: geared to ensuring that “deserving” individuals from “underrepresented groups” could attain positions and pay on a par with the straight white men of their own class.

A less academic, more Wall Street Journal-reading member of the commentariat might be more comfortable with the terms “fiscal conservatism” and “social liberalism”. And indeed, Fraser’s argument seems mainly to be that the hegemony of the Obama era was fiscally conservative but socially liberal. In a sense, it was the true libertarians who were winning, which is an interesting take I hadn’t heard before.

The problem, from Fraser’s perspective, is that neoliberalism concentrates wealth and carries the seeds of its own revolution, allowing Trump to run on a combination of reactionary politics of recognition (social conservatism) with a populist politics of distribution (economic liberalism: big spending and protectionism). He won, and then sold out to neoliberalism, giving us the currently prevailing combination of neoliberalism and reactionary social policy. Which, by the way, we would be calling neoconservatism if it were 15 years ago. Maybe it’s time to resuscitate this term.

Fraser thinks the world would be a better place if progressive populists could establish themselves as an effective counterhegemonic bloc.

The second piece I’ve read on this recently is Frank Furedi’s “The hidden history of identity politics” (link). Pairing Fraser with Furedi is perhaps unlikely because, to put it bluntly, Fraser is a feminist and Furedi, as far as I can tell from this one piece, isn’t. However, both are serious social historians, and there’s a lot of overlap in the stories they tell. That is in itself interesting from the scholarly perspective of someone trying to triangulate an accurate account of political history.

Furedi’s piece is about “identity politics” broadly, including both its right-wing and left-wing incarnations. So, we’re talking about what Fraser calls the politics of recognition here. On a first pass, Furedi’s point is that Enlightenment universalist values have been challenged by both right- and left-wing identity politics since the late-18th-century Romantic nationalist movements in Europe, which led to the World Wars and the Holocaust. Maybe, Furedi’s piece suggests, abandoning Enlightenment universalist values was a bad idea.

Although expressed through a radical rhetoric of liberation and empowerment, the shift towards identity politics was conservative in impulse. It was a sensibility that celebrated the particular and which regarded the aspiration for universal values with suspicion. Hence the politics of identity focused on the consciousness of the self and on how the self was perceived. Identity politics was, and continues to be, the politics of ‘it’s all about me’.

Strikingly, Furedi’s argument is that the left took the “cultural turn” into recognition politics essentially because of its inability to maintain a left-wing politics of redistribution, and that this happened in the 1970s. But this in turn undermined the cause of the economic left. Why? Because economic populism requires social solidarity, while identity politics is necessarily a politics of difference. Solidarity within an identity group can win gains for that group, but at the expense of political gains that could be won by an even more unified popular political force.

The emergence of different identity-based groups during the 1970s mirrored the lowering of expectations on the part of the left. This new sensibility was most strikingly expressed by the so-called ‘cultural turn’ of the left. The focus on the politics of culture, on image and representation, distracted the left from its traditional interest in social solidarity. And the most significant feature of the cultural turn was its sacralisation of identity. The ideals of difference and diversity had displaced those of human solidarity.

So far, Furedi is in agreement with Fraser that hegemonic neoliberalism has been the status quo since the 1970s, and that the main political battles have been over identity recognition. Furedi’s point, which I find interesting, is that these battles over identity recognition undermine the cause of economic populism. In short, neoliberals and neocons can use identity to divide and conquer their shared political opponents and keep things as neo- as possible.

This is all rather old news, though Furedi offers a nice schematic representation of it.

Where Furedi’s piece gets interesting is where it draws out the subsequent movements in identity politics, which he describes as a shift away from political and economic conditions, first into a politics of victimhood and then into a specific therapeutic ethos.

The victimhood move grounded the politics of recognition in the authoritative status of the victim. While originally used for progressive purposes, this move was adopted outside of the progressive movement as early as the 1980s.

A pervasive sense of victimisation was probably the most distinct cultural legacy of this era. The authority of the victim was ascendant. Sections of both the left and the right endorsed the legitimacy of the victim’s authoritative status. This meant that victimhood became an important cultural resource for identity construction. At times it seemed that everyone wanted to embrace the victim label. Competitive victimhood quickly led to attempts to create a hierarchy of victims. According to a study by an American sociologist, the different movements joined in an informal way to ‘generate a common mood of victimisation, moral indignation, and a self-righteous hostility against the common enemy – the white male’ (5). Not that the white male was excluded from the ambit of victimhood for long. In the 1980s, a new men’s movement emerged insisting that men, too, were an unrecognised and marginalised group of victims.

This is interesting in part because there’s a tendency today to see the “alt-right” of reactionary recognition politics as a very recent phenomenon. According to Furedi, it isn’t; it’s part of the history of identity politics in general. We just thought it was dead because, as Fraser argues, progressive neoliberalism had attained hegemony.

Buried deep in the piece is arguably Furedi’s most controversial and pointed claim, about the “therapeutic ethos” of identity politics since the 1970s, which resonates quite deeply today. The idea here is that principles from psychotherapy have become part of the repertoire of left-wing activism. A prescription against “blaming the victim” transformed into a prescription toward “believing the victim”, which in turn creates a culture where only those with lived experience of a human condition may speak with authority on it. This authority is ambiguous, because it is at once the moral authority of the victim and the authority one must grant a therapeutic patient in describing their own experiences for the sake of their mental health.

The obligation to believe and not criticise individuals claiming victim identity is justified on therapeutic grounds. Criticism is said to constitute a form of psychological re-victimisation and therefore causes psychic wounding and mental harm. This therapeutically informed argument against the exercise of critical judgement and free speech regards criticism as an attack not just on views and opinions, but also on the person holding them. The result is censorious and illiberal. That is why in society, and especially on university campuses, it is often impossible to debate certain issues.

Furedi is concerned with how the therapeutic ethos in identity politics shuts down liberal discourse, which further erodes social solidarity which would advance political populism. In therapy, your own individual self-satisfaction and validation is the most important thing. In the politics of solidarity, this is absolutely not the case. This is a subtle critique of Fraser’s argument, which argues that progressive populism is a potentially viable counterhegemonic bloc. We could imagine a synthetic point of view, which is that progressive populism is viable but only if progressives drop the therapeutic ethos. Or, to put it another way, if “[f]rom their standpoint, any criticism of the causes promoted by identitarians is a cultural crime”, then that criminalizes the kind of discourse that’s necessary for political solidarity. That serves to advantage the neoliberal or neoconservative agenda.

This is, Furedi points out, easier to see in light of history:

Outwardly, the latest version of identity politics – which is distinguished by a synthesis of victim consciousness and concern with therapeutic validation – appears to have little in common with its 19th-century predecessor. However, in one important respect it represents a continuation of the particularist outlook and epistemology of 19th-century identitarians. Both versions insist that only those who lived in and experienced the particular culture that underpins their identity can understand their reality. In this sense, identity provides a patent on who can have a say or a voice about matters pertaining to a particular culture.

While I think they do a lot to frame the present political conditions, I don’t agree with everything in either of these articles. There are a few points of tension which I wish I knew more about.

The first is the connection made in some media today between the therapeutic needs of society’s victims and economic distributional justice. Perhaps it’s the nexus of these two political flows that makes workplace harassment and culture, in its most symbolic forms, such a hot topic today. It is, in a sense, the quintessential progressive-neoliberal problem, in that it aligns the politics of distribution with the politics of recognition while employing the therapeutic ethos. The argument goes: since market logic is fair (the neoliberal position), if there is unfair distribution it must be because the politics of recognition are unfair (progressivism). That’s because if there is inadequate recognition, then society’s victims will feel invalidated, preventing them from asserting themselves effectively in the workplace (therapeutic ethos). To put it another way, distributional inequality is being represented as the consequence of a market externality: the psychological difficulty imposed by social and economic inequality. A progressive politics of recognition is a therapeutic intervention designed to alleviate this psychological difficulty, thereby correcting the meritocratic market logic.

One valid reaction to this is: so what? Furedi and Fraser are both essentially card-carrying socialists. If you’re a card-carrying socialist (maybe because you have a universalist sense of distributional justice), then you might see the emphasis on workplace harassment as a distraction from a broader socialist agenda. But most people aren’t card-carrying socialist academics; most people go to work and would prefer not to be harassed.

The other thing I would like to know more about is to what extent the demands of the therapeutic ethos are a political-rhetorical convenience and to what extent they are a matter of ground truth. The sweeping therapeutic progressive narrative outlined by Furedi, wherein vast swathes of society (i.e., all women, all people of color, maybe all conservatives in liberal-dominated institutions, etc.) are so structurally victimized that therapy-grade levels of validation are necessary for them to function unharmed in universities and workplaces, is truly a tough pill to swallow. On the other hand, a theory of justice that discounts the genuine therapeutic needs of half the population can hardly be described as a “universalist” one.

Is there a resolution to this epistemic and political crisis? If I had to drop everything and look for one, it would be in the clinical psychological literature. What I want to know is how grounded the therapeutic ethos is in (a) scientific clinical psychology, and (b) the epidemiology of mental illness. Is it the case that structural inequality is so traumatizing (either directly or indirectly) that the fragmentation of epistemic culture is necessary as a salve for it? Or is this a political fiction? I don’t know the answer.

managerialism, continued

I’ve begun preliminary skimmings of Enteman’s Managerialism. It is a dense work of analytic philosophy, thick with argument. Sporadic summaries may not do it justice. That said, the principle of this blog is that the bar for ‘publication’ is low.

According to its introduction, Enteman’s Managerialism is written by a philosophy professor (Willard Enteman) who kept finding that the “great thinkers”–Adam Smith, Karl Marx–and the theories espoused in their writing kept getting debunked by his students. Contemporary examples showed that, contrary to conventional wisdom, the United States was not a capitalist country whose only alternative was socialism. In his observation, the United States in 1993 was neither strictly speaking capitalist, nor was it socialist. There was a theoretical gap that needed to be filled.

One of the concepts reintroduced by Enteman is Robert Dahl’s concept of polyarchy, or “rule by many”. A polyarchy is neither a dictatorship nor a democracy; rather, it is a form of government in which many different people with different interests (though probably not everybody) are in charge. It represents some necessary, but probably insufficient, conditions for democracy.

This view of power seems evidently correct in most political units within the United States. Now I am wondering if I should be reading Dahl instead of Enteman. It appears that Dahl was mainly offering this political theory in contrast to a view that posited that political power was mainly held by a single dominant elite. In a polyarchy, power is held by many different kinds of elites in contest with each other. At its democratic best, these elites are responsive to citizen interests in a pluralistic way, and this works out despite the inability of most people to participate in government.

I certainly recommend the Wikipedia articles linked above. I find I’m sympathetic to this view, having come around to something like it myself but through the perhaps unlikely path of Bourdieu.

This still limits the discussion of political power in terms of the powers of particular people. Managerialism, if I’m reading it right, makes the case that individual power is not atomic but is due to organizational power. This makes sense; we can look at powerful individuals having an influence on government, but a more useful lens could look to powerful companies and civil society organizations, because these shape the incentives of the powerful people within them.

I should make a shift I’ve made just now explicit. When we talk about democracy, we are often talking about a formal government, like a sovereign nation or municipal government. But when we talk about powerful organizations in society, we are no longer just talking about elected officials and their appointees. We are talking about several different classes of organizations–businesses, civil society organizations, and governments among them–interacting with each other.

It may be that that’s all there is to it. Maybe Capitalism is an ideology that argues for more power to businesses, Socialism is an ideology that argues for more power to formal government, and Democracy is an ideology that argues for more power to civil society institutions. These are zero-sum ideologies. Managerialism would be a theory that acknowledges the tussle between these sectors at the organizational level, as opposed to at the atomic individual level.

The reason why this is a relevant perspective to engage with today is that there has probably in recent years been a transfer of power (I might say ‘control’) from government to corporations–especially Big Tech (Google, Amazon, Facebook, Apple). Frank Pasquale makes the argument for this in a recent piece. He writes and speaks with a particular policy agenda that is far better researched than this blog post. But a good deal of the work is framed around the surprise that ‘governance’ might shift to a private company in the first place. This is a framing that will always be striking to those who are invested in the politics of the state; the very word “govern” is used, unmarked, for formal government, and so it is surprising when used to refer to anything else.

Managerialism, then, may be a way of pointing to an option where more power is held by non-state actors. Crucially, though, managerialism is not the same thing as neoliberalism, because neoliberalism is based on laissez-faire market ideology, and contemporary information infrastructure oligopolies look nothing like laissez-faire markets! Calling today’s transfer of power from government to corporations neoliberalism is quite anachronistic and misleading, really!

Perhaps managerialism, like polyarchy, is a descriptive term of a set of political conditions that does not represent an ideal, but a reality with potential to become an ideal. In that case, it’s worth investigating managerialism more carefully and determining what it is and isn’t, and why it is on the rise.

beginning Enteman’s Managerialism

I’ve been writing about managerialism without having done my homework.

Today I got a new book in the mail, Willard Enteman’s Managerialism: The Emergence of a New Ideology, a work of analytic political philosophy that came out in 1993. The gist of the book is that none of the dominant world ideologies of the time–capitalism, socialism, and democracy–actually describe the world as it functions.

Enter Enteman’s managerialism, which considers a society composed of organizations, not individuals, and social decisions as a consequence of the decisions of organizational managers.

It’s striking that this political theory has been around for so long, though it is perhaps more relevant today because of large digital platforms.

mathematical discourse vs. exit; blockchain applications

Continuing my effort to tie together the work on this blog into a single theory, I should address the theme of an old post that I’d forgotten about.

The post discusses the discourse theory of law, attributed to the later, matured Habermas. According to it, the law serves as a transmission belt between legitimate norms established by civil society and a system of power, money, and technology. When the law is both efficacious and legitimate, society prospers. The blog post toys with the idea of a normatively aligned algorithmic law established in a similar way: through norms set by civil society.

I wrote about this in 2014 and I’m surprised to find myself revisiting these themes in my work today on privacy by design.

What this requires, however, is that civil society must be able to engage in mathematical discourse, or mathematized discussion of norms. In other words, there has to be an intersection of civil society and science for this to make sense. I’m reminded of how inspired I’ve felt by Nick Doty’s work on multistakeholderism in Internet standards as a model.

I am more skeptical of this model than I have been before, if only because in the short term I’m unsure if a critical mass of scientific talent can engage with civil society well enough to change the law. This is because scientific talent is a form of capital which has no clear incentive for self-regulation. Relatedly, I’m no longer as confident that civil society carries enough clout to change policy. I must consider other options.

The other option, besides voicing one’s concerns in civil society, is, of course, exit, in Hirschman’s sense. Theoretically an autonomous algorithmic law could be designed such that it encourages exit from other systems into itself. Or, more ecologically, competing autonomous (or decentralized, …) systems can be regulated by an exit mechanism. This is in fact what happens now with blockchain technology and cryptocurrency. Whenever there is a major failure of one of these currencies, there is a fork.

Recap

Sometimes traffic on this blog draws attention to an old post from years ago. This can be a reminder that I’ve been repeating myself, encountering the same themes over and over again. This is not necessarily a bad thing, because I hope to one day compile the ideas from this blog into a book. It’s nice to see what points keep resurfacing.

One of these points is that liberalism assumes equality, but this is challenged by society’s need for control structures, which creates inequality, which then undermines liberalism. This post calls in Charles Taylor (writing about Hegel!) to make the point. This post makes the point more succinctly. I’ve been drawing on Beniger for the ‘society needs control to manage its own integration’ thesis. I’ve pointed to the term managerialism as referring to an alternative to liberalism based on the acknowledgement of this need for control structures. Managerialism looks a lot like liberalism, it turns out, but it justifies things on different grounds and does not get so confused. As an alternative, more Bourdieusian view of the problem, I consider the relationship between capital, democracy, and oligarchy here. There are some useful names for what happens when managerialism goes wrong and people seem disconnected from each other–anomie–or from the control structures–alienation.

A related point I’ve made repeatedly is the tension between procedural legitimacy and getting people the substantive results that they want. That post about Hegel goes into this. But it comes up again in very recent work on antidiscrimination law and machine learning. What this amounts to is that attempts to come up with a fair, legitimate procedure are going to divide up the “pie” of resources, or be perceived to divide up the pie of resources, somehow, and people are going to be upset about it, however the pie is sliced.

A related theme that comes up frequently is mathematics. My contention is that effective control is a technical accomplishment that is mathematically optimized and constrained. There are mathematical results that reveal necessary trade-offs between values. Data science has been misunderstood as positivism when in fact it is a means of power. Technical knowledge and technology are forms of capital (Bourdieu again). Perhaps precisely because it is a rare form of capital, science is politically distrusted.

To put it succinctly: lack of mathematics education, due to lack of opportunity or mathophobia, leads to alienation and anomie in an economy of control. This is partly reflected in the chaotic disciplinarity of the social sciences, especially as they react to computational social science, at the intersection of social sciences, statistics, and computer science.

Lest this all seem like an argument for the mathematical certitude of totalitarianism, I have elsewhere considered and rejected this possibility of ‘instrumentality run amok’. I’ve summarized these arguments here, though this appears to have left a number of people unconvinced. I’ve argued this further, and think there’s more to this story (a formalization of Scott’s arguments from Seeing Like a State, perhaps), but I must admit I don’t have a convincing solution to the “control problem” yet. However, it must be noted that the answer to the control problem is an empirical or scientific prediction, not a political inclination. Whether or not it is the most interesting or important question regarding technological control has been debated to a stalemate, as far as I can tell.

As I don’t believe singleton control is a likely or interesting scenario, I’m more interested in practical ways of offering legitimacy or resistance to control structures. I used to think the “right” political solution was a kind of “hacker class consciousness”; I don’t believe this any more. However, I still think there’s a lot to the idea of recursive publics as actually existing alternative power structures. Platform coops are interesting for the same reason.

All this leads me to admit my interest in the disruptive technology du jour, the blockchain.

On achieving social equality

When evaluating a system, we have a choice of evaluating its internal functions–the inside view–or evaluating its effects situated in a larger context–the outside view.

Decision procedures (whether they are embodied by people or performed in concert with mechanical devices–I don’t think this distinction matters here) for sorting people are just such a system. If I understand correctly, the question of which principles animate antidiscrimination law hinges on this difference between the inside and outside view.

We can look at a decision-making process and evaluate whether, as a procedure, it achieves its goals of e.g. assigning credit scores without bias against certain groups. Even including the processes of gathering evidence or data, such a system can in principle be bounded and evaluated by its ability to perform its goals. We do seem to care about the difference between procedural discrimination and procedural nondiscrimination. For example, an overtly racist policy that ignores true talent and opportunity seems worse than a bureaucratic system that is indifferent to external inequality between groups, an inequality that then gets reflected in decisions made according to other factors that are merely correlated with race.

The latter case has been criticized in the outside view. The criticism is captured by the phrasing that “algorithms can reproduce existing biases”. The supposedly neutral algorithm (which, again, can be either human or machine) is not neutral in its impact, because its considerations of, e.g., business interest are indifferent to the conditions outside it. The business is attracted to wealth and opportunity, which are held disproportionately by some part of the population, so the business is attracted to that population.

There is great wisdom in recognizing that institutions that are neutral in the inside view will often reproduce bias in the outside view. But it is incorrect to therefore conflate an institution that is neutral in the inside view with one that is biased in the inside view, even though their effects may under some circumstances be the same. When I say it is “incorrect”, I mean that they are in fact different because, for example, if the external conditions of a procedurally neutral institution change, then it will reflect those new conditions. A procedurally biased institution will not reflect those new conditions in the same way.

Empirically it is very hard to tell when an institution is being procedurally neutral, and indeed this is the crux of an enormous amount of political tension today. The first line of defense of an institution accused of bias is to claim that its procedural neutrality is merely reflecting environmental conditions outside of its control. This is unconvincing for many politically active people. It seems to me that it is now much more common for institutions to avoid this problem by explicitly declaring their bias. Rather than attempt the seemingly impossible task of defending their rigorous neutrality, it’s easier to declare where one stands on the issue of resource allocation globally and adjust one’s procedure accordingly.

I don’t think this is a good thing.

One consequence of evaluating all institutions based on their global, “systemic” impact as opposed to their procedural neutrality is that it hollows out the political center. The evidence is in: politics has become more and more polarized. This is inevitable if politics becomes so explicitly about maintaining or reallocating resources as opposed to building neutrally legitimate institutions. When one party in Congress considers a tax bill which seems designed mainly to enrich its own constituencies at the expense of the other’s, things have gotten out of hand. The very idea of a unified ‘good government’ has been all but abandoned.

An alternative is a commitment to procedural neutrality in the inside view of institutions, or at least some institutions. The fact that there are many different institutions that may have different policies is indeed quite relevant here. For while it is commonplace to say that a neutral institution will “reproduce existing biases”, “reproduction” is not a particularly helpful word here. Neither is “bias”. What we can say more precisely is that the operations of a procedurally neutral institution will not change the distribution of resources, even when that distribution is unequal.

But if we do not hold all institutions accountable for correcting the inequality of society, isn’t that the same thing as approving of the status quo, which is so unequal? A thousand times no.

First, there’s the problem that many institutions are not, currently, procedurally neutral. Procedural neutrality is a higher standard than what many institutions are currently held to. Consider what is widely known about human beings and their implicit biases. One good argument for transferring decision-making authority to machine learning algorithms, even standard ones not augmented for ‘fairness’, is that they will not have the same implicit, inside, biases as the humans that currently make these decisions.

Second, there’s the fact that responsibility for correcting social inequality can be taken on by some institutions that are dedicated to this task while others are procedurally neutral. For example, one can consistently believe in the importance of a progressive social safety net combined with procedurally neutral credit reporting. Society is complex and perhaps rightly has many different functioning parts; not all the parts have to reflect socially progressive values for the arc of history to bend towards justice.

Third, there is reason to believe that even if all institutions were procedurally neutral, there would eventually be social equality. This has to do with the mathematically bulletproof but often ignored phenomenon of regression towards the mean. When values are sampled from a process at random, their average will approach the mean of the distribution as more values are accumulated. In terms of the allocation of resources in a population, there is some random variation in the way resources flow. When institutions are fair, inequality in resource allocation will settle into an unbiased distribution. While there may continue to be some apparent inequality due to disorganized heavy-tail effects, these will not be biased, in a political sense.
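This third point can be sketched with a toy model (my own, not from the post, and all parameters hypothetical): suppose each period everyone’s holdings persist only partially, and otherwise regress toward a common population mean. Then initially unequal group means converge geometrically.

```python
# Toy sketch of group means under a mean-reverting resource process.
# Assumptions (mine): persistence rho < 1, a shared population mean of 100,
# and two hypothetical starting group means of 40 and 160.

def step(mean, rho=0.8, population_mean=100.0):
    """One period: holdings persist with weight rho and regress toward
    the population mean with weight (1 - rho)."""
    return rho * mean + (1 - rho) * population_mean

group_a, group_b = 40.0, 160.0  # hypothetical starting group means
for t in range(30):
    group_a, group_b = step(group_a), step(group_b)

gap = abs(group_a - group_b)
print(group_a, group_b, gap)  # both means approach 100; the gap shrinks
```

The gap between groups decays by a factor of rho each period, so after thirty periods the initial 120-point difference is a fraction of a point. The caveat, of course, is that this only holds if no institution adds a persistent group-dependent term to the process.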

Fourth, there is the problem of political backlash. Whenever political institutions are weak enough to be modified towards what is purported to be a ‘substantive’ or outside view neutrality, that will always be because some political coalition has attained enough power to swing the pendulum in their favor. The more explicit they are about doing this, the more it will mobilize the enemies of this coalition to try to swing the pendulum back the other way. The result is war by other means, the outcome of which will never be fair, because in war there are many who wind up dead or injured.

I am arguing for a centrist position on these matters, one that favors procedural neutrality in most institutions. This is not because I don’t care about substantive, “outside view” inequality. On the contrary, it’s because I believe that partisan bickering that explicitly undermines the inside neutrality of institutions undermines substantive equality. Partisan bickering over the scraps within narrow institutional frames is a distraction from, for example, the way the most wealthy avoid taxes while the middle class pays even more. There is a reason why political propaganda that induces partisan divisions is a weapon. Agreement about procedural neutrality is a core part of civic unity that allows for collective action against the very most abusively powerful.

References

Lipton, Zachary C., Alexandra Chouldechova, and Julian McAuley. “Does mitigating ML’s disparate impact require disparate treatment?” (2017).

Notes on fairness and nondiscrimination in machine learning

There has been a lot of work done lately on “fairness in machine learning” and related topics. It cannot be a coincidence that this work has paralleled a rise in political intolerance that is sensitized to issues of gender, race, citizenship, and so on. I more or less stand by my initial reaction to this line of work. But very recently I’ve done a deeper and more responsible dive into this literature and it’s proven to be insightful beyond the narrow problems which it purports to solve. These are some notes on the subject, ordered so as to get to the point.

The subject of whether and to what extent computer systems can enact morally objectionable bias goes back at least as far as Friedman and Nissenbaum’s 1996 article, in which they define “bias” as systematic unfairness. They mean this very generally, not specifically in a political sense (though inclusive of it). Twenty years later, Kleinberg et al. (2016) prove that there are multiple, competing notions of fairness in machine classification which generally cannot be satisfied all at once; they must be traded off against each other. In particular, a classifier that uses all available information to optimize accuracy–one that achieves what these authors call calibration–cannot also have equal false positive and false negative rates across population groups (read: race, sex), a combination of properties that Hardt et al. (2016) call “equalized odds”. This work was no doubt inspired by a now very famous ProPublica article asserting that a particular kind of commercial recidivism prediction software was “biased against blacks” because it had a higher false positive rate for black defendants than for white defendants. Because bail and parole decisions are made according to predicted recidivism, this led to cases where a non-recidivist was denied bail because they were black, which sounds unfair to a lot of people, including myself.
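The trade-off admits a minimal numeric sketch (toy numbers of my own, not Kleinberg et al.’s): score two groups in a perfectly calibrated way, but give them different base rates; thresholding the calibrated scores then yields unequal false positive rates.

```python
# Two groups; within each, people score either 0.9 or 0.1, and outcomes
# match the scores exactly (a score of s means a fraction s reoffend), so
# the scores are calibrated by construction. Groups differ only in what
# share of them scores high, i.e. in their base rate.

def false_positive_rate(share_high, high=0.9, low=0.1):
    """FPR among true negatives when classifying 'positive' above 0.5:
    only the high scorers are flagged, and a fraction (1 - high) of them
    are in fact negatives."""
    neg_high = share_high * (1 - high)       # flagged non-recidivists
    neg_low = (1 - share_high) * (1 - low)   # unflagged non-recidivists
    return neg_high / (neg_high + neg_low)

fpr_a = false_positive_rate(share_high=0.5)  # group A: base rate 0.50
fpr_b = false_positive_rate(share_high=0.1)  # group B: base rate 0.18
print(fpr_a, fpr_b)  # unequal false positive rates despite calibration
```

Both groups are treated by an identical, calibrated rule, yet the lower-base-rate group has a much lower false positive rate; equalizing the two rates would require giving up calibration, which is the impossibility in miniature.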

While I understand that there is a lot of high quality and well-intentioned research on this subject, I haven’t found anybody who could tell me why the solution to this problem wasn’t simply to stop using predicted recidivism to set bail, as opposed to futzing around with a recidivism prediction algorithm which seems to have been doing its job (Dieterich et al., 2016). Recidivism rates are actually correlated with race (Hartney and Vuong, 2009). This is probably because of centuries of systematic racism. If you are serious about remediating historical inequality, the least you could do is cut black people some slack on bail.

This gets to what for me is the most baffling aspect of this whole research agenda, one that I didn’t have the words for before reading Barocas and Selbst (2016). A point well made by them is that the interpretation of anti-discrimination law, which motivates a lot of this research, is fraught with tensions that complicate its application to data mining.

“Two competing principles have always undergirded anti-discrimination law: nondiscrimination and antisubordination. Nondiscrimination is the narrower of the two, holding that the responsibility of the law is to eliminate the unfairness individuals experience at the hands of decisionmakers’ choices due to membership in certain protected classes. Antisubordination theory, in contrast, holds that the goal of antidiscrimination law is, or at least should be, to eliminate status-based inequality due to membership in those classes, not as a matter of procedure, but substance.” (Barocas and Selbst, 2016)

More specifically, these two principles motivate different interpretations of the two pillars of anti-discrimination law, disparate treatment and disparate impact. I draw on Barocas and Selbst for my understanding of each:

A judgment of disparate treatment requires either formal disparate treatment (across protected groups) of similarly situated people, or an intent to discriminate. Since in a large data mining application protected group membership will be proxied by many other factors, it’s not clear if the ‘formal’ requirement makes much sense here. And since machine learning applications only very rarely have racist intent, that option seems hard to apply as well. While there are interpretations of these criteria that are tougher on decision-makers (e.g., imputing unconscious intent), these seem to be motivated by antisubordination rather than the weaker nondiscrimination principle.

A judgment of disparate impact is perhaps more straightforward, but it can be mitigated in cases of “business necessity”, which (to get to the point) is vague enough to plausibly include optimization in a technical sense. Once again, there is nothing to see here from a nondiscrimination standpoint, though an antisubordinationist would rather that these decision-makers had to take correcting for historical inequality into account.

I infer from their writing that Barocas and Selbst believe that antisubordination is an important principle for antidiscrimination law. In any case, they maintain that making the case for applying antidiscrimination law to data mining effectively requires a commitment to “substantive remediation”. This is insightful!

Just to put my cards on the table: as much as I may like the idea of substantive remediation in principle, I personally don’t think that every application of nondiscrimination law needs to be animated by it. For many institutions, narrow nondiscrimination seems to be adequate if not preferable. I’d prefer remediation to occur through other specific policies, such as more public investment in schools in low-income districts. Perhaps for this reason, I’m not crazy about “fairness in machine learning” as a general technical practice. It seems to me to be trying to solve social problems with a technical fix, which despite being quite technical myself I don’t always see as a good idea. It seems like in most cases you could have a machine learning mechanism based on normal statistical principles (the learning step) and then use a decision procedure separately that achieves your political ends.

I wish that this research community (and here I mean the qualitative research community surrounding it more than the technical community, which tends to define its terms carefully) would be more careful about the ways it talks about “bias”, because often it seems to encourage a conflation between statistical or technical senses of bias and political senses. The latter carries so much political baggage that it can be intimidating to try to wade in and untangle the two senses. And it’s important to do this untangling, because while bad statistical bias can lead to political bias, it can, depending on the circumstances, lead to either “good” or “bad” political bias. But it’s important, for the sake of numeracy (mathematical literacy), to understand that even if a statistically bad process has a politically “good” outcome, that is still, statistically speaking, bad.

My sense is that there are interpretations of nondiscrimination law that make it illegal to take certain facts about sensitive properties like race and sex into account when making certain judgments. There are also theorems showing that if you don’t take those sensitive properties into account, you will discriminate against the groups they define by accident, because those sensitive variables are correlated with nearly everything else you would use to judge people. As a general principle, while being ignorant may sometimes make things better when you are extremely lucky, in general it makes things worse! This should be a surprise to nobody.
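The proxy effect can be made concrete with a hypothetical sketch (all names and numbers are invented for illustration): a decision rule that never sees group membership, only a correlated variable like neighborhood, still produces disparate outcomes by group.

```python
# Hypothetical "redundant encoding" example: neighborhood proxies for group.
# Group A is concentrated in the north, group B in the south; the rule is
# blind to group but keyed to neighborhood.

people = (
    [("A", "north")] * 90 + [("A", "south")] * 10 +
    [("B", "north")] * 10 + [("B", "south")] * 90
)

def approve(neighborhood):
    # Decision rule that never consults group membership:
    return neighborhood == "north"

def approval_rate(group):
    members = [n for g, n in people if g == group]
    return sum(approve(n) for n in members) / len(members)

print(approval_rate("A"), approval_rate("B"))  # 0.9 vs 0.1
```

Dropping the sensitive variable changed nothing about the disparity, because the proxy carries the same information; this is the accidental discrimination the theorems describe.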

References

Barocas, Solon, and Andrew D. Selbst. “Big data’s disparate impact.” (2016).

Dieterich, William, Christina Mendoza, and Tim Brennan. “COMPAS risk scales: Demonstrating accuracy equity and predictive parity.” Northpoint Inc (2016).

Friedman, Batya, and Helen Nissenbaum. “Bias in computer systems.” ACM Transactions on Information Systems (TOIS) 14.3 (1996): 330-347.

Hardt, Moritz, Eric Price, and Nati Srebro. “Equality of opportunity in supervised learning.” Advances in Neural Information Processing Systems. 2016.

Hartney, Christopher, and Linh Vuong. “Created equal: Racial and ethnic disparities in the US criminal justice system.” (2009).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).