Digifesto


Managerialism as political philosophy

Technologically mediated spaces and organizations are frequently described by their proponents as alternatives to the state. From David Clark’s maxim of Internet architecture, “We reject: kings, presidents and voting. We believe in: rough consensus and running code”, to cyberanarchist efforts to bypass the state via blockchain technology, to the claims that Google and Facebook, as they mediate between billions of users, are relevant non-state actors in international affairs, to Lessig’s (1999) ever-prescient claim that “Code is Law”, there is undoubtedly something going on in technology’s relationship to the state that is worth paying attention to.

There is an intellectual temptation (one that I myself am prone to) to take seriously the possibility of a fully autonomous technological alternative to the state. Something like a constitution written in source code has an appeal: it would be clear, precise, and presumably based on something like a consensus of those who participate in its creation. It is also an idea that can seem frightening (give up all control to the machines?) or ridiculous. The example of The DAO, the Ethereum ‘distributed autonomous organization’ that raised millions of dollars only to have them stolen in a technical hack, demonstrates the value of traditional legal institutions, which protect the parties to a contract with processes that ensure fairness in its interpretation and enforcement.

It is more sociologically accurate, in any case, to consider software, hardware, and data collection not as autonomous actors but as parts of a sociotechnical system that maintains and modifies them. This is obvious to practitioners, who spend their lives negotiating the social systems that create technology. For those for whom it is not obvious, there are reams of literature on the social embeddedness of “algorithms” (Gillespie, 2014; Kitchin, 2017). These themes are recited again in recent critical work on Artificial Intelligence; there are those who wisely point out that a functioning artificially intelligent system depends on a lot of labor (those who created and cleaned its data, those who built the systems it is implemented on, those who monitor the system as it operates) (Kelkar, 2017). So rather than discussing the role of particular technologies as alternatives to the state, we should shift our focus to the great variety of sociotechnical organizations.

One thing that is apparent, when taking this view, is that states, as traditionally conceived, are themselves sociotechnical organizations. This is, again, an obvious point well illustrated in economic histories such as Beniger’s (1986). Communications infrastructure is necessary for the control and integration of society, let alone effective military logistics. The relationship between the state and the industrial actors developing this infrastructure, whether they are building roads, running a postal service, laying rail, telegraph, or telephone wires, launching satellites, designing Internet protocols, or now operating social media, has always been interesting: a story of great fortunes and shifts in power.

What is apparent after a serious look at this history is that political theory, especially liberal political theory as it developed from the 1700s onward as a theory of the relationship between individuals bound by social contract, emerging from nature to develop a just state, leaves out essential facts about how society has ever actually been governed. Control of communications and control infrastructure has never been equally dispersed and has always been a source of power. Late modern rearticulations of liberal theory and reactions against it (Rawls and Nozick, both) leave out the technical constraints on the possibility of governance, and even on the constitution of the subject on which a theory of justice would have its ground.

Were political theory to begin from a more realistic foundation, it would need to acknowledge the existence of sociotechnical organizations as a political unit. There is a term for this view, “managerialism”, which, as far as I can tell, is used somewhat pejoratively, like “neoliberalism”. As an “-ism”, the implication is that managerialism is an ideology. When we talk about ideologies, what we are doing is looking from an external position onto an interdependent set of beliefs in their social context and identifying, through genealogical method or logical analysis, how those beliefs are symptoms of underlying causes that are not precisely as represented within the beliefs themselves. For example, one critiques neoliberal ideology, which purports that markets are the best way to allocate resources and advocates for the expansion of market logic into more domains of social and political life, by pointing out that markets are great for reallocating resources to capitalists, who bankroll neoliberal ideologues, but that many people who are subject to neoliberal policies do not benefit from them. While this is a bit of a parody of both neoliberalism and the critiques of it, you’ll catch my meaning.

We might avoid the pitfalls of an ideological managerialism (I’m not sure what those would be, exactly, having not read the critiques) by taking from it, to begin with, only the urgency of describing social reality in terms of organization and management, without assuming any particular normative stake. It will be argued that this is not a neutral stance, because to posit that there is organization, and that there is management, is to offend certain kinds of (mainly academic) thinkers. I get the sense that this offendedness is similar to the offense taken by certain critical scholars to the idea that there is such a thing as scientific knowledge, especially social scientific knowledge. Namely, it is offense taken at the idea that a patently obvious fact entails one’s own ignorance of otherwise very important expertise. This is encouraged by the institutional incentives of social science research. Social scientists are required to maintain an aura of expertise even when their particular sub-discipline excludes from its analysis the very systems of bureaucratic and technical management that their own university depends on. University bureaucracies are, strangely, in the business of hiding their managerialist reality from their own faculty, as alternative avenues of research inquiry are of course compelling in their own right. When managerialism cannot be contested on epistemic grounds (because the bluff has been called), it can be rejected on aesthetic grounds: managerialism is not “interesting” to a discipline, perhaps because it does not engage with the personal and political motivations that constitute it.

What sets managerialism apart from other ideologies, however, is that when we examine its roots in social context, we do not discover a contradiction. Managerialism is not, as far as I can tell, successful as a popular ideology. Managerialism is attractive only to that rare segment of the population that works closely with bureaucratic management. It is here that the technical constraints on information flow and its potential uses, the limits of autonomy especially as it confronts the autonomies of others, the persistence of hierarchy despite the purported flattening of social relations, and so on, become unavoidable features of life. And though one discovers in these situations plenty of managerial incompetence, one also comes to terms with why that incompetence is a necessary feature of the organizations that maintain it.

Little of what I am saying here is new, of course. It is only new in relation to more popular or appealing forms of criticism of the relationship between technology, organizations, power, and ethics. So often the political theory implicit in these critiques is a form of naive egalitarianism that sees any differential in power as an ethical red flag. Since technology can give organizations a lot of power, this generates a lot of heat around technology ethics. Starting from the perspective of an ethicist, one sees an uphill battle against an increasingly inscrutable and unaccountable sociotechnical apparatus. What I am proposing is that we look at things a different way. If we start from general principles about technology and its role in organizations, the kinds of principles one would get from an analysis of microeconomic theory, artificial intelligence as a mathematical discipline, and so on, one can try to formulate the managerial constraints that truly confront society. These constraints are part of how subjects are constituted and should inform what we see as “ethical”. If we can broker between these hard constraints and the societal values at stake, we might come up with a principle of justice that, if unpopular, may at least be realistic. This would be a contribution, at the end of the day, to political theory, not as an ideology, but as a philosophical advance.

References

Beniger, James R. The Control Revolution: Technological and Economic Origins of the Information Society (1986).

Bird, Sarah, et al. “Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI.” (2016).

Gillespie, Tarleton. “The relevance of algorithms.” Media technologies: Essays on communication, materiality, and society 167 (2014).

Kelkar, Shreeharsh. “How (Not) to Talk about AI.” Platypus, 12 Apr. 2017, blog.castac.org/2017/04/how-not-to-talk-about-ai/.

Kitchin, Rob. “Thinking critically about and researching algorithms.” Information, Communication & Society 20.1 (2017): 14-29.

Lessig, Lawrence. “Code is law.” The Industry Standard 18 (1999).

three kinds of social explanation: functionalism, politics, and chaos

Roughly speaking, I think there are three kinds of social explanation. I mean “explanation” in a very thick sense; an explanation is an account of why some phenomenon is the way it is, grounded in some kind of theory that could be used to explain other phenomena as well. To say there are three kinds of social explanation is roughly equivalent to saying there are three ways to model social processes.

The first of these kinds of social explanation is functionalism. This explains some social phenomenon in terms of the purpose that it serves. Generally speaking, fulfilling this purpose is seen as necessary for the survival or continuation of the phenomenon. Maybe it simply is the continued survival of the social organism that is its purpose. A kind of agency, though probably a very limited one, is ascribed to the entire social process. The activity internal to the process is then explained by the purpose that it serves.

The second kind of social explanation is politics. Political explanations focus on the agencies of the participants within the social system and reject the unifying agency of the whole. Explanations based on class conflict or personal ambition are political explanations. Political explanations of social organization make it out to be the result of a complex of incentives and activity. Where there is social regularity, it is because of the political interests of some of its participants in the continuation of the organization.

The third kind of social explanation is hardly an explanation at all. It is explanation by chaos. This sort of explanation is quite rare, as it does not provide much of the psychological satisfaction we like from explanations. I mention it here because I think it is an underutilized mode of explanation. In large populations, much of the activity that happens will do so by chance. Even large organizations may form according to stochastic principles that do not depend on any real kind of coordinated or purposeful effort.
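
To make the stochastic point concrete, here is a minimal sketch in Python of one way to read “stochastic principles”: a standard random-tie toy model with arbitrary parameters (the agent count and tie probabilities below are illustrative, not drawn from any data). Agents form ties purely by coin flip, with no goals and no coordination, and yet once the tie probability passes roughly 1/n, a single large cluster, one an observer might be tempted to call an “organization”, reliably appears.

import random
from collections import defaultdict

def largest_cluster(n, p, seed=0):
    """Simulate n agents forming ties purely at random (probability p per pair)
    and return the size of the largest connected cluster that emerges."""
    rng = random.Random(seed)
    parent = list(range(n))  # union-find structure over agents

    def find(x):
        # Find the representative of x's cluster, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # No agent intends to build an organization; each possible tie is a coin flip.
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                union(i, j)

    # Tally cluster sizes by representative and report the largest.
    sizes = defaultdict(int)
    for i in range(n):
        sizes[find(i)] += 1
    return max(sizes.values())

if __name__ == "__main__":
    n = 1000
    for p in (0.0005, 0.001, 0.002, 0.004):
        print(f"p={p}: largest cluster has {largest_cluster(n, p)} of {n} agents")

The point of the toy model is only that large-scale structure is not, by itself, evidence of purpose or politics; under these assumptions it falls out of the arithmetic of chance.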

It is important to consider chaotic explanations of social processes when we consider the limits of political expertise. If we have a low opinion of any particular person’s ability to understand their social environment and act strategically, then we must accept that many of their “politically” motivated actions will be based on misconceptions and will therefore be, in an objective sense, random. At this point political explanations become facile, and social regularity has to be explained either in terms of the ability of social organizations qua organizations to survive, or in a deflationary way: i.e., the organization is not really there, but only in the eye of the beholder.

And now for something completely different: Superintelligence and the social sciences

This semester I’ll be co-organizing, with Mahendra Prasad, a seminar on the subject of “Superintelligence and the Social Sciences”.

How I managed to find myself in this role is a bit of a long story. But as I’ve had a longstanding curiosity about this topic, I am glad to be putting energy into the seminar. It’s a great opportunity to get exposure to some of the very interesting work done by MIRI on this subject. It’s also a chance to thoroughly investigate (and critique) Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

I find the subject matter perplexing because in many ways it forces the very cultural and intellectual clash that I’ve been preoccupied with elsewhere on this blog: the failure of social scientists and engineers to communicate. Or, perhaps, the failure of qualitative researchers and quantitative researchers to communicate. Whatever you want to call it.

Broadly, the question at stake is: what impact will artificial intelligence have on society? This question is already misleading, since in the imagination of most people who haven’t been trained in the subject, “artificial intelligence” refers to something of a science fiction scenario, whereas to practitioners, “artificial intelligence” is, basically, just software. Just as the press went wild last year speculating about “algorithms”, by which it meant software, so too is the press excited about artificial intelligence, which is just software.

But the concern that software is responsible for more and more of the activity in the world and that it is in a sense “smarter than us”, and especially the fear that it might become vastly smarter than us (i.e. turning into what Bostrom calls a “superintelligence”), is pervasive enough to drive research funding into topics like “AI Safety”. It also is apparently inspiring legal study into the regulation of autonomous systems. It may also have implications for what is called, vaguely, “social science”, though increasingly it seems like nobody really knows what that is.

There is a serious epistemological problem here. Some researchers are trying to predict or forewarn of the societal impact of agents that are, by assumption, beyond their comprehension, on the premise that such agents may come into existence at any moment.

This is fascinating but one has to get a grip.

a refinement

If knowledge is situated, and scientific knowledge is the product of rational consensus among diverse constituents, then a social organization that unifies many different social units functionally will have a ‘scientific’ ideology or rationale that is specific to the situation of that organization.

In other words, the political ideology of a group of people will be part of the glue that constitutes the group. Social beliefs will be a component of the collective identity.

A social science may be the elaboration of one such ideology. Many have been. So social scientific beliefs are about capturing the conditions for the social organization that maintains those beliefs. (cf. Nietzsche on tablets of values)

There are good reasons to teach these specialized social sciences as a part of vocational training for certain functions. For example, people who work in finance or business can benefit from learning economics.

Only in an academic context does the professional identity of disciplinary affiliation matter. This academic political context creates great division and confusion that merely reflects the disorganization of the academic system.

This disorganization is fruitful precisely because it allows for individuality (cf. Horkheimer). However, it is also inefficient and easy to corrupt. Hmm.

Against this, not all knowledge is situated. Some is universal. Its universality is due to its pragmatic usefulness in technical design. Since technical design acts on everyone, even when their own situated understanding does not include it, this kind of knowledge has universal ground (in violence, sadly, but maybe also in other ways).

The question is whether, anywhere in the technically correct understanding of social organization (something we might see in Beniger), there is room for the articulation of what is supposed to be great and worthy of man (see Horkheimer).

I have thought for a long time that there is probably something like this describable in terms of complexity theory.

Arendt on social science

Despite my first (perhaps kneejerk) reaction to Arendt’s The Human Condition, as I read further I am finding it one of the most profoundly insightful books I’ve ever read.

It is difficult to summarize: not because it is written badly, but because it is written well. I feel every paragraph has real substance to it.

Here’s an example: Arendt’s take on the modern social sciences:

To gauge the extent of society’s victory in the modern age, its early substitution of behavior for action and its eventual substitution of bureaucracy, the rule of nobody, for personal rulership, it may be well to recall that its initial science of economics, which substitutes patterns of behavior only in this rather limited field of human activity, was finally followed by the all-comprehensive pretension of the social sciences which, as “behavioral sciences,” aim to reduce man as a whole, in all his activities, to the level of a conditioned and behaving animal. If economics is the science of society in its early stages, when it could impose its rules of behavior only on sections of the population and on parts of their activities, the rise of the “behavioral sciences” indicates clearly the final stage of this development, when mass society has devoured all strata of the nation and “social behavior” has become the standard for all regions of life.

To understand this paragraph, one has to know what Arendt means by society. She introduces the idea of society in contrast to the Ancient Greek polis, the sphere of life in Antiquity where the head of a household could meet with other heads of households to discuss public matters. Importantly for Arendt, all concerns relating to the basic maintenance and furthering of life (food, shelter, reproduction, etc.) were part of the private domain, not the polis. Participation in public affairs was for those who were otherwise self-sufficient. In their freedom, they would compete to outdo each other in acts and words that would resonate beyond their lifetimes: deeds, through which they could aspire to immortality.

Society, in contrast, is what happens when the mass of people begin to organize themselves as if they were part of one household. The conditions of maintaining life are public. In modern society, people are defined by their job; even being the ruler is just another job. Deviation from one’s role in society in an attempt to make a lasting change, a deed, is considered disruptive, and so is rejected by the norms of society.

From here, we get Arendt’s critique of the social sciences, which is essentially this: it is only possible to have a social science that finds regularities in people’s behavior when their behavior has been regularized by society. So the social sciences are not discovering a truth about people en masse that was not known before. The social sciences aren’t discovering things about people; they are rather reflecting the society as it is. The more the masses are effectively ‘socialized’, the more pervasive a generalizing social science can be, because only under those conditions are there regularities there to be captured as knowledge and taught.