Digifesto

Tag: critical algorithms studies

Reading O’Neil’s Weapons of Math Destruction

I probably should have already read Cathy O’Neil’s Weapons of Math Destruction. It was a blockbuster in the tech/algorithmic ethics discussion. It’s written by an accomplished mathematician, which I admire. I’ve also now seen O’Neil perform bluegrass music twice in New York City and think her band is great. At last I’ve found a copy and have started to dig in.

On the other hand, as is probably clear from other blog posts, I have a hard time swallowing a lot of the gloomy political work that puts the role of algorithms in society in such a negative light. I encounter it very frequently, and every time I feel that some misunderstanding must have happened; something seems off.

It’s very clear that O’Neil can’t be accused of mathophobia or of not understanding the complexity of the algorithms at play, which is an easy way to throw doubt on the arguments of some technology critics. Yet perhaps because it’s a popular book and not an academic work of Science and Technology Studies, I haven’t seen its arguments parsed through and analyzed in much depth.

This is a start. These are my notes on the introduction.

O’Neil describes the turning point in her career where she soured on math. After being an academic mathematician for some time, O’Neil went to work as a quantitative analyst for D.E. Shaw. She saw it as an opportunity to work in a global laboratory. But then the 2008 financial crisis made her see things differently.

The crash made it all too clear that mathematics, once my refuge, was not only deeply entangled in the world’s problems but also fueling many of them. The housing crisis, the collapse of major financial institutions, the rise of unemployment–all had been aided and abetted by mathematicians wielding magic formulas. What’s more, thanks to the extraordinary powers that I loved so much, math was able to combine with technology to multiply the chaos and misfortune, adding efficiency and scale to systems I now recognized as flawed.

O’Neil, Weapons of Math Destruction, p.2

As an independent reference on the causes of the 2008 financial crisis, which of course has been a hotly debated and disputed topic, I point to Sassen’s 2017 “Predatory Formations” article. Indeed, the systems that developed the sub-prime mortgage market were complex, opaque, and hard to regulate. Something went seriously wrong there.

But was it mathematics that was the problem? This is where I get hung up. I don’t understand the mindset that would attribute a crisis in the financial system to the use of abstract, logical, rigorous thinking. Consider the fact that there would not have been a financial crisis if there had not been a functional financial services system in the first place. Getting a mortgage and paying it off, and the systems that allow this to happen, all require mathematics to function. When these systems operate normally, they are taken for granted. When they suffer a crisis, when the system fails, the mathematics takes the blame. But a system can’t suffer a crisis if it didn’t start working rather well in the first place–otherwise, nobody would depend on it. Meanwhile, the regulatory reaction to the 2008 financial crisis required, of course, more mathematicians working to prevent the same thing from happening again.
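To take just one mundane example of the mathematics these systems quietly run on: the fixed-rate mortgage payment is itself a closed-form formula. A minimal sketch, with made-up loan terms:

```python
# The standard annuity formula behind a fixed-rate mortgage payment:
#   M = P * r * (1 + r)**n / ((1 + r)**n - 1)
# where P is the principal, r the monthly interest rate, and n the number of payments.
# The loan terms below are made up for illustration.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12           # monthly interest rate
    n = years * 12                 # total number of monthly payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

print(round(monthly_payment(300_000, 0.05, 30), 2))  # roughly 1610.46
```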

So in this case (and I believe in others) the question can’t be whether mathematics, but rather which mathematics. It is so sad to me that these two questions get conflated.

O’Neil goes on to describe a case where an algorithm results in a teacher losing her job for not adding enough value to her students one year. An analysis makes a good case that her students’ scores failed to rise because the previous year’s scores had been inflated by teachers cheating the system. This argument was not considered conclusive enough to change the administrative decision.
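To make the mechanism concrete, here is a minimal sketch of a value-added-style calculation. This is my own toy illustration with made-up numbers, not the model actually used in the case O’Neil describes; it only shows how an inflated prior-year baseline makes honest teaching look like failure.

```python
# Toy value-added calculation (illustrative only). A teacher's "value added" is
# taken to be the average difference between students' end-of-year scores and
# scores predicted from the prior year plus an expected growth increment.

def value_added(prior_scores, current_scores, expected_growth=5.0):
    """Average of (actual score - predicted score)."""
    diffs = [curr - (prior + expected_growth)
             for prior, curr in zip(prior_scores, current_scores)]
    return sum(diffs) / len(diffs)

true_prior = [60, 65, 70, 75, 80]               # what the students actually knew
inflated_prior = [s + 10 for s in true_prior]   # prior-year scores inflated by cheating
current = [66, 71, 76, 81, 86]                  # honest scores under the new teacher

print(value_added(true_prior, current))      # +1.0: the teacher looks fine
print(value_added(inflated_prior, current))  # -9.0: the same teaching looks like failure
```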

Do you see the paradox? An algorithm processes a slew of statistics and comes up with a probability that a certain person might be a bad hire, a risky borrower, a terrorist, or a miserable teacher. That probability is distilled into a score, which can turn someone’s life upside down. And yet when the person fights back, “suggestive” countervailing evidence simply won’t cut it. The case must be ironclad. The human victims of WMDs, we’ll see time and again, are held to a far higher standard of evidence than the algorithms themselves.

O’Neil, WMD, p.10

Now this is a fascinating point, and one that I don’t think has been taken up enough in the critical algorithms literature. It resonates with a point that came up earlier, that traditional collective human decision making is often driven by agreement on narratives, whereas automated decisions can be a qualitatively different kind of collective action because they can act on probabilistic judgments.

I have to wonder what O’Neil would argue the solution to this problem is. From her rhetoric, it seems like her recommendation must be to prevent automated decisions from acting on probabilistic judgments. In other words, one could raise the evidentiary standard for algorithms so that it is equal to the standards that people use with each other.

That’s an interesting proposal. I’m not sure what the effects of it would be. I expect that the result would be lower expected values of whatever target was being optimized for, since the system would not be able to “take bets” below a certain level of confidence. One wonders if this would be a more or less arbitrary system.
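To make that intuition concrete, here is a toy sketch, my own illustration with made-up numbers rather than anything from the book, of how raising the confidence threshold at which an automated system is allowed to act lowers the total expected value of whatever it is optimizing:

```python
import random

# Toy illustration with made-up numbers: the system estimates, for each case, the
# probability that acting on it pays off. Acting on a good call earns +1, acting on
# a bad call costs -1, and declining to act earns 0. Raising the confidence
# threshold takes low-confidence "bets" off the table and lowers the total
# expected value the system can collect.

random.seed(0)
cases = [random.random() for _ in range(10_000)]  # each value = estimated P(acting pays off)

def expected_value(cases, threshold):
    # Act only on cases whose estimated probability clears the threshold;
    # the expected payoff of acting on a case with probability p is 2p - 1.
    return sum(2 * p - 1 for p in cases if p >= threshold)

for threshold in (0.5, 0.7, 0.9, 0.99):
    print(f"threshold={threshold:.2f}  total expected value={expected_value(cases, threshold):8.1f}")
```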

Sadly, in order to evaluate this proposal seriously, one would have to employ mathematics. Which is, in O’Neil’s rhetoric, a form of evil magic. So, perhaps it’s best not to try.

O’Neil attributes the problems of WMD’s to the incentives of the data scientists building the systems. Maybe they know that their work affects people, especially the poor, in negative ways. But they don’t care.

But as a rule, the people running the WMD’s don’t dwell on these errors. Their feedback is money, which is also their incentive. Their systems are engineered to gobble up more data and fine-tune their analytics so that more money will pour in. Investors, of course, feast on these returns and shower WMD companies with more money.

O’Neil, WMD, p.13

Calling out greed as the problem is effective and true in a lot of cases. I’ve argued myself that the real root of the technology ethics problem is capitalism: the way investors drive what products get made and deployed. This is a worthwhile point to make and one that doesn’t get made enough.

But the logical implications of this argument are off. Suppose it is true that, “as a rule,” algorithms that do harm are made by people responding to the incentives of private capital. (IF harmful algorithm, THEN private capital created it.) That does not mean that there can’t be good algorithms as well, such as those created in the public sector. In other words, there are algorithms that are not WMDs.

So the insight here has to be that private capital investment corrupts the process of designing algorithms, making them harmful. One could easily make the case that private capital investment corrupts and makes harmful many things that are not algorithmic as well. For example, the historic trans-Atlantic slave trade was a terribly evil manifestation of capitalism. It did not, as far as I know, depend on modern day computer science.

Capitalism here looks to be the root of all evil. The fact that companies are using mathematics is merely incidental. And O’Neil should know that!

Here’s what I find so frustrating about this line of argument. Mathematical literacy is critical for understanding what’s going on with these systems and how to improve society. O’Neil certainly has this literacy. But there are many people who don’t have it. There is a power disparity there which is uncomfortable for everybody. But while O’Neil is admirably raising awareness about how these kinds of technical systems can and do go wrong, the single-minded focus and framing risks giving people the wrong idea that these intellectual tools are always bad or dangerous. That is not a solution to anything, in my view. Ignorance is never more ethical than education. But there is an enormous appetite among ignorant people for being told that it is so.

References

O’Neil, Cathy. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books, 2017.

Sassen, Saskia. “Predatory Formations Dressed in Wall Street Suits and Algorithmic Math.” Science, Technology and Society 22.1 (2017): 6-20.

The politics of AI ethics is a seductive diversion from fixing our broken capitalist system

There is a lot of heat these days in the tech policy and ethics discourse. There is an enormous amount of valuable work being done on all fronts. And yet there is also sometimes bitter disciplinary infighting and political intrigue about who has the moral high ground.

The smartest thing I’ve read on this recently is Irina Raicu’s “False Dilemmas” piece, where she argues:

  • “Tech ethics” research, including research exploring the space of ethics in algorithm design, is really code for industry self-regulation
  • Industry self-regulation and state regulation are complementary
  • Any claims that “the field” is dominated by one perspective or agenda or another are overstated

All this sounds very sane but it doesn’t exactly explain why there’s all this heated discussion in the first place. I think Luke Stark gets it right:

But what does it mean to say “the problem is mostly capitalism”? And why is it impolite to say it?

To say “the problem [with technology ethics and policy] is capitalism” is to note that most if not all of the social problems we associate with today’s technology have been problems with technology ever since the industrial revolution. For example, James Beniger’s The Control Revolution, Horkheimer’s Eclipse of Reason, and so on all speak to the tight link that there has always been between engineering and the capitalist economy as a whole. The link has persisted through the recent iterations of recognizing first data science, then later artificial intelligence, as disruptive triumphs of engineering with a variety of problematic social effects. These are old problems.

It’s impolite to say this because it cuts down on the urgency that might drive political action. More generally, it’s an embarrassment to anybody in the business of talking as if they just discovered something, which is what journalists and many academics do. The buzz of novelty is what gets people’s attention.

It also suggests that the blame for how technology has gone wrong lies with capitalists, meaning, venture capitalists, financiers, and early stage employees with stock options. But also, since it’s the 21st century, pension funds and university endowments are just as much a part of the capitalist investing system as anybody else. In capitalism, if you are saving, you are investing. Lots of people have a diffuse interest in preserving capitalism in some form.

There’s a lot of interesting work to be done on financial regulation, but it has very little to do with, say, science and technology studies and consumer products. So to acknowledge that the problem with technology is capitalism changes the subject to something remote and far more politically awkward than to say the problem is technology or technologists.

As I’ve argued elsewhere, a lot of what’s happening with technology ethics can be thought of as an extension of what Nancy Fraser called progressive neoliberalism: the alliance of neoliberalism with progressive political movements. It is still hegemonic in the smart, critical, academic and advocacy scene. Neoliberalism, or what is today perhaps better characterized as finance capitalism or surveillance capitalism, is what is causing the money to be invested in projects that design and deploy technology in certain ways. It is a system of economic distribution that is still hegemonic.

Because it’s hegemonic, it’s impolite to say so. So instead a lot of the technology criticism gets framed in terms of the next available moral compass, which is progressivism. Progressivism is a system of distribution of recognition. It calls for patterns of recognizing people for their demographic identities and, because the two are sensitively correlated, their professional identities. Nancy Fraser’s insight is that neoliberalism and progressivism have been closely allied for many years. One way that progressivism is allied with neoliberalism is that progressivism serves as a moral smokescreen for problems that are in part caused by neoliberalism, preventing an effective, actionable critique of the root cause of many technology-related problems.

Progressivism encourages political conflict to be articulated as an ‘us vs. them’ problem of populations and their attitudes, rather than as a problem of institutions and their design. This “us versus them” framing is nowhere more baldly stated than in the 2018 AI Now Report:

The AI accountability gap is growing: The technology scandals of 2018 have shown that the gap between those who develop and profit from AI—and those most likely to suffer the consequences of its negative effects—is growing larger, not smaller. There are several reasons for this, including a lack of government regulation, a highly concentrated AI sector, insufficient governance structures within technology companies, power asymmetries between companies and the people they serve, and a stark cultural divide between the engineering cohort responsible for technical research, and the vastly diverse populations where AI systems are deployed. (Emphasis mine)

There are several institutional reforms called for in the report, but because its focus is on a particular sector that it constructs as “the technology industry,” composed of many “AI systems,” it cannot address broader economic issues such as unfair taxation or gerrymandering. Discussion of the overall economy is absent from the report; it is not the cause of anything. Rather, the root cause is a schism between kinds of people. The moral thrust of this claim hinges on the implied progressivism: the AI/tech people, who are developing and profiting, are a culture apart. The victims are “diverse”, and yet paradoxically unified in their culture as not the developers. This framing depends on the appeal of progressivism as a unifying culture whose moral force is due in large part to its diversity. The AI developer culture is a threat in part because it is separate from diverse people–code for its being white and male.

This thread continues throughout the report as various critical perspectives are cited. For example:

A second problem relates to the deeper assumptions and worldviews of the designers of ethical codes in the technology industry. In response to the proliferation of corporate ethics initiatives, Greene et al. undertook a systematic critical review of high-profile “vision statements for ethical AI.” One of their findings was that these statements tend to adopt a technologically deterministic worldview, one where ethical agency and decision making was delegated to experts, “a narrow circle of who can or should adjudicate ethical concerns around AI/ML” on behalf of the rest of us. These statements often assert that AI promises both great benefits and risks to a universal humanity, without acknowledgement of more specific risks to marginalized populations. Rather than asking fundamental ethical and political questions about whether AI systems should be built, these documents implicitly frame technological progress as inevitable, calling for better building.

That a systematic critical review of corporate policies finds them expressing self-serving views that ultimately promote the legitimacy of the corporate efforts is a surprise to no one; it is no more a surprise than the fact that critical research institutes staffed by lawyers and soft social scientists write reports recommending that their expertise is vitally important for society and justice. As has been the case in every major technology and ethical scandal for years, the first thing the commentariat does is publish a lot of pieces justifying their own positions and, if they are brave, arguing that other people are getting too much attention or money. But since everybody in either business depends on capitalist finance in one way or another, the economic system is not subject to critique. In other words, one can’t argue that industrial visions of ‘ethical AI’ are favorable to building new AI products because they are written in service to capitalist investors who profit from the sale of new AI products. Rather, one must argue that they are written in this way because the authors have a weird technocratic worldview that isn’t diverse enough. One can’t argue that the commercial AI products neglect marginal populations because these populations have less purchasing power; one has to argue that the marginal populations are not represented or recognized enough.

And yet, the report paradoxically both repeatedly claims that AI developers are culturally and politically out of touch and lauds the internal protests at companies like Google that have exposed wrongdoing within those corporations. The actions of “technology industry” employees belie the idea that the problem is mainly cultural; there is a managerial profit-making impulse that is, in large, stable companies in particular, distinct from that of the rank-and-file engineer. This can be explained in terms of corporate incentives and so on, and indeed the report does in places call for whistleblower protections and labor organizing. But these calls for change cut against and contradict other politically loaded themes.

There are many different arguments contained in the long report; it is hard to find a reasonable position that has been completely omitted. But as a comprehensive survey of recent work on ethics and regulation in AI, its biases and blind spots are indicative of the larger debate. The report concludes with a call for a change in the intellectual basis for considering AI and its impact:

It is imperative that the balance of power shifts back in the public’s favor. This will require significant structural change that goes well beyond a focus on technical systems, including a willingness to alter the standard operational assumptions that govern the modern AI industry players. The current focus on discrete technical fixes to systems should expand to draw on socially-engaged disciplines, histories, and strategies capable of providing a deeper understanding of the various social contexts that shape the development and use of AI systems.

As more universities turn their focus to the study of AI’s social implications, computer science and engineering can no longer be the unquestioned center, but should collaborate more equally with social and humanistic disciplines, as well as with civil society organizations and affected communities. (Emphasis mine)

The “technology ethics” field is often construed, in this report but also in the broader conversation, as one of tension between computer science on the one hand, and socially engaged and humanistic disciplines on the other. For example, Selbst et al.’s “Fairness and Abstraction in Sociotechnical Systems” presents a thorough account of the pitfalls of computer science’s approach to fairness in machine learning, and proposes a Science and Technology Studies approach instead. The refrain is that by considering more social context, more nuance, and so on, STS and humanistic disciplines avoid the problems that engineers, who try to provide portable, formal solutions, don’t want to address. As the AI Now report frames it, a benefit of the humanistic approach is that it brings the diverse non-AI populations to the table, shifting the balance of power back to the public. STS and related disciplines claim the status of relevant expertise in matters of technology that is somehow not the kind of expertise that is alienating or inaccessible to the public, unlike engineering, which allegedly dominates the higher education system.

I am personally baffled by these arguments; so often they appear to conflate academic disciplines with business practices in ways that most practitioners I engage with would not endorse. (Try asking an engineer how much they learned in school, versus on the job, about what it’s like to work in a corporate setting.) But beyond the strange extrapolation from academic disciplinary disputes (which are so often about the internal bureaucracies of universities that it is, I’d argue after learning the hard way, unwise to take them seriously from either an intellectual or political perspective), there is also a profound absence of some fields from the debate, as framed in these reports.

I’m referring to the quantitative social sciences, such as economics and quantitative sociology, or what might more generally be converging on computational social science. These are the disciplines that one would need to use to understand the large-scale, systemic impact of technology on people, including the ways costs and benefits are distributed. These disciplines deal with social systems and include technology–there is a long tradition within economics studying the relationship between people, goods, and capital that never once requires the term “sociotechnical”–in a systematic way that can be used to predict the impact of policy. They can also connect, through applications of business and finance, the ways that capital flows and investment drive technology design decisions and corporate competition.

But these fields are awkwardly placed in technology ethics and politics. They don’t fit into the engineering vs. humanities dichotomy that entrances so many graduate students in this field. They often invoke mathematics, which makes them another form of suspicious, alien, insufficiently diverse expertise. And yet, it may be that these fields are the only ones that can correctly diagnose the problems caused by technology in society. In a sense, the progressive framing of the problems of technology makes technology’s ills a problem of social context because it is unequipped to address them as a problem of economic context, and it wouldn’t want to know that it is an economic problem anyway, for two somewhat opposed reasons: (a) acknowledging the underlying economic problems is taboo under hegemonic neoliberalism, and (b) it upsets the progressive view that more popularly accessible (and, if you think about it quantitatively, therefore, as a result of how it is generated and constructed, more diverse) humanistic fields need to be recognized as much as fields of narrow expertise. There is no credence given to the idea that narrow and mathematized expertise might actually be especially well-suited to understand what the hell is going on, and that this is precisely why members of these fields are so highly sought after by investors to work at their companies. (Consider, for example, who would be best positioned to analyze the “full stack supply chain” of artificial intelligence systems, as is called for by the AI Now report: sociologists, electrical engineers trained in the power use and design of computer chips, or management science/operations research types whose job is to optimize production given the many inputs and contingencies of chip manufacture?)

At the end of the day, the problem with the “technology ethics” debate is a dialectic cycle whereby (a) basic research is done by engineers, (b) that basic research is developed in a corporate setting as a product funded by capitalists, (c) that product raises political hackles and makes the corporations a lot of money, (d) humanities scholars escalate the political hackles, (e) basic researchers try to invent some new basic research because the politics have created more funding opportunities, (f) corporations do some PR work trying to CYA and engage in self-regulation to avoid litigation, (g) humanities scholars, loath to cede the moral high ground, insist the scientific research is inadequate and that the corporate PR is bull. But this cycle is not necessarily productive. Rather, it sustains itself as part of a larger capitalist system that is bigger than any of these debates, structures its terms, and controls all sides of the dialog. Meanwhile the experts on how that larger system works are silent or ignored.

References

Fraser, Nancy. “Progressive neoliberalism versus reactionary populism: A choice that feminists should refuse.” NORA-Nordic Journal of Feminist and Gender Research 24.4 (2016): 281-284.

Greene, Daniel, Anna Lauren Hoffmann, and Luke Stark. “Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning.” Hawaii International Conference on System Sciences (HICSS), Maui, 2019.

Raicu, Irina. “False Dilemmas”. 2018.

Selbst, Andrew D., et al. “Fairness and Abstraction in Sociotechnical Systems.” ACM Conference on Fairness, Accountability, and Transparency (FAT*). 2018.

Whittaker, Meredith et al. “AI Now Report 2018”. 2018.

Managerialism as political philosophy

Technologically mediated spaces and organizations are frequently described by their proponents as alternatives to the state. From David Clark’s maxim of Internet architecture, “We reject: kings, presidents and voting. We believe in: rough consensus and running code”, to cyberanarchist efforts to bypass the state via blockchain technology, to the claims that Google and Facebook, as they mediate between billions of users, are relevant non-state actors in international affairs, to Lessig’s (1999) ever-prescient claim that “Code is Law”, there is undoubtedly something going on with technology’s relationship to the state which is worth paying attention to.

There is an intellectual temptation (one that I myself am prone to) to take seriously the possibility of a fully autonomous technological alternative to the state. Something like a constitution written in source code has an appeal: it would be clear, precise, and presumably based on something like a consensus of those who participate in its creation. It is also an idea that can be frightening (Give up all control to the machines?) or ridiculous. The example of The DAO, the Ethereum ‘distributed autonomous organization’ that raised millions of dollars only to have them stolen in a technical hack, demonstrates the value of traditional legal institutions which protect the parties that enter contracts with processes that ensure fairness in their interpretation and enforcement.

It is more sociologically accurate, in any case, to consider software, hardware, and data collection not as autonomous actors but as parts of a sociotechnical system that maintains and modifies it. This is obvious to practitioners, who spend their lives negotiating the social systems that create technology. For those for whom it is not obvious, there are reams of literature on the social embeddedness of “algorithms” (Gillespie, 2014; Kitchin, 2017). These themes are recited again in recent critical work on Artificial Intelligence; there are those who wisely point out that a functioning artificially intelligent system depends on a lot of labor (those who created and cleaned data, those who built the systems they are implemented on, those who monitor the system as it operates) (Kelkar, 2017). So rather than discussing the role of particular technologies as alternatives to the state, we should shift our focus to the great variety of sociotechnical organizations.

One thing that is apparent, when taking this view, is that states, as traditionally conceived, are themselves sociotechnical organizations. This is, again, an obvious point well illustrated in economic histories such as (Beniger, 1986). Communications infrastructure is necessary for the control and integration of society, let alone effective military logistics. The relationship between those industrial actors developing this infrastructure–whether it be building roads, running a postal service, laying rail or telegraph wires, telephone wires, satellites, Internet protocols, and now social media–and the state has always been interesting and a story of great fortunes and shifts in power.

What is apparent after a serious look at this history is that political theory, especially liberal political theory as it developed in the 1700s and onward as a theory of the relationship between individuals bound by social contract emerging from nature to develop a just state, leaves out essential scientific facts of the matter of how society has ever been governed. Control of communications and control infrastructure has never been equally dispersed and has always been a source of power. Late modern rearticulations of liberal theory and reactions against it (Rawls and Nozick, both) leave out technical constraints on the possibility of governance and even the constitution of the subject on which a theory of justice would have its ground.

Were political theory to begin from a more realistic foundation, it would need to acknowledge the existence of sociotechnical organizations as a political unit. There is a term for this view, “managerialism”, which, as far as I can tell, is used somewhat pejoratively, like “neoliberalism”. As an “-ism”, it’s implied that managerialism is an ideology. When we talk about ideologies, what we are doing is looking from an external position onto an interdependent set of beliefs in their social context and identifying, through genealogical method or logical analysis, how those beliefs are symptoms of underlying causes that are not precisely as represented within those beliefs themselves. For example, one critiques neoliberal ideology, which purports that markets are the best way to allocate resources and advocates for the expansion of market logic into more domains of social and political life, by pointing out that markets are great for reallocating resources to capitalists, who bankroll neoliberal ideologues, but that many people who are subject to neoliberal policies do not benefit from them. While this is a bit of a parody of both neoliberalism and the critiques of it, you’ll catch my meaning.

We might avoid the pitfalls of an ideological managerialism (I’m not sure what those would be, exactly, having not read the critiques) by taking from it, to begin with, only the urgency of describing social reality in terms of organization and management without assuming any particular normative stake. It will be argued that this is not a neutral stance because to posit that there is organization, and that there is management, is to offend certain kinds of (mainly academic) thinkers. I get the sense that this offendedness is similar to the offense taken by certain critical scholars to the idea that there is such a thing as scientific knowledge, especially social scientific knowledge. Namely, it is an offense taken to the idea that a patently obvious fact entails one’s own ignorance of otherwise very important expertise. This is encouraged by the institutional incentives of social science research. Social scientists are required to maintain an aura of expertise even when their particular sub-discipline excludes from its analysis the very systems of bureaucratic and technical management that its university depends on. University bureaucracies are, strangely, in the business of hiding their managerialist reality from their own faculty, as alternative avenues of research inquiry are of course compelling in their own right. When managerialism cannot be contested on epistemic grounds (because the bluff has been called), it can be rejected on aesthetic grounds: managerialism is not “interesting” to a discipline, perhaps because it does not engage with the personal and political motivations that constitute it.

What sets managerialism aside from other ideologies, however, is that when we examine its roots in social context, we do not discover a contradiction. Managerialism is not, as far as I can tell, successful as a popular ideology. Managerialism is attractive only to that rare segment of the population that works closely with bureaucratic management. It is here that the technical constraints of information flow and its potential uses, the limits of autonomy, especially as it confronts the autonomies of others, the persistence of hierarchy despite the purported flattening of social relations, and so on become unavoidable features of life. And though one discovers in these situations plenty of managerial incompetence, one also comes to terms with why that incompetence is a necessary feature of the organizations that maintain it.

Little of what I am saying here is new, of course. It is only new in relation to more popular or appealing forms of criticism of the relationship between technology, organizations, power, and ethics. So often the political theory implicit in these critiques is a form of naive egalitarianism that sees a differential in power as an ethical red flag. Since technology can give organizations a lot of power, this generates a lot of heat around technology ethics. Starting from the perspective of an ethicist, one sees an uphill battle against an increasingly inscrutable and unaccountable sociotechnical apparatus. What I am proposing is that we look at things a different way. If we start from general principles about technology and its role in organizations–the kinds of principles one would get from an analysis of microeconomic theory, artificial intelligence as a mathematical discipline, and so on–one can try to formulate managerial constraints that truly confront society. These constraints are part of how subjects are constituted and should inform what we see as “ethical”. If we can broker between these hard constraints and the societal values at stake, we might come up with a principle of justice that, if unpopular, may at least be realistic. This would be a contribution, at the end of the day, to political theory, not as an ideology, but as a philosophical advance.

References

Beniger, James R. The Control Revolution: Technological and Economic Origins of the Information Society. Harvard University Press, 1986.

Bird, Sarah, et al. “Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI.” (2016).

Gillespie, Tarleton. “The relevance of algorithms.” Media technologies: Essays on communication, materiality, and society 167 (2014).

Kelkar, Shreeharsh. “How (Not) to Talk about AI.” Platypus, 12 Apr. 2017, blog.castac.org/2017/04/how-not-to-talk-about-ai/.

Kitchin, Rob. “Thinking critically about and researching algorithms.” Information, Communication & Society 20.1 (2017): 14-29.

Lessig, Lawrence. “Code is law.” The Industry Standard 18 (1999).

“Transactions that are too complex…to be allowed to exist.” cf @FrankPasquale

I stand corrected; my interpretation of Pasquale in my last post was too narrow. Having completed Chapter One of The Black Box Society (TBBS), I see that Pasquale does not take the naive view that all organizational secrecy should be abolished, as I might have once. Rather, his is a more nuanced perspective.

First, Pasquale distinguishes between three “critical strategies for keeping black boxes closed”, or opacity, “[Pasquale’s] blanket term for remediable incomprehensibility”:

  • Real secrecy “establishes a barrier between hidden content and unauthorized access to it.”
  • Legal secrecy “obliges those privy to certain information to keep it secret.”
  • Obfuscation “involves deliberate attempts at concealment when secrecy has been compromised.”

Cutting to the chase by looking at the Pasquale and Bracha “Federal Search Commission” (2008) paper that a number of people have recommended to me, it appears (in my limited reading so far) that Pasquale’s position is not that opacity in general is a problem (because there are of course important uses of opacity that serve the public interest, such as confidentiality). Rather, despite these legitimate uses of opacity there is also the need for public oversight, perhaps through federal regulation. The Federal Government serves the public interest better than the imperfect market for search can on its own.

There is perhaps a tension between this 2008 position and what is expressed in Chapter 1 of TBBS in the section “The One-Way Mirror,” which gets I dare say a little conspiratorial about The Powers That Be. “We are increasingly ruled by what former political insider Jeff Connaughton called ‘The Blob,’ a shadowy network of actors who mobilize money and media for private gain, whether acting officially on behalf of business or of government.” Here, Pasquale appears to espouse a strong theory of regulatory capture from which, were we to insist on consistency, a Federal Search Commission would presumably not be exempt. Hence perhaps the role of TBBS in stirring popular sentiment to put political pressure on the elites of The Blob.

Though it is a digression, I will note, since it is a pet peeve of mine, Pasquale’s objection to mathematized governance:

“Technocrats and managers cloak contestable value judgments in the garb of ‘science’: thus the insatiable demand for mathematical models that reframe the subtle and subjective conclusions (such as the worth of a worker, service, article, or product) as the inevitable dictate of salient, measurable data. Big data driven decisions may lead to unprecedented profits. But once we use computation not merely to exercise power over things, but also over people, we need to develop a much more robust ethical framework than ‘the Blob’ is now willing to entertain.”

That this sentiment that scientists should not be making political decisions has been articulated at least as early as Hannah Arendt’s 1958 The Human Condition is an indication that there is nothing particular to Big Data about this anxiety. And indeed, if we think about ‘computation’ as broadly as mathematized, algorithmic thought, then its use for control over people-not-just-things has an even longer history. Lukacs’ 1923 “Reification and the Consciousness of the Proletariat” is a profound critique of Tayloristic scientific factory management that is getting close to being a hundred years old.

Perhaps a robust ethics of quantification has been in the works for some time as well.

Moving past this, by the end of Chapter 1 of TBBS Pasquale gives us the outline of the book and the true crux of his critique, which is the problem of complexity. Whether or not regulators are successful in opening the black boxes of Silicon Valley or Wall Street (or the branches of government that are complicit with Silicon Valley and Wall Street), their efforts will be in vain if what they get back from the organizations they are trying to regulate is too complex for them to understand.

Following the thrust of Pasquale’s argument, we can see that for him, complexity is the result of obfuscation. It is therefore a source of opacity, which as we have noted he has defined as “remediable incomprehensibility”. Pasquale promises to, by the end of the book, give us a game plan for creating, legally, the Intelligible Society. “Transactions that are too complex to explain to outsiders may well be too complex to be allowed to exist.”

This gets us back to the question we started with, which is whether this complexity and incomprehensibility is avoidable. Suppose we were to legislate against institutional complexity: what would that cost us?

Mathematical modeling gives us the tools we need to analyze these kinds of questions. Information theory, the theory of computation, and complexity theory are all foundational to the technology of telecommunications and data science. People with expertise in understanding complexity and the limits of our ability to control it are precisely the people who make the ubiquitous algorithms on which society depends today. But this kind of theory rarely makes it into “critical” literature such as TBBS.
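To give a sense of what I mean, here is a toy example of my own, not anything from TBBS: information theory offers crude but quantifiable proxies for complexity, such as the compressed description length of a transaction’s terms.

```python
import zlib

# Toy illustration: compressed size as a crude proxy for the description length,
# and hence the complexity, of a transaction's terms. The strings are made up.

simple_deal = b"Borrower pays lender $1,000 on the first of each month for 12 months."
layered_deal = (b"Tranche A receives payments senior to Tranche B except when the "
                b"collateral pool's weighted average life exceeds trigger T1, in which "
                b"case waterfall rules W1-W7 apply as amended by side letter SL-3, "
                b"subject to the swap counterparty's rights under ISDA schedule 4(b).")

for name, text in [("simple", simple_deal), ("layered", layered_deal)]:
    compressed = len(zlib.compress(text, 9))  # compression level 9 = best compression
    print(f"{name:8s} raw={len(text):4d} bytes  compressed={compressed:4d} bytes")
```

It is a crude proxy, but it is the kind of measurable handle on “too complex” that a regulator legislating against complexity would eventually need.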

I’m drawn to the example of The Social Media Collective’s Critical Algorithm Studies Reading List, which lists Pasquale’s TBBS among many other works, because it opens with precisely the disciplinary gatekeeping that creates what I fear is the blind spot I’m pointing to:

This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

As a result, our list does not contain much writing by computer scientists, nor does it cover potentially relevant work on topics such as quantification, rationalization, automation, software more generally, or big data, although these interests are well-represented in the reference sections of the essays themselves.

This area is growing in size and popularity so quickly that many contributions are popping up without reference to work from disciplinary neighbors. One goal for this list is to help nascent scholars of algorithms to identify broader conversations across disciplines and to avoid reinventing the wheel or falling into analytic traps that other scholars have already identified.

This reading list is framed as a tool for scholars, which it no doubt is. But if contributors to this field of scholarship aspire, as Pasquale does, for “critical algorithms studies” to have real policy ramifications, then this disciplinary wall must fall (as I’ve argued elsewhere).