Digifesto

Tag: science and technology studies

Why STS is not the solution to “tech ethics”

“Tech ethics” are in (1) (2) (3), and a popular refrain at FAT* this year was that sensitivity to social and political context is the solution to the problems of unethical technology. How do we bring this sensitivity into technical design? By using the techniques of Science and Technology Studies (STS), argue Dobbe and Ames, as well as Selbst et al. (2019). Value Sensitive Design (VSD) (Friedman and Bainbridge, 2004) is one STS technique typically proposed for bringing this political awareness into the design process. In general, there is broad agreement that computer scientists should be working with social scientists when developing socially impactful technologies.

In this blog post, I argue that STS is not the solution to “tech ethics” that it tries to be.

Encouraging computer scientists to collaborate with social science domain experts is a great idea. My paper with Bruce Haynes (1) (2) (3) is an example of this kind of work. In it, we drew from the sociology of race to inform a technical design that addressed the unfairness of racial categories. Significantly, in my view, we did not use STS in our work. Because the social injustices we were addressing were due to broad-reaching social structures and politically constructed categories, we used sociology to elucidate what was at stake and what sorts of interventions would be a good idea.

It is important to recognize that there are many different social sciences dealing with “social and political context”, and that STS, despite its interdisciplinarity, is only one of them. This is easily missed in an interdisciplinary venue in which STS is active, because STS is somewhat activist in asserting its own importance in these venues. In a sense, STS frequently positions itself as a reminder to blindered technologists that there is a social world out there. “Let me tell you about what you’re missing!” That’s its shtick. Because of this positioning, STS scholars frequently get a seat at the table with scientists and technologists. It’s a powerful position, in a sense.

What STS scholars tend to ignore is how and when other kinds of social scientists involve themselves in the process of technical design. For example, at FAT* this year there were two full tracks of Economic Models. Economics is a well-established social scientific discipline that has tools for understanding how a particular mechanism can have unintended effects when put into a social context. In economics, this is called “mechanism design”. It addresses what Selbst et al. might call the “Ripple Effect Trap”: the fact that a system in context may have effects that differ from the intentions of its designers. I’ve argued before that a wiser use of economics is something we need in order to better address technology ethics, especially if we are talking about technology deployed by industry, which is most of it! But despite deep and systematic social scientific analysis of secondary and equilibrium effects at the conference, these peer-reviewed works are not acknowledged by STS interventionists. Why is that?
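To make the point about secondary and equilibrium effects concrete, here is a minimal toy simulation of the kind of strategic response that economic analysis tries to anticipate. The scenario and every number in it are hypothetical illustrations of mine, not drawn from any FAT* paper: a screening rule is calibrated before deployment, the people it scores then adapt to it, and its real-world accuracy drifts away from its design-time estimate.

```python
# Toy illustration (hypothetical numbers) of an equilibrium effect: a screening
# rule is evaluated on pre-deployment behavior, applicants then game the
# observable feature, and the rule's accuracy drifts from its design-time estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ability = rng.normal(0, 1, n)            # unobserved trait the designer cares about
score = ability + rng.normal(0, 0.5, n)  # observable proxy the rule actually uses
threshold = 0.5                          # admit/approve when score >= threshold

# Design-time evaluation: among those admitted, how many have positive ability?
admitted = score >= threshold
print("precision before strategic response:", round((ability[admitted] > 0).mean(), 3))

# After deployment: applicants just below the cutoff inflate the observable proxy
# (test prep, rewording applications) without changing the underlying trait.
gamed_score = np.where((score < threshold) & (score >= threshold - 0.5), threshold, score)
admitted_after = gamed_score >= threshold
print("precision after strategic response: ", round((ability[admitted_after] > 0).mean(), 3))
```

The particular numbers do not matter; the point is that reasoning about this kind of drift is a standard, formal exercise in economics, a “ripple effect” analysis that does not require abandoning mathematization.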

As usual, quantitative social scientists are completely ignored by STS-inspired critiques of technologists and their ethics. That is too bad, because at the scale at which these technologies operate (namely, civic- or web-scale automated decision-making systems that are inherently about large numbers of people), fuzzier debates about “values” and contextualized impact would surely benefit from quantitative operationalization.

The problem is that STS is, at its heart, a humanistic discipline, a subfield of anthropology. If and when STS does not deny the utility, truth, or value of mathematization or quantification entirely, as a field of research it is methodologically skeptical about such things. In the self-conception of STS, this methodological relativism is part of its ethnographic rigor. This ethnographic relativism is more or less entirely incompatible with formal reasoning, which aspires to universal internal validity. At a moralistic level, it is this aspiration to universal internal validity that so bedevils the STS scholar: the mathematics is inherently distinct from an awareness of the social context, because social context can only be understood in its ethnographic particularity.

This is a false dichotomy. There are other social sciences that address social and political context without the restrictive assumptions of STS. Some of these are quantitative, but not all of them are. There are qualitative sociologists and political scientists with great insights into social context who are not disciplinarily allergic to the standard practices of engineering. In many ways, these kinds of social sciences are far more compatible with the process of designing technology than STS! For example, the sociology we draw on in our “Racial categories in machine learning” paper is variously: Gramscian racial hegemony theory, structuralist sociology, Bourdieusian theories of social capital, and so on. Significantly, these theories are not based exclusively on ethnographic method. They come from disciplines that happily mix historical and qualitative scholarship with quantitative research. The object of study is the social world, and part of the purpose of the research is to develop politically useful abstractions from it that generalize and can be measured. This is the form of social science that is compatible with quantitative policy evaluation, the sort of thing you would want to use if, for example, you wanted to understand the impact of an affirmative action policy.

Given the widely acknowledged truism that public sector technology design often encodes and enacts real policy changes (a point made in Deirdre Mulligan’s keynote), it would make sense to understand the effects of these technologies using the methodologies of policy impact evaluation. That would involve enlisting the kinds of social scientific expertise relevant to understanding society at large!
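To give a sense of what that looks like concretely, here is a minimal sketch of one standard policy impact evaluation technique, a difference-in-differences regression, run on synthetic data. The scenario (an automated eligibility system adopted in some jurisdictions and not others) and all of the numbers are hypothetical; it only illustrates the kind of tool the policy evaluation literature supplies.

```python
# Difference-in-differences sketch on synthetic data (hypothetical scenario):
# some jurisdictions adopt an automated decision system ("treated"), others do not,
# and we compare the change in outcomes before vs. after adoption across groups.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 if the jurisdiction adopted the system
    "post": rng.integers(0, 2, n),     # 1 if the observation is after adoption
})
# Simulate an outcome with a true effect of -0.5 for treated units after adoption.
df["outcome"] = (
    1.0
    + 0.3 * df["treated"]
    + 0.2 * df["post"]
    - 0.5 * df["treated"] * df["post"]
    + rng.normal(0, 1, n)
)

# The coefficient on treated:post estimates the policy's effect,
# under the usual parallel-trends assumption.
model = smf.ols("outcome ~ treated * post", data=df).fit(cov_type="HC1")
print(model.summary().tables[1])
```

Whether the parallel-trends assumption is credible in a given deployment is itself a substantive social scientific judgment, which is part of why this kind of evaluation calls for the domain expertise described above.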

But that is absolutely not what STS has to offer. STS is, at best, offering a humanistic evaluation of the social processes of technology design. The ontology of STS is flat, and its epistemology and ethics are immediate: the design decision comes down to a calculus of “values” of different “stakeholders”. Ironically, this is a picture of social context that often seems to neglect the political and economic context of that context. It is not an escape from empty abstraction. Rather, it insists on moving from clear abstractions to more nebulous ones, “values” like “fairness”, maintaining that if the conversation never ends and the design never gets formalized, ethics has been accomplished.

This has proven, again and again, to be a rhetorically effective position for research scholarship. It is quite popular among “ethics” researchers who are backed by corporate technology companies. That is quite possibly because the form of “ethics” that STS offers, for all of its calls for political sensitivity, is devoid of political substance. It offers an apples-to-apples comparison of “values” without considering the social origins of those values, or the way those values are grounded in political interests that are not merely about “what we think is important in life” but about real contests over resource allocation. The observation by Ames et al. (2011) that people’s values with respect to technology vary with socio-economic class is a terribly relevant, Bourdieusian lesson in how the standpoint of “values sensitivity” may, when taken seriously, run up against the hard realities of political agonism. I don’t believe STS researchers are truly naive about these points; however, in their rhetoric of design intervention, conducted in labs but isolated from the real conditions of technology firms, there is an idealism that can only survive under the self-imposed severity of STS’s own methodological restrictions.

Independent scholars can take up this position and publish daring pieces, winning the moral high ground. But that is not a serious position to take in an industrial setting, or when pursuing generalizable knowledge about the downstream impact of a design on a complex social system. Those empirical questions require different tools, albeit far more unwieldy ones. Complex survey instruments, skilled data analysis, and substantive social theory are needed to arrive at solid conclusions about the ethical impact of technology.

References

Ames, M. G., Go, J., Kaye, J. J., & Spasojevic, M. (2011, March). Understanding technology choices and values through social class. In Proceedings of the ACM 2011 conference on Computer supported cooperative work (pp. 55-64). ACM.

Friedman, B., & Bainbridge, W. S. (2004). Value sensitive design.

Selbst, A. D., boyd, d., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*). ACM.


Differing ethnographic accounts of the effectiveness of technology

I’m curious as I compare two recent papers, one by Christin (2017) and one by Levy (2015), both about the role of technology in society and both backed by ethnographic data.

What interests me is that the two papers both examine the use of algorithms in practice, but they differ in their account of the effectiveness of the algorithms used. Christin emphasizes the way web journalists and legal professionals deliberately undermine the impact of algorithms. Levy discusses how electronic monitoring achieves central organizational control over truckers.

I’m interested in the different framings because, as Christin points out, a central point of contention in the critical scholarship around data and algorithms is the effectiveness of the technology, especially “in practice”. Implicitly if not explicitly, if the technology is not as effective as its advocates say it is, then it is overhyped and this debunking is an accomplishment of the critical and often ethnographic field.

On the other hand, if the technology is effective at control, as Levy’s article argues that it is, then it poses a much more real managerialist threat to workers’ autonomy. Identifying that this is occurring is also a serious accomplishment of the ethnographic field.

What must be recognized, however, is that these two positions contradict each other, at least as general perspectives on data-collection and algorithmic decision-making. The use of a particular technology in a particular place cannot be both so ineffective as to be overhyped and so effective as to constitute a managerialist threat. The substance of the two critiques is at odds with each other, and they call for different pragmatic responses. The former suggests a rhetorical strategy of further debunking, the latter demands a material strategy of changing working conditions.

I have seen both strategies used in critical scholarship, sometimes even in the same article, chapter, or book. I have never seen critical scholars attempt to resolve this difference between themselves using their shared assumptions and methods. I’d like to see more resolution in the ethnographic field on this point.

Correction, 8/10/17:

The apparent tension is resolved on a closer reading of Christin (2017). The argument there is that technology (in the managerialist use common to both papers) is ineffective when its intended use is resisted by those being managed by it.

That shifts the ethnographic challenge to technology away from an attack on the technical quality of the work, that is, its ability to accomplish what it is designed to do (which is a non-starter), and toward the uncontroversial proposition that the effectiveness of a technology depends in part on assumptions about how it will be used, and that these assumptions can be violated.

The political question of to what extent these new technologies should be adopted can then be addressed straightforwardly in terms of whether they are fully and properly adopted, or only partially and improperly adopted. Using language like this would be helpful in bridging the technical and ethnographic fields.

References

Christin, A., 2017. “Algorithms in practice: Comparing web journalism and criminal justice.” Big Data & Society, 4(2). (link)

Levy, K. E. C., 2015. “The Contexts of Control: Information, Power, and Truck-Driving Work.” The Information Society, 31(2). (link)

One Magisterium: a review (part 1)

I have come upon a remarkable book, titled One Magisterium: How Nature Knows Through Us, by Seán Ó Nualláin, President, University of Ireland, California. It is dedicated “To all working at the edges of society in an uncompromising search for truth and justice.” Its acknowledgements section opens:

Kenyan middle-distance runners were famous for running like “scared rabbits”: going straight to the head of the field and staying there, come what may. Even more than was the case for my other books, I wrote this like a scared rabbit.

Ó Nualláin is a recognizable face at UC Berkeley, though I think it’s fair to say that most of the faculty and PhD students couldn’t tell you who he is. To a mainstream academic, he is one of the nebulous class of people who show up to events. One glorious loophole of university culture is that the riches of intellectual communion are often made available in open seminars held by people so weary of obscurity that they are happy for any warm body that cares enough to attend. This condition, combined with the city of Berkeley’s accommodating attitude towards quacks and vagrants, adds flavor to the university’s intellectual character.

There is of course no campus for the University of Ireland, California. Ó Nualláin is a truly independent scholar. Unlike many more unfortunate intellectuals, he has made the brilliant decision to not quit his day job, which is as a musician. A Google inquiry into the man indicates he probably got his PhD from Dublin City University and spent a good deal of time around Stanford’s Symbolic Systems department. (EDIT: Sean has corrected me on the details of his accomplished biography in the comments.)

I got on his mailing lists some time ago because of my interest in the Foundations of Mind conference, which he runs in Berkeley. Later, I was impressed by his aggressive volley of questions when Nick Bostrom spoke at Berkeley (I’ve become familiar with Bostrom’s work through MIRI, formerly SingInst). I’ve spoken to him just a couple of times, once at a poster session at the Berkeley Institute of Data Science and once at Katy Huff’s scientific technology practice group, The Hacker Within.

I’m providing these details out of what you might call anthropological interest. At the School of Information I’ve somehow caught the bug of Science and Technology Studies by osmosis. Now I work for Charlotte Cabasse on her ethnographic team, despite believing myself to be a computational social scientist. This qualitative work is a wonderful excuse to write about one’s experiences.

My perceptions of Ó Nualláin are relevant, then, because they situate the author of One Magisterium as an outsider to the academic mainstream at Berkeley. This outsider status comes through quite heavily in the book, starting from the Acknowledgments section (which recognizes all the service staff at the bars and coffee shops where he wrote the book) and running as a regular theme throughout. Discontent with and rejection from academia-as-usual are articulated in sublimated form as harsh critique of the academic institution. Ó Nualláin is engaged in an “uncompromising search for truth and justice,” and the university as it exists today demands too many compromises.

Magisterium is a Catholic term for a teaching authority. One Magisterium refers to the book’s ambition of pointing to a singular teaching authority, a new one heretofore unrecognized by other teaching authorities such as mainstream universities. Hence the book is an attack on other sources of intellectual authority. An example passage:

The devastating news for any reader venturing a toe into the stormy waters of this book is that its writer’s view is that we may never be able to dignify the moral, epistemological and political miasma of the early twenty-first century with terms like “crisis” for which the appropriate solution is of course a “paradigm shift”. It may simply be a set of hideously interconnected messes; epistemological and administrative in the academy, institutional and moral in the greater society. As a consequence, the landscape of possible “solutions” may seem so unconstrained that the wisdom of Joe the barman may be seen to equal that of any series of tomes, no matter how well-researched.

This book is above all an attempt to unify the plurality of discourses — scientific, religious, moral, aesthetic, and so on — that obtain at the start of the third millenium.

An anthropologist of science might observe that this criticality-of-everything, coupled with the claim to have a unifying theory of everything, is a surefire way to get ignored by the academy. The incentive structure of the academy requires specialization and a political balance of ideas. If somebody were to show up with the right idea, it would discredit a lot of otherwise important people and put others out of a job.

The problem, or one of them (there are many mentioned in the first chapter of One Magisterium, titled “The Trouble with Everything”), is that Ó Nualláin is right. At least as far as I can tell at this point. It is not an easy book to read; it is not structured linearly so much as (I imagine, not knowing what I’m talking about) like complex Irish dancing music, with motifs repeated and encircling themselves like a double helix or perhaps some more complex structure. Threaded together are topics from Quantum Mechanics, an analysis of the anthropic principle, a critique of Dawkins’ atheism and a positioning of the relevance of Vedanta theology to understanding physical reality, and an account of the proper role of the arts in society. I suspect that the book is meant to unfold on one’s psychology slowly, resulting in one’s adoption of what Ó Nualláin calls bionoetics, the new unified worldview that is the alleged solution to everything.

A key principle of bionoetics is the recognition of what Ó Nualláin calls the “noetic” level of description, which is distinct from the “cognitive” third-person stance in that it is compressed in a way that makes it relevant to action in any particular domain of inquiry. Most of what he describes as “noetic” I read as “phenomenological”. I wonder if Ó Nualláin has read Merleau-Ponty; he uses the Husserlian critique of “psychologism” extensively.

I think it’s immaterial whether “noetic” is an appropriate neologism for this blending of the first-personal experience into the magisterium. Indeed, there is something comforting to a hard-headed scientist about Ó Nualláin’s views: contrary to the contemporary anthropological view, this first-personal knowledge has no place in academic science; its place is art. Having been in enough seminars at the School of Information where anthropologists lament not being taken seriously as producing knowledge comparable to that of the Scientists, and being one who appreciates the value of Art without needing it to be Science, I find something intuitively appealing about this view. Nevertheless, one wonders whether the epistemic foundation of Ó Nualláin’s critique of the academy is grounded in scientific inquiry or in his own and others’ first-personal noetic experiences coupled with observations of who is “successful” in scientific fields.

Just one chapter into One Magisterium, I have to say I’m impressed with it in a very specific way. Some of us learn about the world with a synthetic mind, searching for the truth with as few constraints on one’s inquiry as possible. Indeed, that’s how I wound up at as nebulous a place as the School of Information at Berkeley. As one conducts the search, one finds oneself increasingly isolated. Some truths may never be spoken, and it’s never appropriate to say all the truths at once. This is especially true in an academic context, where it is paramount for the reputation of the institution that everyone avoid intellectual embarrassment whenever possible. So we make compromises, contenting ourselves with minute and politically palatable expertise.

I am deeply impressed that Ó Nualláin has decided to fuck all and tell it like it is.