Notes about “Data Science and the Decline of Liberal Law and Ethics”
Jake Goldenfein and I have put up on SSRN our paper, “Data Science and the Decline of Liberal Law and Ethics”. I’ve mentioned it on this blog before as something I’m excited about. It’s also been several months since we finalized it, and I wanted to quickly jot down some notes based on the considerations that went into it and my thinking since then.
The paper was the result of a long and engaged collaboration with Jake which started from a somewhat different place. We considered the question, “What is sociopolitical emancipation in the paradigm of control?” That was a mouthful, but it captured what we were going for:
- Like a lot of people today, we are interested in the political project of freedom. Not just freedom in narrow, libertarian senses that have proven to be self-defeating, but in broader senses of removing social barriers and systems of oppression. We were ambivalent about the form that would take, but figured it was a positive project almost anybody would be on board with. We called this project emancipation.
- Unlike a certain prominent brand of critique, we did not begin from an anthropological rejection of the realism of foundational mathematical theory from STEM and its application to human behavior. In this paper, we did not make the common move of suggesting that the source of our ethical problems is one that can be solved by insisting on the terminology or methodological assumptions of some other discipline. Rather, we took advances in, e.g., AI as real scientific accomplishments that are telling us how the world works. We called this scientific view of the world the paradigm of control, due to its roots in cybernetics.
I believe our work makes a significant contribution to the “ethics of data science” debate because it is quite rare to encounter work that is engaged with both projects. It’s common to see STEM work with no serious moral commitments or valence. And it’s common to see the delegation of what we would call emancipatory work to anthropological and humanistic disciplines: the STS folks, the media studies people, even critical X (race, gender, etc.) studies. I’ve discussed the limitations of this approach, however well-intentioned, elsewhere. Often, these disciplines argue that the “unethical” aspect of STEM lies in its methods, discourses, and so on: to analyze things in terms of their technical and economic properties is to lose the essence of ethics, which they align instead with anthropological methods grounded in respectful, phenomenological engagement with their subjects.
This division of labor between STEM and anthropology has, in my view (I won’t speak for Jake), made it impossible to discuss ethical problems that fit uneasily in either field. We tried to get at these. The ethical problem is instrumentality run amok: the runaway economic incentives of private firms combined with their expanded cognitive powers as firms, à la Herbert Simon.
This is not a terribly original point and we hope it is not, ultimately, a fringe political position either. If Martin Wolf can write for the Financial Times that there is something threatening to democracy about “the shift towards the maximisation of shareholder value as the sole goal of companies and the associated tendency to reward management by reference to the price of stocks,” so can we, and without fear that we will be targeted in the next red scare.
So what we are trying to add is this: there is a cognitivist explanation for why firms can become so enormously powerful relative to individual “natural persons”, one that is entirely consistent with the STEM foundations that have become dominant, most notably at UC Berkeley, as “data science”. And, we want to point out, the consequences of that knowledge, which we take to be scientific, run counter to the liberal paradigm of law and ethics. This paradigm, grounded in individual autonomy and privacy, is largely the paradigm animating anthropological ethics! So we are, a bit obliquely, explaining why the data science ethics discourse has gelled in the ways that it has.
We are not satisfied with the current state of ‘data science ethics’ because, to the extent that it clings to liberalism, we fear that it misses and even obscures the point, which can best be understood in a different paradigm.
We left as unfinished the hard work of figuring out what a new, alternative ethical paradigm that took cognitivism, statistics, and so on seriously would look like. There are many reasons beyond the conference publication page limit why we were unable to complete the project. The first of these is that, as I’ve been saying, it’s terribly hard to convince anybody that this is a project worth working on in the first place. Why? My view of this may be too cynical, but my explanations are that either:

- (a) this is an interdisciplinary third rail, because it upsets the balance of power between different academic departments, or
- (b) this is an ideological third rail, because it successfully identifies a contradiction in the current sociotechnical order in a way that no individual is incentivized to recognize, since that order incentivizes individuals to disperse criticism of its core institutional logic of corporate agency, or
- (c) corporate cognition is so hard for any individual to conceive of, because it exceeds the capacity of human understanding, that speaking in this way sounds utterly speculative to a lot of people.

The problem is that it requires attributing cognitive and adaptive powers to social forms, and a successful science of social forms is, at best, in the somewhat gnostic domain of complex systems research.
Complex systems researchers are rarely engaged in technology policy, but I think that work is the frontier.
Benthall, Sebastian and Goldenfein, Jake, Data Science and the Decline of Liberal Law and Ethics (June 22, 2020). Ethics of Data Science Conference – Sydney 2020 (forthcoming). Available at SSRN: https://ssrn.com/abstract=