Digifesto

Tag: liberalism

We need a theory of collective agency to guide data intermediary design

Last week Jake Goldenfein and I presented some work-in-progress to the Centre for Artificial Intelligence and Digital Ethics (CAIDE) at the University of Melbourne. The title of the event was “Data science and the need for collective law and ethics”; perhaps masked by that title is the shift we’re making toward the problem of data intermediaries. I wanted to write a bit about how we’re thinking about these issues.

This work builds on our paper “Data Science and the Decline of Liberal Law and Ethics”, which was accepted by a conference that was then canceled due to COVID-19. In retrospect, it’s perhaps for the best that the conference was canceled. The “decline of liberalism” theme fit the political moment when we wrote the piece, when Trump and Sanders were contenders for the presidency of the U.S., and authoritarian regimes appeared to be providing a new paradigm for governance. Now, Biden is the victor and it doesn’t look like liberalism is going anywhere. We must suppose that our project will take place in a (neo)liberal context.

Our argument in that work was that many of the ideas animating the (especially Anglophone) liberalism of the U.S., U.K., and Australian legal systems have been inadequate to meaningfully regulate artificial intelligence. This is because liberalism imagines a society of rational individuals appropriating private property through exchanges on a public market and acting autonomously. Today, by contrast, we have a wide range of agents with varying levels of bounded rationality, many of them “artificial” in Herbert Simon’s sense of being computer-enabled firms, tied together in networks of control, not least of which are privately owned markets (the platforms). Essentially, loopholes in liberalism have allowed a quite different form of sociotechnical ordering to emerge, because that political theory did not take into account a number of rather recently discovered scientific truths about information, computing, and control. Our project is to tackle this disconnect between theory and actuality, and to try to discover what comes next: a properly cybernetic political theory that advances the goal of human emancipation.

Picking up where our first paper left off, this has gotten us looking at data intermediaries. This is an area where there has been a lot of work! We were particularly inspired by Mozilla’s Data Futures review of different forms of data intermediary institutions, including data coops, data trusts, data marketplaces, and so on. There is a wide range of ongoing experiments with alternative forms of “data stewardship” or “data governance”.

Our approach has been to try to frame and narrow down the options based on normative principles, legal options, and technical expertise. Rather than asking empirically what forms of data governance have been attempted, we are wondering: what ought the goals of a data intermediary be, given the facts about cybernetic agency in the world we live in? How could such an institution accomplish what has been lost through the inadequacies of liberalism?

Our thinking has led us to the position that what has prevented liberalism from regulating the digital economy is its emphasis on individual autonomy. We draw on the new consensus in privacy scholarship that individual “notice and choice” is an ineffective way to guarantee consumer protection in the digital economy. Not only do bounded rationality constraints prevent consumers from understanding what they are agreeing to, but the ability of firms to control consumers’ choice architecture has also dwarfed the meaningfulness of whatever rationality individuals do have. Meanwhile, it is now well understood (perhaps most recently by Pistor (2020)) that personal data is valuable only when it is cleaned and aggregated. This makes the locus of economic agency around personal data necessarily a collective one.

This line of inquiry leads us to a deep question to which we do not yet have a ready answer, which is “What is collective emancipation in the paradigm of control?” Meaning, given what we know about the “sciences of the artificial”, control theory, theory of computation and information, etc., with all of its challenges to the historical idea of the autonomous liberal agent, what does it mean for a collective of individuals to be free and autonomous?

We got a lot of good feedback on our talk, especially from discussant Seth Lazar, who pointed out that there are many communitarian strands of liberalism that we could look to for normative guides. He mentioned, for example, Elizabeth Anderson’s relational egalitarianism. We asked Seth whether he thought that the kind of institution that guaranteed the collective autonomy of its members would have to be a state, and he pointed out that that was a question of whether or not such a system would be entitled to use coercion.

There’s a lot to do on this project. While it is quite heady and philosophical, I do not think that it is necessarily only an abstract or speculative project. In a recent presentation, Vincent Southerland proposed that one solution to the problematic use of algorithms in criminal sentencing would be for “the community” of those advocating for equity in the criminal justice system to operate their own automated decision systems. This raises an important question: how could and should a community govern its own technical systems, in order to support what, in Southerland’s case, is an abolitionist agenda? I see this as a very aligned project.

There is also a technical component to the problem. Because of economies of scale and the legal climate, more and more computation is moving onto proprietary cloud systems. Most software now is provided “as a service”. It’s unclear what this means for organizations that would try to engage in self-governance, even when these organizations are autonomous state entities such as municipalities. In some conversations, we have considered what modifications of the technical ideas of the “user agent”, security firewalls and local networks, and hybrid cloud infrastructure would enable collective self-governance. This is the pragmatic “how?” that follows our normative “what?” and “why?” questions, but it is no less important to implementing a prototype solution.
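To make the “user agent” idea slightly more concrete, here is a minimal, purely hypothetical sketch (in Python) of a “collective user agent”: a gatekeeper that mediates data requests for a whole group according to a policy its members set collectively, rather than relying on each individual’s notice-and-choice dialog. Every name in it (CollectivePolicy, DataRequest, CollectiveUserAgent, the thresholds) is an illustrative invention, not a description of any existing system or of anything we have built.

```python
# Hypothetical sketch only: a "collective user agent" that releases members'
# data only when a group-governed policy permits it. All names are invented
# for illustration; this is not an implementation of any real system.

from dataclasses import dataclass, field


@dataclass
class DataRequest:
    requester: str      # e.g. "ad-network.example"
    purpose: str        # e.g. "behavioral advertising"
    fields: set[str]    # attributes requested, e.g. {"age", "location"}


@dataclass
class CollectivePolicy:
    """Policy set by the members collectively (e.g. by vote), not per user."""
    allowed_purposes: set[str]
    prohibited_fields: set[str] = field(default_factory=set)
    min_aggregation: int = 50  # only release statistics over >= 50 members

    def permits(self, request: DataRequest, cohort_size: int) -> bool:
        return (
            request.purpose in self.allowed_purposes
            and not (request.fields & self.prohibited_fields)
            and cohort_size >= self.min_aggregation
        )


class CollectiveUserAgent:
    """Mediates all outbound data flows for the group, standing in for
    each member's individual notice-and-choice dialog."""

    def __init__(self, policy: CollectivePolicy, members: list[str]):
        self.policy = policy
        self.members = members

    def handle(self, request: DataRequest) -> str:
        if self.policy.permits(request, cohort_size=len(self.members)):
            return f"release aggregate over {len(self.members)} members"
        return "refuse"


if __name__ == "__main__":
    policy = CollectivePolicy(
        allowed_purposes={"public health research"},
        prohibited_fields={"precise location"},
    )
    agent = CollectiveUserAgent(policy, members=[f"member-{i}" for i in range(200)])
    print(agent.handle(DataRequest("university.example", "public health research", {"age"})))
    print(agent.handle(DataRequest("ad-network.example", "behavioral advertising", {"age"})))
```

The only point of the sketch is that the locus of consent is the collectively governed policy object, not the individual member; everything else is placeholder.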

References

Benthall, Sebastian and Goldenfein, Jake, Data Science and the Decline of Liberal Law and Ethics (June 22, 2020). Available at SSRN: https://ssrn.com/abstract=3632577 or http://dx.doi.org/10.2139/ssrn.3632577

Narayanan, A., Toubiana, V., Barocas, S., Nissenbaum, H., & Boneh, D. (2012). A critical look at decentralized personal data architectures. arXiv preprint arXiv:1202.4503.

Pistor, K. (2020). Rule by data: The end of markets? Law & Contemporary Problems, 83, 101.

Recap

Sometimes traffic on this blog draws attention to an old post from years ago. This can be a reminder that I’ve been repeating myself, encountering the same themes over and over again. This is not necessarily a bad thing, because I hope to one day compile the ideas from this blog into a book. It’s nice to see what points keep resurfacing.

One of these points is that liberalism assumes equality, but this is challenged by society’s need for control structures, which create inequality, which then undermines liberalism. This post calls in Charles Taylor (writing about Hegel!) to make the point. This post makes the point more succinctly. I’ve been drawing on Beniger for the ‘society needs control to manage its own integration’ thesis. I’ve pointed to the term managerialism as referring to an alternative to liberalism based on the acknowledgement of this need for control structures. Managerialism looks a lot like liberalism, it turns out, but it justifies things on different grounds and does not get so confused. As an alternative, more Bourdieusian view of the problem, I consider the relationship between capital, democracy, and oligarchy here. There are some useful names for what happens when managerialism goes wrong and people feel disconnected from each other (anomie) or from the control structures (alienation).

A related point I’ve made repeatedly is the tension between procedural legitimacy and getting people the substantive results that they want. That post about Hegel goes into this. But it comes up again in very recent work on antidiscrimination law and machine learning. What this amounts to is that any attempt to come up with a fair, legitimate procedure will divide up the “pie” of resources somehow, or will be perceived to, and some people will be upset about it, however the pie is sliced.

A related theme that comes up frequently is mathematics. My contention is that effective control is a technical accomplishment that is mathematically optimized and constrained. There are mathematical results that reveal necessary trade-offs between values. Data science has been misunderstood as positivism when in fact it is a means of power. Technical knowledge and technology are forms of capital (Bourdieu again). Perhaps precisely because it is a rare form of capital, science is politically distrusted.

To put it succinctly: lack of mathematics education, whether due to lack of opportunity or to mathophobia, leads to alienation and anomie in an economy of control. This is partly reflected in the chaotic disciplinarity of the social sciences, especially as they react to computational social science, which sits at the intersection of social science, statistics, and computer science.

Lest this all seem like an argument for the mathematical certitude of totalitarianism, I have elsewhere considered and rejected this possibility of ‘instrumentality run amok‘. I’ve summarized these arguments here, though this appears to have left a number of people unconvinced. I’ve argued this further, and think there’s more to this story (a formalization of Scott’s arguments from Seeing Like a State, perhaps), but I must admit I don’t have a convincing solution to the “control problem” yet. However, it must be noted that the answer to the control problem is an empirical or scientific prediction, not a political inclination. Whether or not it is the most interesting or important question regarding technological control has been debated to a stalemate, as far as I can tell.

As I don’t believe singleton control is a likely or interesting scenario, I’m more interested in practical ways of offering legitimacy or resistance to control structures. I used to think the “right” political solution was a kind of “hacker class consciousness“; I don’t believe this any more. However, I still think there’s a lot to the idea of recursive publics as actually existing alternative power structures. Platform coops are interesting for the same reason.

All this leads me to admit my interest in the disruptive technology du jour, the blockchain.

Why managerialism: it’s tolerant and meritocratic

In my last post, I argued that we should take managerialism seriously as a political philosophy. A key idea in managerialism (as I’m trying to define it) is that sociotechnical organizations are relevant units of political power; it is concerned with the relationships between these organizations. These organizations can be functionally specific. They can have hierarchical, non-democratic control in limited, not totalitarian, ways. They check and balance each other, probably. Managerialism tends to think that organizations can be managed well, and that good management matters, politically.

This is as opposed to liberalism, which is grounded in rights of the individual, which then becomes a foundation for democracy. It’s also opposed to communitarianism, which holds the political unit of interest to be a family unit or other small community. I’m positioning managerialism as a more cybernetic political idea, as well as one more adapted to present economic conditions.

It may sound odd to hear somebody argue in favor of managerialism. I’ll admit that I am doing so tentatively, to see what works and what doesn’t. Given that a significant percentage of American political thought now is considering such baroque alternatives to liberalism as feudalism and ethnic tribalism, perhaps because liberalism everywhere has been hijacked by plutocracy, it may not be crazy to discuss alternatives.

One reason why somebody might be attracted to managerialism is that it is (I’d argue) essentially tolerant and meritocratic. Sociotechnical organizations that are organized efficiently to perform their main function need not make a lot of demands of their members besides whatever protocols are necessary for the functioning of the whole. In many cases, this should lead to a basic indifference to race, gender, and class background, from the internal perspective of the organization. As there’s good research indicating that diversity leads to greater collective intelligence in organizations, there’s a good case for tolerant policies in managerial institutions. Merit, defined relative to the needs of the particular organization, would be the privileged personal characteristic here.

I’d like to distinguish managerialism from technocracy in the following sense, which may be a matter of my own terminological invention. Technocracy is the belief that experts should run the state. It offers an expansion of centralized power. Managerialism, I want to argue, is not compatible with centralized state control. Rather, it recognizes many different spheres of life that nevertheless need to be organized to be effective. These spheres or sectors will be individually managed, perhaps by competing organizations, and will regulate each other more than they require central regulation.

The way these organizations can regulate each other is Exit, in Hirschman’s sense. While the ideas of Exit, Voice, and Loyalty are most commonly used to discuss how individuals can affect the organizations they are a part of, similar ideas can function at higher scales of analysis, as organizations interact with each other. Think about international trade agreements and sanctions.

The main reason to support managerialism is not that it is particularly just or elegant. It’s that it is more or less the case that the political structures in place now are some assemblage of sociotechnical organizations interacting with each other. Those people who have power are those with power within one or more of these organizations. And to whatever extent there is a shared ideological commitment among people, it is likely because a sociotechnical organization has been turned to the effect of spreading that ideology. This is a somewhat abstract way of saying what lots of people say in a straightforward way all the time: that certain media institutions are used to propagate certain ideologies. This managerialist framing is just intended to abstract away from the particulars in order to develop a political theory.