Open Source Software (OSS) and regulation by algorithms

It has long been argued that technology, especially built infrastructure, has political implications (Winner, 1980). With the rise of the Internet as the dominating form of technological infrastructure, Lessig (1999), among others, argued that software code is a regulating force parallel to the law. By extension of this argument, we would expect open source software to be a regulating force in society.

This is not the case. There is a lot of open source software and much of it is very important. But there’s no evidence to say that open source software, in and of itself, regulates society, except in the narrow sense that the communities that build and maintain it are necessarily constrained by its technical properties.

On the other hand, there are countless platforms and institutions that deploy open source software as part of their activity, which does have a regulating force on society. The Big Tech companies that are so powerful that they seem to rival some states are largely built on an “open core” of software. Likewise for smaller organizations. OSS is simply part of the contemporary software production process, and it is ubiquitous.

Most widely used programming languages are open source. Perhaps a good analogy for OSS is that it is a collection of languages, and literatures in those languages. These languages and much of their literature are effectively in the public domain. We might say the same thing about English or Chinese or German or Hindi.

Law, as we know it in the modern state, is a particular expression of language with purposeful meaning. It represents, at its best, a kind of institutional agreement that constrains behavior based on its repetition and appeals to its internal logic. The Rule of Law, as we know it, depends on the social reproduction of this linguistic community. Law schools are the main means of socializing new lawyers, who are then credentialed to participate in and maintain the system which regulates society. Lawyers are typically good with words, and their practice is in a sense constrained by their language, but only in the widest of Sapir-Whorf senses. Law is constrained more by the question of which language is institutionally recognized; indeed, those words and phrases that have been institutionally ratified are the law.

Let’s consider again the generative question of whether law could be written in software code. I will leave aside for a moment whether or not this would be desirable. I will entertain the idea in part because I believe it is inevitable, given that the algorithm is the form of modern rationality (Totaro and Ninno, 2014) and that rationality has evolutionary power.

A law written in software would need to be written in a programming language and this would all but entail that it is written on an “open core” of software. Concretely: one might write laws in Python.
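To make this concrete, here is a minimal, hypothetical sketch of what a “statute” expressed in Python might look like. The rule, the constants, and the names are all invented for illustration; the point is only that a norm’s conditions and consequences become executable.

```python
# A minimal, hypothetical sketch of a "statute" expressed as executable Python.
# The rule, constants, and names (Observation, assess_fine, SPEED_LIMIT_KPH)
# are illustrative, not drawn from any real legal code.

from dataclasses import dataclass

SPEED_LIMIT_KPH = 50          # the "legislated" constant
BASE_FINE = 100.0             # fine for any violation
PER_KPH_SURCHARGE = 15.0      # additional fine per km/h over the limit

@dataclass
class Observation:
    vehicle_id: str
    measured_speed_kph: float

def assess_fine(obs: Observation) -> float:
    """Apply the 'statute': return the fine owed for a speeding observation."""
    excess = obs.measured_speed_kph - SPEED_LIMIT_KPH
    if excess <= 0:
        return 0.0
    return BASE_FINE + PER_KPH_SURCHARGE * excess

if __name__ == "__main__":
    print(assess_fine(Observation("ABC-123", 63.0)))  # 100 + 15 * 13 = 295.0
```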

The specific code in the software law might or might not be open. There might one day be a secretive authoritarian state with software laws that are not transparent or widely known. Nothing rules that out.

We could imagine a more democratic outcome as well. It would be natural, in a more liberal kind of state, for the software laws to be open on principle. The definitions here become a bit tricky: the designation of “open source software” is one within the schema of copyright and licensing. Could copyright laws and licenses be written in software? In other words, could the ‘openness’ of the software laws be guaranteed by their own form? This is an interesting puzzle for computer scientists and logicians.

For the sake of argument, suppose that something like this is accomplished. Perhaps it is accomplished merely by tradition: the institution that ratifies software laws publishes these on purpose, in order to facilitate healthy democratic debate about the law.

Even with all this in place, we still don’t have regulation. We have now discussed software legislation, but not software execution and enforcement. Software, on its own, is only as powerful as any other expression in a language. A deployed system, running that software, is an active force in the world. Such a system implicates a great many things beyond the software itself. It requires computers and networking infrastructure. It requires databases full of data specific to the applications for which it is run.

The software dictates the internal logic by which a system operates. But that logic is only meaningful when coupled with an external societal situation. The membrane between the technical system and the society in which it participates is of fundamental importance to understanding the possibility of technical regulation, just as the membrane between the Rule of Law and society–which we might say includes elections and the courts in the U.S.–is of fundamental importance to understanding the possibility of linguistic regulation.

References

Lessig, L. (1999). Code is law. The Industry Standard, 18.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Totaro, P., & Ninno, D. (2014). The concept of algorithm as an interpretative key of modern rationality. Theory, Culture & Society, 31(4), 29-49.

Winner, L. (1980). Do artifacts have politics? Daedalus, 121-136.

mathematical discourse vs. exit; blockchain applications

Continuing my effort to tie together the work on this blog into a single theory, I should address the theme of an old post that I’d forgotten about.

The post discusses the discourse theory of law, attributed to the later, matured Habermas. According to this theory, the law serves as a transmission belt between legitimate norms established by civil society and a system of power, money, and technology. When the law is efficacious and legitimate, society prospers. The blog post toys with the idea of normatively aligned algorithmic law established in a similar way: through the norms established by civil society.

I wrote about this in 2014 and I’m surprised to find myself revisiting these themes in my work today on privacy by design.

What this requires, however, is that civil society must be able to engage in mathematical discourse, or mathematized discussion of norms. In other words, there has to be an intersection of civil society and science for this to make sense. I’m reminded of how inspired I’ve felt by Nick Doty’s work on multistakeholderism in Internet standards as a model.

I am more skeptical of this model than I have been before, if only because in the short term I’m unsure if a critical mass of scientific talent can engage with civil society well enough to change the law. This is because scientific talent is a form of capital which has no clear incentive for self-regulation. Relatedly, I’m no longer as confident that civil society carries enough clout to change policy. I must consider other options.

The other option, besides voicing one’s concerns in civil society, is, of course, exit, in Hirschman’s sense. Theoretically an autonomous algorithmic law could be designed such that it encourages exit from other systems into itself. Or, more ecologically, competing autonomous (or decentralized, …) systems can be regulated by an exit mechanism. This is in fact what happens now with blockchain technology and cryptocurrency. Whenever there is a major failure of one of these currencies, there is a fork.

The FTC and pragmatism; Hoofnagle and Holmes

I’ve started working my way through Chris Hoofnagle’s Federal Trade Commission Privacy Law and Policy. Where I’m situated at the I School, there’s a lot of representation and discussion of the FTC in part because of Hoofnagle’s presence there. I find all this tremendously interesting but a bit difficult to get a grip on, as I have only peripheral experiences of actually existing governance. Instead I’m looking at things with a technical background and what can probably be described as overdeveloped political theory baggage.

So a clearly written and knowledgeable account of the history and contemporary practice of the FTC is exactly what I need to read, I figure.

With the poor judgment of commenting on the book having just cracked it open, I can say that the book reads so far as, not surprisingly, a favorable account of the FTC and its role in privacy law. In broad strokes, I’d say Hoofnagle’s narrative is that while the FTC started out as a compromise between politicians with many different positions on trade regulation, and while it has at times had “mediocre” leadership, now the FTC is run by selfless, competent experts with the appropriate balance of economic savvy and empathy for consumers.

I can’t say I have any reason to disagree. I’m not reading for either a critique or an endorsement of the agency. I’m reading with my own idiosyncratic interests in mind: algorithmic law and pragmatist legal theory, and the relationship between intellectual property and antitrust. I’m also learning (through reading) how involved the FTC has been in regulating advertising, which endears me to the agency because I find most advertising annoying.

Missing as I am any substantial knowledge of 20th century legal history, I’m intrigued by resonances between Hoofnagle’s account of the FTC and Oliver Wendell Holmes Jr.’s “The Path of the Law”, which I mentioned earlier. Apparently there’s some tension around the FTC as some critics would like to limit its powers by holding it more narrowly accountable to common law, as opposed to (if I’m getting this right) a more broadly scoped administrative law that, among other things, allows it to employ skilled economists and technologists. As somebody who has been intellectually very informed by American pragmatism, I’m pleased to notice that Holmes himself would have probably approved of the current state of the FTC:

At present, in very many cases, if we want to know why a rule of law has taken its particular shape, and more or less if we want to know why it exists at all, we go to tradition. We follow it into the Year Books, and perhaps beyond them to the customs of the Salian Franks, and somewhere in the past, in the German forests, in the needs of Norman kings, in the assumptions of a dominant class, in the absence of generalized ideas, we find out the practical motive for what now best is justified by the mere fact of its acceptance and that men are accustomed to it. The rational study of law is still to a large extent the study of history. History must be a part of the study, because without it we cannot know the precise scope of rules which it is our business to know. It is a part of the rational study, because it is the first step toward an enlightened scepticism, that is, towards a deliberate reconsideration of the worth of those rules. When you get the dragon out of his cave on to the plain and in the daylight, you can count his teeth and claws, and see just what is his strength. But to get him out is only the first step. The next is either to kill him, or to tame him and make him a useful animal. For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics. It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past. (Holmes, 1897)

These are strong words from a Supreme Court justice about the limitations of common law! It’s also a wholehearted endorsement of quantified science as the basis for legal rules. Perhaps what Holmes would have preferred is a world in which statistics and economics themselves became part of the logic of law. However, he takes pains to point out how often legal judgment itself depends not on logic so much as on the unconscious biases of judges and juries, especially with respect to questions of “social advantage”:

I think that the judges themselves have failed adequately to recognize their duty of weighing considerations of social advantage. The duty is inevitable, and the result of the often proclaimed judicial aversion to deal with such considerations is simply to leave the very ground and foundation of judgments inarticulate, and often unconscious, as I have said. When socialism first began to be talked about, the comfortable classes of the community were a good deal frightened. I suspect that this fear has influenced judicial action both here and in England, yet it is certain that it is not a conscious factor in the decisions to which I refer. I think that something similar has led people who no longer hope to control the legislatures to look to the courts as expounders of the constitutions, and that in some courts new principles have been discovered outside the bodies of those instruments, which may be generalized into acceptance of the economic doctrines which prevailed about fifty years ago, and a wholesale prohibition of what a tribunal of lawyers does not think about right. I cannot but believe that if the training of lawyers led them habitually to consider more definitely and explicitly the social advantage on which the rule they lay down must be justified, they sometimes would hesitate where now they are confident, and see that really they were taking sides upon debatable and often burning questions.

What I find interesting about this essay is that it somehow endorses both the use of economics and statistics in advancing legal thinking and also what has become critical legal theory, with its specific consciousness of the role of social power relations in law. So often in contemporary academic discourse, especially when it comes to discussion of regulating technology businesses, these approaches to law are considered opposed. Perhaps it would be appropriate to call a more politically centered position, if there were one today, a pragmatist position.

Perhaps quixotically, I’m very interested in the limits of these arguments and their foundation in legal scholarship because I’m wondering to what extent computational logic can become a first class legal logic. Holmes’s essay is very concerned with the limitations of legal logic:

The fallacy to which I refer is the notion that the only force at work in the development of the law is logic. In the broadest sense, indeed, that notion would be true. The postulate on which we think about the universe is that there is a fixed quantitative relation between every phenomenon and its antecedents and consequents. If there is such a thing as a phenomenon without these fixed quantitative relations, it is a miracle. It is outside the law of cause and effect, and as such transcends our power of thought, or at least is something to or from which we cannot reason. The condition of our thinking about the universe is that it is capable of being thought about rationally, or, in other words, that every part of it is effect and cause in the same sense in which those parts are with which we are most familiar. So in the broadest sense it is true that the law is a logical development, like everything else. The danger of which I speak is not the admission that the principles governing other phenomena also govern the law, but the notion that a given system, ours, for instance, can be worked out like mathematics from some general axioms of conduct. This is the natural error of the schools, but it is not confined to them. I once heard a very eminent judge say that he never let a decision go until he was absolutely sure that it was right. So judicial dissent often is blamed, as if it meant simply that one side or the other were not doing their sums right, and if they would take more trouble, agreement inevitably would come.

This mode of thinking is entirely natural. The training of lawyers is a training in logic. The processes of analogy, discrimination, and deduction are those in which they are most at home. The language of judicial decision is mainly the language of logic. And the logical method and form flatter that longing for certainty and for repose which is in every human mind. But certainty generally is illusion, and repose is not the destiny of man. Behind the logical form lies a judgment as to the relative worth and importance of competing legislative grounds, often an inarticulate and unconscious judgment, it is true, and yet the very root and nerve of the whole proceeding. You can give any conclusion a logical form. You always can imply a condition in a contract. But why do you imply it? It is because of some belief as to the practice of the community or of a class, or because of some opinion as to policy, or, in short, because of some attitude of yours upon a matter not capable of exact quantitative measurement, and therefore not capable of founding exact logical conclusions. Such matters really are battle grounds where the means do not exist for the determinations that shall be good for all time, and where the decision can do no more than embody the preference of a given body in a given time and place. We do not realize how large a part of our law is open to reconsideration upon a slight change in the habit of the public mind. No concrete proposition is self evident, no matter how ready we may be to accept it, not even Mr. Herbert Spencer’s “Every man has a right to do what he wills, provided he interferes not with a like right on the part of his neighbors.”

For Holmes, nature can be understood through a mathematized physics and is in this sense logical. But the law itself is not logical in the narrow sense of providing certainty about concrete propositions and the legal interpretation of events.

I wonder whether the development of more flexible probabilistic logics, such as those that inform contemporary machine learning techniques, would have for Holmes adequately bridged the gap between the logic of nature and the ambiguity of law. These probabilistic logics are designed to allow for precise quantification of uncertainty and ambiguity.
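To illustrate what I mean by precise quantification of uncertainty, here is a toy Bayesian update in Python. The hypothesis and all the numbers are invented; the point is that this kind of logic yields a graded degree of belief rather than a binary verdict.

```python
# A toy Bayesian update: quantify belief in a hypothesis H (e.g., "this practice
# is discriminatory") given evidence E. All numbers are invented for illustration.

prior_h = 0.2            # P(H): belief before seeing the evidence
p_e_given_h = 0.9        # P(E | H): likelihood of the evidence if H is true
p_e_given_not_h = 0.3    # P(E | not H)

evidence = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)  # P(E)
posterior_h = p_e_given_h * prior_h / evidence                      # P(H | E)

print(f"P(H | E) = {posterior_h:.3f}")   # 0.18 / (0.18 + 0.24) = 0.429
```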

This is not a purely academic question. I’m thinking concretely about applications to regulation. Some of this has already been implemented. I’m thinking about Datta, Tschantz, and Datta’s “Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination” (pdf). I know several other discrimination auditing tools have been developed by computer science researchers. What is the legal status of these tools? Could they or should they be implemented as a scalable or real-time autonomous system?
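To give a flavor of what such an audit involves, here is a simplified sketch in Python. It is not the authors’ actual implementation; it only shows the general pattern of a randomized experiment over profiles differing in a protected attribute, followed by a permutation test on the observed outcomes.

```python
# A simplified sketch of the kind of randomized audit such tools perform (not any
# particular tool's implementation). Profiles differing only in a protected
# attribute are compared, and a permutation test asks whether the difference in
# outcomes between groups could plausibly be chance.

import random

def permutation_test(group_a, group_b, n_perm=10000):
    """Two-sided permutation test on the difference in group means."""
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return count / n_perm

# Invented outcome data: e.g., number of high-paying job ads shown per profile.
male_profiles   = [8, 7, 9, 6, 8, 7, 9, 8]
female_profiles = [3, 4, 2, 5, 3, 4, 3, 2]

p_value = permutation_test(male_profiles, female_profiles)
print(f"p = {p_value:.4f}")   # a small p-value suggests outcomes differ by group
```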

I was talking to an engineer friend the other day and he was telling me that internally to Google, there’s a team responsible for building the automated system that tests all of its other automated systems to make sure that they adhere to its own internal privacy standards. This was a comforting thing to hear and not a surprise, as I get the sense from conversations I’ve had with Googlers that they are in general a very ethically conscientious company. What’s distressing to me is that Google may have more powerful techniques available for self-monitoring than the government has for regulation. This is because (I think…again my knowledge of these matters is actually quite limited) at Google they know when a well-engineered computing system is going to perform better than a team of clerks, and so developing this sort of system is considered worthy of investment. It will be internally trusted as much as any other internal expertise. Whereas in the court system, institutional inertia and dependency on discursive law mean that at best this sort of system can be brought in as an expensive and not entirely trusted external source.
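I can only speculate about what such a system looks like internally, but the general idea, sketched below with invented policy rules and field names, is that machine-readable privacy rules are asserted automatically against the outputs of other systems.

```python
# A purely speculative sketch of automated privacy-compliance checking; it does
# not describe any real internal system. Machine-readable policy rules are
# asserted against data records produced by other automated systems.

PROHIBITED_FIELDS = {"ssn", "precise_location", "health_status"}   # invented policy

def check_export(record: dict) -> list[str]:
    """Return a list of policy violations found in a data record."""
    violations = [f for f in record if f in PROHIBITED_FIELDS]
    if "user_id" in record and not record.get("consent_logged", False):
        violations.append("user_id without logged consent")
    return violations

export = {"user_id": "u42", "precise_location": "37.87,-122.26", "consent_logged": False}
print(check_export(export))   # ['precise_location', 'user_id without logged consent']
```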

What I’d like to figure out is to what extent agency law in particular is flexible enough to be extended to algorithmic law.

algorithmic law and pragmatist legal theory: Oliver Wendell Holmes Jr. “The Path of the Law”

Several months ago I was taken by the idea that in the future (and maybe depending on how you think about it, already in the present) laws should be written as computer algorithms. While the idea that “code is law” and that technology regulates is by no means original, what I thought perhaps provocative is the positive case for the (re-)implementation of the fundamental laws of the city or state in software code.

The argument went roughly like this:

  • Effective law must control a complex society
  • Effective control requires social and political prediction.
  • Unassisted humans are not good at social and political prediction. For this conclusion I drew heavily on Philip Tetlock’s work in Expert Political Judgment.
  • Therefore laws, in order to keep pace with the complexity of society, should be implemented as technical systems capable of bringing data and machine learning to bear on social control.

Science fiction is full of both dystopias and utopias in which society is literally controlled by a giant, intelligent machine. Avoiding either extreme, I just want to make the modest point that there may be scalability problems with law and regulation based on discourse in natural language. To some extent the failure of the state to provide sophisticated, personalized regulation in society has created myriad opportunities for businesses to fill these roles. Now there’s anxiety about the relationship between these businesses and the state as they compete for social regulation. To the extent that businesses are less legitimate rulers of society than the state, it seems a practical, technical necessity that the state adopt the same efficient technologies for regulation that businesses have. To do otherwise is to become obsolete.

There are lots of reasons to object to this position. I’m interested in hearing yours and hope you will comment on this and future blog posts or otherwise contact me with your considered thoughts on the matter. To me the strongest objection is that the whole point of the law is that it is based on precedent, and so any claim about the future trajectory of the law has to be based on past thinking about the law. Since I am not a lawyer and I know precious little about the law, you shouldn’t listen to my argument because I don’t know what I’m talking about. Q.E.D.

My counterargument to this is that there are lots of academics who opine about things they don’t have particular expertise in. One way to get away with this is by deferring to somebody else who has credibility in the field of interest. This is just one of several reasons why I’ve been reading “The Path of the Law”, a classic essay about pragmatist legal theory written by Supreme Court Justice Oliver Wendell Holmes Jr. in 1897.

One of the key points of this essay is that it is a mistake to consider the study of law the study of morality per se. Rather, the study of law is the attempt to predict the decisions that courts will make in the future, based on the decisions courts have made in the past. What courts actually decide is based in part on legal precedent but also on the unconscious inclinations of judges and juries. In ambiguous cases, different legal framings of the same facts will be in competition, and the judgment will give weight to one interpretation or another. Perhaps the judge will attempt to reconcile these differences into a single, logically consistent code.

I’d like to take up the arguments of this essay again in later blog posts, but for now I want to focus on the concept of legal study as prediction. I think this demands focus because while Holmes, like most American pragmatists, had a thorough and nuanced understanding of what prediction is, our mathematical understanding of prediction has come a long way since 1897. Indeed, it is a direct consequence of these formalizations and implementations of predictive systems that we today see so much tacit social regulation performed by algorithms. We know now that effective prediction depends on access to data and the computational power to process it according to well-known algorithms. These algorithms can optimize themselves to such a degree that their specific operations are seemingly beyond the comprehension of the people affected by them. Some lawyers have argued that this complexity should not be allowed to exist.
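As a toy illustration of “legal study as prediction” in this formalized sense, one could fit a statistical model to past case outcomes and ask it about a new case. The features, data, and model choice below are entirely invented; nothing hangs on them except the form of the exercise.

```python
# A toy illustration of "legal study as prediction": fit a simple model on past
# case outcomes and estimate the outcome of a new one. Features and data are
# entirely invented. Requires scikit-learn.

from sklearn.linear_model import LogisticRegression

# Each past case: [precedent_favors_plaintiff (0/1), claim_amount_in_10k, has_written_contract (0/1)]
past_cases = [
    [1, 2.0, 1], [1, 0.5, 1], [0, 3.0, 0], [0, 1.0, 0],
    [1, 4.0, 0], [0, 0.3, 1], [1, 1.5, 1], [0, 2.5, 0],
]
outcomes = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = plaintiff won

model = LogisticRegression().fit(past_cases, outcomes)
new_case = [[1, 2.2, 0]]
print(model.predict_proba(new_case)[0][1])  # estimated probability the plaintiff wins
```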

What I am pointing to is a fundamental tension between the requirement that practitioners of the law be able to predict legal outcomes, and the fact that the logic of the most powerful predictive engines today is written in software code, not words. This is because of physical properties of computation and prediction that are not likely to ever change. And since a powerful predictive engine can just as easily use its power to be strategically unpredictable, this presents an existential challenge to the law. It may simply be impossible for lawyers acting as human lawyers have for hundreds of years to effectively predict and therefore regulate powerful computational systems.

One could argue that this means that such powerful computational systems should simply be outlawed. Indeed this is the thrust of certain lawyers’ arguments. But if we believe that these systems are not going to go away, perhaps because they won’t allow us to regulate them out of existence, then our only viable alternative to suffering under their lawless control is to develop a competing system of computational legalism with the legitimacy of the state.

a constitution written in source code

Suppose we put aside any apocalyptic fears of instrumentality run amok, make peace between The Two Cultures of science and the humanities, and suffer gracefully the provocations of the critical without it getting us down.

We are left with some bare facts:

  • The size and therefore the complexity of society is increasing all the time.
  • Managing that complexity requires information technology, and specifically technology for computation and its various interfaces.
  • The information processing already being performed by computers in the regulation and control of society dwarfs anything any individual can accomplish.
  • While we maintain the myth of human expertise and human leadership, these are competitive only when assisted to a great degree by a thinking machine.
  • Political decisions, in particular, either are already or should be made with the assistance of data processing tools commensurate with the scale of the decisions being made.

This is a description of the present. To extrapolate into the future, there is only a thin consensus of anthropocentrism between us and the conclusion that we do not so much govern machines as they govern us.

This should not shock us. The infrastructure that provides us so much guidance and potential in our daily lives–railroads, electrical wires, wifi hotspots, satellites–is all of human design and made in service of human interests. While these design processes were never entirely democratic, we have made it thus far with whatever injustices have occurred.

We no longer have the pretense that making governing decisions is the special domain of the human mind. Concerns about the possibly discriminatory power of algorithms concede this point. So public concern now scrutinizes the private companies whose software systems make so many decisions for us in ways that are obscure or unpredictable. The profit motive, it is suspected, will not serve customers of these services well in the long run.

So far policy-makers have taken a passive stance towards the problem of algorithmic control by reacting to violations of human dignity with a call for human regulation.

What is needed is a more active stance.

Suppose we were to start again in founding a new city. Or a new nation. Unlike the founders of every city ever founded, we have the option to write its founding Constitution in source code. It would be logically precise and executable without expensive bureaucratic apparatus. It would be scalable in ways that can be mathematically confirmed. It could be forked, experimented with, by diverse societies across the globe. Its procedure for amendment would be written into itself, securing democracy by protocol design.
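As a thought experiment only, here is a minimal sketch of what it might mean for the amendment procedure to be written into the constitution itself. The rules and thresholds are invented; the point is that the procedure for changing the rules is itself one of the rules, and therefore subject to its own process.

```python
# A minimal, purely illustrative sketch of a "constitution" whose amendment
# procedure is part of its own data. All rules and thresholds are invented.

from dataclasses import dataclass, field

@dataclass
class Constitution:
    rules: dict = field(default_factory=lambda: {
        "speech.protected": True,
        "amendment.threshold": 2 / 3,   # supermajority needed to amend any rule
    })

    def amend(self, rule: str, new_value, votes_for: int, votes_total: int) -> bool:
        """Apply an amendment only if it clears the constitution's own threshold."""
        threshold = self.rules["amendment.threshold"]
        if votes_total > 0 and votes_for / votes_total >= threshold:
            self.rules[rule] = new_value
            return True
        return False

c = Constitution()
print(c.amend("amendment.threshold", 3 / 4, votes_for=70, votes_total=100))  # True: 0.70 >= 2/3
print(c.rules["amendment.threshold"])  # the amendment procedure itself has changed
```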