Protected: I study privacy now


moved BigBang core repository to DATACTIVE organization

I made a small change this evening which I feel really, really good about.

I transferred the BigBang project from my personal GitHub account to the datactive organization.

I’m very grateful for DATACTIVE’s interest in BigBang and am excited to turn over the project infrastructure to their stewardship.

becoming a #seriousacademic

I’ve decided to make a small change to my on-line identity.

For some time now, my Twitter account has been listed under a pseudonym, “Gnaeus Rafinesque”, and has had a picture of a cat. Today I’m changing it to my full name (“Sebastian Benthall”) and a picture of my face.

Gnaeus Rafinesque

Serious academic

I chose to use a pseudonym on Twitter for a number of reasons. One reason was that I was interested in participant observation in an Internet subculture, Weird Twitter, that generally didn’t use real names because most of their activity on Twitter was very silly.

But another reason was because I was afraid of being taken seriously myself. As a student, even a graduate student, I felt it was my job to experiment, fail, be silly, and test the limits of the media I was working (and playing) within. I learned a lot from this process.

Because I often would not intend to be taken seriously on Twitter, I was reluctant to have my tweets associated with my real name. I deliberately did not try to sever all ties between my Twitter account and my “real” identity, which is reflected elsewhere on the Internet (LinkedIn, GitHub, etc.) because…well, it would have been a lot of futile work. But I think using a pseudonym and a cat picture succeeded in signalling that I wasn’t putting the full weight of my identity, with the accountability entailed by that, into my tweets.

I’m now entering a different phase of my career. Probably the most significant marker of that phase change is that I am now working as a cybersecurity professional in addition to being a graduate student. I’m back in the working world and so in a sense back to reality.

Another marker is that I’ve realized that I’ve got serious things worth saying and paying attention to, and that projecting an inconsequential, silly attitude on Twitter was undermining my ability to say those things.

It’s a little scary shifting to my real name and face on Twitter. I’m likely to censor myself much more now. Perhaps that’s as it should be.

I wonder what other platforms are out there in which I could be more ridiculous.

The FTC and pragmatism; Hoofnagle and Holmes

I’ve started working my way through Chris Hoofnagle’s Federal Trade Commission Privacy Law and Policy. Where I’m situated at the I School, there’s a lot of representation and discussion of the FTC in part because of Hoofnagle’s presence there. I find all this tremendously interesting but a bit difficult to get a grip on, as I have only peripheral experiences of actually existing governance. Instead I’m looking at things with a technical background and what can probably be described as overdeveloped political theory baggage.

So a clearly written and knowledgeable account of the history and contemporary practice of the FTC is exactly what I need to read, I figure.

With the poor judgment of commenting on a book I have only just cracked open, I can say that so far it reads, not surprisingly, as a favorable account of the FTC and its role in privacy law. In broad strokes, I’d say Hoofnagle’s narrative is that while the FTC started out as a compromise between politicians with many different positions on trade regulation, and while it has at times had “mediocre” leadership, the FTC is now run by selfless, competent experts with the appropriate balance of economic savvy and empathy for consumers.

I can’t say I have any reason to disagree. I’m not reading for either a critique or an endorsement of the agency. I’m reading with my own idiosyncratic interests in mind: algorithmic law and pragmatist legal theory, and the relationship between intellectual property and antitrust. I’m also learning (through reading) how involved the FTC has been in regulating advertising, which endears me to the agency because I find most advertising annoying.

Missing as I am any substantial knowledge of 20th century legal history, I’m intrigued by resonances between Hoofnagle’s account of the FTC and Oliver Wendell Holmes Jr.’s “The Path of the Law”, which I mentioned earlier. Apparently there’s some tension around the FTC, as some critics would like to limit its powers by holding it more narrowly accountable to common law, as opposed to (if I’m getting this right) a more broadly scoped administrative law that, among other things, allows it to employ skilled economists and technologists. As somebody who has been intellectually very informed by American pragmatism, I’m pleased to notice that Holmes himself would probably have approved of the current state of the FTC:

At present, in very many cases, if we want to know why a rule of law has taken its particular shape, and more or less if we want to know why it exists at all, we go to tradition. We follow it into the Year Books, and perhaps beyond them to the customs of the Salian Franks, and somewhere in the past, in the German forests, in the needs of Norman kings, in the assumptions of a dominant class, in the absence of generalized ideas, we find out the practical motive for what now best is justified by the mere fact of its acceptance and that men are accustomed to it. The rational study of law is still to a large extent the study of history. History must be a part of the study, because without it we cannot know the precise scope of rules which it is our business to know. It is a part of the rational study, because it is the first step toward an enlightened scepticism, that is, towards a deliberate reconsideration of the worth of those rules. When you get the dragon out of his cave on to the plain and in the daylight, you can count his teeth and claws, and see just what is his strength. But to get him out is only the first step. The next is either to kill him, or to tame him and make him a useful animal. For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics. It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past. (Holmes, 1897)

These are strong words from a Supreme Court justice about the limitations of common law! It’s also a wholehearted endorsement of quantified science as the basis for legal rules. Perhaps what Holmes would have preferred is a world in which statistics and economics themselves became part of the logic of law. However, he goes to pains to point out how often legal judgment itself does not depend on logic so much as the unconscious biases of judges and juries, especially with respect to questions of “social advantage”:

I think that the judges themselves have failed adequately to recognize their duty of weighing considerations of social advantage. The duty is inevitable, and the result of the often proclaimed judicial aversion to deal with such considerations is simply to leave the very ground and foundation of judgments inarticulate, and often unconscious, as I have said. When socialism first began to be talked about, the comfortable classes of the community were a good deal frightened. I suspect that this fear has influenced judicial action both here and in England, yet it is certain that it is not a conscious factor in the decisions to which I refer. I think that something similar has led people who no longer hope to control the legislatures to look to the courts as expounders of the constitutions, and that in some courts new principles have been discovered outside the bodies of those instruments, which may be generalized into acceptance of the economic doctrines which prevailed about fifty years ago, and a wholesale prohibition of what a tribunal of lawyers does not think about right. I cannot but believe that if the training of lawyers led them habitually to consider more definitely and explicitly the social advantage on which the rule they lay down must be justified, they sometimes would hesitate where now they are confident, and see that really they were taking sides upon debatable and often burning questions.

What I find interesting about this essay is that it somehow endorses both the use of economics and statistics in advancing legal thinking and also what has become critical legal theory, with its specific consciousness of the role of social power relations in law. So often in contemporary academic discourse, especially when it comes to discussion of regulating technology businesses, these approaches to law are considered opposed. Perhaps it’s appropriate to call a more politically centered position, if there were one today, a pragmatist position.

Perhaps quixotically, I’m very interested in the limits of these arguments and their foundation in legal scholarship because I’m wondering to what extent computational logic can become a first class legal logic. Holmes’s essay is very concerned with the limitations of legal logic:

The fallacy to which I refer is the notion that the only force at work in the development of the law is logic. In the broadest sense, indeed, that notion would be true. The postulate on which we think about the universe is that there is a fixed quantitative relation between every phenomenon and its antecedents and consequents. If there is such a thing as a phenomenon without these fixed quantitative relations, it is a miracle. It is outside the law of cause and effect, and as such transcends our power of thought, or at least is something to or from which we cannot reason. The condition of our thinking about the universe is that it is capable of being thought about rationally, or, in other words, that every part of it is effect and cause in the same sense in which those parts are with which we are most familiar. So in the broadest sense it is true that the law is a logical development, like everything else. The danger of which I speak is not the admission that the principles governing other phenomena also govern the law, but the notion that a given system, ours, for instance, can be worked out like mathematics from some general axioms of conduct. This is the natural error of the schools, but it is not confined to them. I once heard a very eminent judge say that he never let a decision go until he was absolutely sure that it was right. So judicial dissent often is blamed, as if it meant simply that one side or the other were not doing their sums right, and if they would take more trouble, agreement inevitably would come.

This mode of thinking is entirely natural. The training of lawyers is a training in logic. The processes of analogy, discrimination, and deduction are those in which they are most at home. The language of judicial decision is mainly the language of logic. And the logical method and form flatter that longing for certainty and for repose which is in every human mind. But certainty generally is illusion, and repose is not the destiny of man. Behind the logical form lies a judgment as to the relative worth and importance of competing legislative grounds, often an inarticulate and unconscious judgment, it is true, and yet the very root and nerve of the whole proceeding. You can give any conclusion a logical form. You always can imply a condition in a contract. But why do you imply it? It is because of some belief as to the practice of the community or of a class, or because of some opinion as to policy, or, in short, because of some attitude of yours upon a matter not capable of exact quantitative measurement, and therefore not capable of founding exact logical conclusions. Such matters really are battle grounds where the means do not exist for the determinations that shall be good for all time, and where the decision can do no more than embody the preference of a given body in a given time and place. We do not realize how large a part of our law is open to reconsideration upon a slight change in the habit of the public mind. No concrete proposition is self evident, no matter how ready we may be to accept it, not even Mr. Herbert Spencer’s “Every man has a right to do what he wills, provided he interferes not with a like right on the part of his neighbors.”

For Holmes, nature can be understood through a mathematized physics and is in this sense logical. But the law itself is not logical in the narrow sense of providing certainty about concrete propositions and the legal interpretation of events.

I wonder whether the development of more flexible probabilistic logics, such as those that inform contemporary machine learning techniques, would have for Holmes adequately bridged the gap between the logic of nature and the ambiguity of law. These probabilistic logics are designed to allow for precise quantification of uncertainty and ambiguity.
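To make concrete what “precise quantification of uncertainty” means here, below is a minimal Bayesian update, the elementary inference step underlying these probabilistic logics. The prior and likelihoods are invented numbers chosen purely for illustration.

```python
# A minimal Bayesian update: how strongly should we believe a hypothesis
# after seeing one piece of evidence? All numbers are invented.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    marginal = numerator + (1 - prior) * likelihood_if_false
    return numerator / marginal

# Start agnostic (prior = 0.5); the evidence is three times more likely
# if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.5, likelihood_if_true=0.9, likelihood_if_false=0.3)
print(round(posterior, 3))  # 0.75
```

The point of the sketch is that ambiguity is not banished but measured: the output is not a verdict but a degree of belief, which is exactly the kind of logic classical legal reasoning lacks.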

This is not a purely academic question. I’m thinking concretely about applications to regulation. Some of this has already been implemented. I’m thinking about Datta, Tschantz, and Datta’s “Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination” (pdf). I know several other discrimination auditing tools have been developed by computer science researchers. What is the legal status of these tools? Could they or should they be implemented as a scalable or real-time autonomous system?
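Setting the legal-status question aside, the mechanical core of such an audit can be very small. Below is a toy sketch of a disparate-impact check in the spirit of these tools. It is emphatically not the Datta, Tschantz, and Datta system; the “four-fifths” threshold (borrowed from employment law) and the data are illustrative assumptions.

```python
# A toy disparate-impact check (an illustration, not any published tool).
# The "four-fifths rule" flags a group whose selection rate falls below
# 80% of the rate enjoyed by the most-favored group.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact(outcomes, threshold=0.8):
    """Return dict mapping group -> True if flagged under the threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Fabricated audit data: which users were shown a high-paying job ad.
shown = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% shown
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% shown
}
print(disparate_impact(shown))  # {'group_a': False, 'group_b': True}
```

The interesting question raised in the text is not whether such a check can be computed — it plainly can — but whether its verdicts could carry legal weight when run continuously and autonomously.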

I was talking to an engineer friend the other day and he was telling me that internally to Google, there’s a team responsible for building the automated system that tests all of its other automated systems to make sure they adhere to its own internal privacy standards. This was a comforting thing to hear and not a surprise, as I get the sense from conversations I’ve had with Googlers that they are in general a very ethically conscientious company. What’s distressing to me is that Google may have more powerful techniques available for self-monitoring than the government has for regulation. This is because (I think…again my knowledge of these matters is actually quite limited) at Google they know when a well-engineered computing system is going to perform better than a team of clerks, and so developing this sort of system is considered worthy of investment. It will be internally trusted as much as any other internal expertise. Whereas in the court system, institutional inertia and dependency on discursive law mean that at best this sort of system can be brought in as an expensive and not entirely trusted external source.

What I’d like to figure out is to what extent agency law in particular is flexible enough to be extended to algorithmic law.

algorithmic law and pragmatist legal theory: Oliver Wendell Holmes Jr. “The Path of the Law”

Several months ago I was taken by the idea that in the future (and maybe depending on how you think about it, already in the present) laws should be written as computer algorithms. While the idea that “code is law” and that technology regulates is by no means original, what I thought perhaps provocative is the positive case for the (re-)implementation of the fundamental laws of the city or state in software code.

The argument went roughly like this:

  • Effective law must control a complex society
  • Effective control requires social and political prediction.
  • Unassisted humans are not good at social and political prediction. For this conclusion I drew heavily on Philip Tetlock’s work in Expert Political Judgment.
  • Therefore laws, in order to keep pace with the complexity of society, should be implemented as technical systems capable of bringing data and machine learning to bear on social control.

Science fiction is full of both dystopias and utopias in which society is literally controlled by a giant, intelligent machine. Avoiding either extreme, I just want to make the modest point that there may be scalability problems with law and regulation based on discourse in natural language. To some extent the failure of the state to provide sophisticated, personalized regulation in society has created myriad opportunities for businesses to fill these roles. Now there’s anxiety about the relationship between these businesses and the state as they compete for social regulation. To the extent that businesses are less legitimate rulers of society than the state, it seems a practical, technical necessity that the state adopt the same efficient technologies for regulation that businesses have. To do otherwise is to become obsolete.

There are lots of reasons to object to this position. I’m interested in hearing yours and hope you will comment on this and future blog posts or otherwise contact me with your considered thoughts on the matter. To me the strongest objection is that the whole point of the law is that it is based on precedent, and so any claim about the future trajectory of the law has to be based on past thinking about the law. Since I am not a lawyer and I know precious little about the law, you shouldn’t listen to my argument because I don’t know what I’m talking about. Q.E.D.

My counterargument to this is that there are lots of academics who opine about things they don’t have particular expertise in. One way to get away with this is by deferring to somebody else who has credibility in the field of interest. This is just one of several reasons why I’ve been reading “The Path of the Law”, a classic essay about pragmatist legal theory written by Supreme Court Justice Oliver Wendell Holmes Jr. in 1897.

One of the key points of this essay is that it is a mistake to consider the study of law the study of morality per se. Rather, the study of law is the attempt to predict the decisions that courts will make in the future, based on the decisions courts have made in the past. What courts actually decide is based in part on legal precedent but also on the unconscious inclinations of judges and juries. In ambiguous cases, different legal framings of the same facts will be in competition, and the judgment will give weight to one interpretation or another. Perhaps the judge will attempt to reconcile these differences into a single, logically consistent code.

I’d like to take up the arguments of this essay again in later blog posts, but for now I want to focus on the concept of legal study as prediction. I think this demands focus because while Holmes, like most American pragmatists, had a thorough and nuanced understanding of what prediction is, our mathematical understanding of prediction has come a long way since 1897. Indeed, it is a direct consequence of these formalizations and implementations of predictive systems that we today see so much tacit social regulation performed by algorithms. We know now that effective prediction depends on access to data and the computational power to process it according to well-known algorithms. These algorithms can optimize themselves to such a degree that their specific operations are seemingly beyond the comprehension of the people affected by them. Some lawyers have argued that this complexity should not be allowed to exist.

What I am pointing to is a fundamental tension between the requirement that practitioners of the law be able to predict legal outcomes, and the fact that the logic of the most powerful predictive engines today is written in software code, not words. This is because of physical properties of computation and prediction that are not likely to ever change. And since a powerful predictive engine can just as easily use its power to be strategically unpredictable, this presents an existential challenge to the law. It may simply be impossible for lawyers, acting as human lawyers have for hundreds of years, to effectively predict and therefore regulate powerful computational systems.

One could argue that this means that such powerful computational systems should simply be outlawed. Indeed this is the thrust of certain lawyers. But if we believe that these systems are not going to go away, perhaps because they won’t allow us to regulate them out of existence, then our only viable alternative to suffering under their lawless control is to develop a competing system of computational legalism with the legitimacy of the state.

second-order cybernetics

The mathematical foundations of modern information technology are:

  • The logic of computation and complexity, developed by Turing, Church, and others. These mathematics specify the nature and limits of the algorithm.
  • The mathematics of probability and, by extension, information theory. These specify the conditions and limitations of inference from evidence, and the conditions and limits of communication.

Since the discovery of these mathematical truths and their myriad application, there have been those that have recognized that these truths apply both to physical objects, such as natural life and artificial technology, and also to lived experience, mental concepts, and social life. Humanity and nature obey the same discoverable, mathematical logic. This allowed for a vision of a unified science of communication and control: cybernetics.
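The second of these foundations — the limits of communication — can be made concrete with Shannon entropy, which gives a lower bound on the average number of bits per symbol for any lossless encoding of a source. A minimal sketch:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: the per-symbol lower bound on the
    average length of any lossless code for this source."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries 1 bit per flip; a heavily biased coin carries less,
# which is what makes its flip sequence compressible.
print(entropy([0.5, 0.5]))  # 1.0
print(round(entropy([0.9, 0.1]), 3))  # 0.469
```

That a single formula bounds what any possible channel or code can do, regardless of physical substrate, is the kind of universality the cybernetic vision rested on.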

There has been much intellectual resistance to these facts. One of the most cogent is Understanding Computers and Cognition, by Terry Winograd and Fernando Flores. Terry Winograd is the AI professor who advised the founders of Google. His credentials are beyond question. And so the fact that he coauthored a critique of “rationalist” artificial intelligence with Fernando Flores, Chilean entrepreneur, politician, and philosophy PhD, is significant. In this book, the two authors base their critique of AI on the work of Humberto Maturana, a second-order cyberneticist who believed that life’s organization and phenomenology could be explained by a resonance between organism and environment: structural coupling. Theories of artificial intelligence are incomplete when not embedded in a more comprehensive theory of the logic of life.

I’ve begun studying this logic, which was laid out by Francisco Varela in 1979. Notably, like the other cybernetic logics, it is an account of both physical and phenomenological aspects of life. Significantly Varela claims that his work is a foundation for an observer-inclusive science, which addresses some of the paradoxes of the physicist’s conception of the universe and humanity’s place in it.

My hunch is that these principles can be applied to social scientific phenomena as well, as organizations are just organisms bigger than us. This is a rather strong claim and difficult to test. However, after years of study it seems to me the necessary conclusion of available theory. It also seems consistent with recent trends in economics towards complexity and institutional economics, and the now rather widespread intuition that the economy functions as a complex ecosystem.

This would be a victory for science if we could only formalize these intuitions well enough to make these theories testable, or to make them so communicable as to be recognized as ‘proved’ by anyone with the wherewithal to study them.

discovering agency in symbolic politics as psychic expression of Blau space

If the Blau space is exogenous to manifest society, then politics is an epiphenomenon. There will be hustlers; there will be the oscillations of who is in control. But there is no agency. Particularities are illusory, much as how in quantum field theory the whole notion of the ‘particle’ is due to our perceptual limitations.

An alternative hypothesis is that the Blau space shifts over time as a result of societal change.

Demographics surely do change over time. But this does not in itself show that Blau space shifts are endogenous to the political system. We could possibly attribute all Blau space shifts to, for example, the apolitical dynamics of population growth and natural resource availability. This is the geographic determinism stance. (I’ve never read Guns, Germs, and Steel… I’ve heard mixed reviews.)

Detecting political agency within a complex system is bound to be difficult because it’s a lot like trying to detect free will, only with a more hierarchical ontology. Social structure may or may not be intelligent. Our individual ability to determine whether it is or not will be very limited. Any individual will have a limited set of cognitive frames with which to understand the world. Most of them will be acquired in childhood. While it’s a controversial theory, the Lakoff thesis that whether one is politically liberal or conservative depends on one’s relationship with one’s parents is certainly very plausible. How does one relate to authority? Parental authority is replaced by state and institutional authority. The rest follows.

None of these projects are scientific. This is why politics is so messed up. Whereas the Blau space is an objective multidimensional space of demographic variability, the political imaginary is the battleground of conscious nightmares in the symbolic sphere. Pathetic humanity, pained by cruel life, fated to be too tall, or too short, born too rich or too poor, disabled, misunderstood, or damned to mediocrity, unfurls its anguish in so many flags in parades, semaphore, and war. But what is it good for?

“Absolutely nothin’!”

I’ve written before about how I think Jung and Bourdieu are an improvement on Freud and Habermas as the basis of a unifying political ideal. Whereas for Freud psychological health is the rational repression of the id so that the moralism of the superego can hold sway over society, Jung sees the spiritual value of the unconscious. All literature and mythology is an expression of emotional data. Awakening to the impersonal nature of one’s emotions–as they are rooted in a collective unconscious constituted by history and culture as well as biology and individual circumstance–is necessary for healthy individuation.

So whereas Habermasian direct democracy, being Freudian through the Frankfurt School tradition, is a matter of rational consensus around norms, presumably coupled with the repression of that which does not accord with those norms, we can wonder what a democracy based on Jungian psychology would look like. It would need to acknowledge social difference within society, as Bourdieu does, and that this social difference puts constraints on democratic participation.

There’s nothing so remarkable about what I’m saying. I’m a little embarrassed to be drawing from European Grand Theorists and psychoanalysts when it would be much more appropriate for me to be looking at, say, the tradition of American political science with its thorough analysis of the role of elites and partisan democracy. But what I’m really looking for is a theory of justice, and the main way injustice seems to manifest itself now is in the resentment of different kinds of people toward each other. Some of this resentment is “populist” resentment, but I suspect that this is not really the source of strife. Rather, it’s the conflict of different kinds of elites, with their bases of power in different kinds of capital (economic, institutional, symbolic, etc.) that has macro-level impact, if politics is real at all. Political forces, which will have leaders (“elites”) simply as a matter of the statistical expression of variable available energy in the society to fill political roles, will recruit members by drawing from the psychic Blau space. As part of recruitment, the political force will activate the habitus shadow of its members, using the dark aspects of the psyche to mobilize action.

It is at this point, when power stokes the shadow through symbols, that injustice becomes psychologically real. Therefore (speaking for now only of symbolic politics, as opposed to justice in material economic actuality, which is something else entirely) a just political system is one that nurtures individuation to such an extent that its population is no longer susceptible to political mobilization.

To make this vision of democracy a bit more concrete, I think where this argument goes is that the public health system should provide art therapy services to every citizen. We won’t have a society that people feel is “fair” unless we address the psychological roots of feelings of disempowerment and injustice. And while there are certainly some causes of these feelings that are real and can be improved through better policy-making, it is the rare policy that actually improves things for everybody rather than just shifting resources around according to a new alignment of political power, thereby creating a new elite and new grudges. Instead I’m proposing that justice will require peace, and that peace is more a matter of the personal victory of the psyche than it is a matter of political victory of ones party.

Protected: on intellectual sincerity


the end of narrative in social science

‘Narrative’ is a term you hear a lot in the humanities, the humanities-oriented social sciences, and in journalism. There’s loads of scholarship dedicated to narrative. There are many academic “disciplines” whose bread and butter is the telling of a good story, backed up by something like a scientific method.

Contrast this with engineering schools and professions, where the narrative is icing on the cake if anything at all. The proof of some knowledge claim is in its formal logic or operational efficacy.

In the interdisciplinary world of research around science, technology, and society, the priority of narrative is one of the major points of contention. This is similar to the tension I encountered in earlier work on data journalism. There are narrative and mechanistic modes of explanation. The mechanists are currently gaining in wealth and power. Narrativists struggle to maintain their social position in such a context.

A struggle I’ve had while working on my dissertation is figuring out how to narrate to narrativists a research process that is fundamentally formal and mechanistic. My work is “computational social science” in that it is computer science applied to the social. But in order to graduate from my department I have to write lots of words about how this ties in to a universe of academic literature written largely by narrativists. I’ve been grounding my work in Pierre Bourdieu because I think he (correctly) identifies mathematics as the logical heart of science. He goes so far as to argue that mathematics should be at the heart of an ideal social science or sociology. My gloss on this, after struggling with the material both theoretically and in practice, is that narratively driven social sciences will always be politically, or at least perspectivally, inflected in ways that threaten the objectivity of their results. Narrativists will try to deny the objectivity of mathematical explanation, but for the most part that’s because they don’t understand the mathematical ambition. Most mathematicians will not go out of their way to correct the narrativists, so this perception of the field persists.

So I was interested to discover in the work of Miller McPherson, the sociologist who I’ve identified as the bridge between traditional sociology and computational sociology (his work gets picked up, for example, in the generative modeling of Kim and Leskovec, which is about as representative of the new industrial social science paradigm as you can get), an admonition about the consequences of his formally modeled social network formation process (the Blau space, which is very interesting). His warning is that the sociology his work encourages loses narrative and with it individual agency.


(McPherson, 2004, “A Blau space primer: prolegomenon to an ecology of affiliation”)

It’s ironic that the whole idea of a Blau space – that the social network of society is sampled from an underlying multidimensional space of demographic dimensions – predicts the quantitative/qualitative divide in academic methods as not just a methodological difference but a difference in social groups. The formation of ‘disciplines’ is endogenous to the greater social process, and there isn’t much individual agency in this choice. This lack of agency is apparent, perhaps, to the mathematicians, and a constant source of bewilderment and annoyance, perhaps, to the narrativists, who will insist on the efficacy of a narratively driven ‘politics’ – however much this may run counter to the brute fact of the industrial machine – because that is the position that rationalizes, and is accessible from, their subject position in Blau space.
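The core of the Blau space idea can be sketched in a few lines of code. In this toy model (my own illustrative construction, not McPherson's actual formalism, and with made-up parameter values), agents occupy positions in a demographic space, and the probability of a tie decays with demographic distance. The homophily that results is exactly what makes group formation, including discipline formation, endogenous to position in the space.

```python
import math
import random

random.seed(0)

def sample_blau_network(n_agents=200, dims=2, scale=0.3):
    """Toy Blau-space model: agents are points in a demographic space;
    tie probability decays exponentially with demographic distance.
    All names and parameters here are illustrative inventions."""
    positions = [[random.random() for _ in range(dims)]
                 for _ in range(n_agents)]
    edges = []
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            d = math.dist(positions[i], positions[j])
            if random.random() < math.exp(-d / scale):
                edges.append((i, j))
    return positions, edges

positions, edges = sample_blau_network()

# Homophily check: realized ties should span shorter demographic
# distances, on average, than randomly chosen pairs of agents.
tie_dist = sum(math.dist(positions[i], positions[j])
               for i, j in edges) / len(edges)
rand_pairs = [(random.randrange(200), random.randrange(200))
              for _ in range(1000)]
rand_dist = sum(math.dist(positions[i], positions[j])
                for i, j in rand_pairs) / len(rand_pairs)
print(tie_dist < rand_dist)
```

Run forward in time, a model like this sorts agents into demographically clustered communities without any individual ever choosing a "side" – which is the point about agency above.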

“Subject position in Blau space” is basically the same idea, in more words, as the Bourdieusian habitus. So, nicely, we have a convergence between French sociological grand theory and American computational social science. As the Bourdieusian theory provides us with a serviceable philosophy of science grounded in the sociological reality of science, we can breathe easily and accept the correctness of technocratic hegemony.

By “we” here I mean…ah, here’s the rub. There’s certainly a class of people who will resist this hegemony. They can be located easily in Blau space. I’ve spent years of my life now trying to engage with them, persuading them of the ideas that rule the world. But this turns out to be largely impossible. It demands that they cross too much distance, removes them from their local bases of institutional support and recognition, and so on. The “disciplines” are what’s left in the receding tide before the next oceanic wave of the unified scientific field. Unified by a shared computational logic, that is.

What is at stake, really, is logic.

a constitution written in source code

Suppose we put aside any apocalyptic fears of instrumentality run amok, make peace between The Two Cultures of science and the humanities, and suffer gracefully the provocations of the critical without it getting us down.

We are left with some bare facts:

  • The size and therefore the complexity of society is increasing all the time.
  • Managing that complexity requires information technology, and specifically technology for computation and its various interfaces.
  • The information processing already being performed by computers in the regulation and control of society dwarfs anything any individual can accomplish.
  • While we maintain the myth of human expertise and human leadership, these are competitive only when assisted to a great degree by a thinking machine.
  • Political decisions, in particular, either are already or should be made with the assistance of data processing tools commensurate with the scale of the decisions being made.

This is a description of the present. To extrapolate into the future, there is only a thin consensus of anthropocentrism between us and the conclusion that we do not so much govern machines as they govern us.

This should not shock us. The infrastructure that provides us with so much guidance and potential in our daily lives – railroads, electrical wires, wifi hotspots, satellites – is all of human design and made in service of human interests. While these design processes were never entirely democratic, we have made it thus far with whatever injustices have occurred.

We no longer have the pretense that making governing decisions is the special domain of the human mind. Concerns about the possibly discriminatory power of algorithms concede this point. So public concern now scrutinizes the private companies whose software systems make so many decisions for us in ways that are obscure or unpredictable. The profit motive, it is suspected, will not serve customers of these services well in the long run.

So far policy-makers have taken a passive stance towards the problem of algorithmic control by reacting to violations of human dignity with a call for human regulation.

What is needed is a more active stance.

Suppose we were to start again in founding a new city. Or a new nation. Unlike the founders of every city ever founded, we have the option to write its founding Constitution in source code. It would be logically precise and executable without expensive bureaucratic apparatus. It would be scalable in ways that can be mathematically confirmed. It could be forked, experimented with, by diverse societies across the globe. Its procedure for amendment would be written into itself, securing democracy by protocol design.
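A minimal sketch can make the self-amendment idea concrete. In this toy (entirely my own illustration – the rule names, thresholds, and vote counts are invented), the constitution is a data structure whose rules include the rule for changing the rules, and an amendment is applied only if it passes under the constitution's *current* amendment threshold. That threshold can itself be amended, which is what "its procedure for amendment would be written into itself" means operationally.

```python
# Toy "constitution in source code": the rules, including the rule
# for amending the rules, live in one data structure. All names and
# numbers below are illustrative inventions.

constitution = {
    "amendment_threshold": 2 / 3,  # supermajority required to amend
    "term_limit_years": 4,
}

def propose_amendment(constitution, key, new_value, votes_for, votes_total):
    """Apply an amendment only if it clears the threshold that the
    constitution itself currently specifies. Returns the (possibly
    unchanged) constitution and whether the amendment passed."""
    passed = (votes_total > 0 and
              votes_for / votes_total >= constitution["amendment_threshold"])
    if passed:
        amended = dict(constitution)  # fork-and-replace, never mutate
        amended[key] = new_value
        return amended, True
    return constitution, False

# An ordinary amendment passing 7-3 under the 2/3 rule:
constitution, ok = propose_amendment(
    constitution, "term_limit_years", 8, votes_for=7, votes_total=10)
print(ok)  # True

# The amendment rule amending itself, raising the bar to 3/4:
constitution, ok = propose_amendment(
    constitution, "amendment_threshold", 3 / 4, votes_for=7, votes_total=10)
print(ok, constitution["amendment_threshold"])
```

Because each amendment returns a new copy rather than mutating in place, the history of constitutions is itself forkable – diverse societies could branch from any prior version, as the paragraph above imagines.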