Category: politics

habitus and citizenship

Just a quick thought… So in Bourdieu’s Science of Science and Reflexivity, he describes the habitus of the scientist. Being a scientist demands a certain adherence to the rules of the scientific game, certain training, etc. He winds up constructing a sociological explanation for the epistemic authority of science. The rules of the game are the conditions for objectivity.

When I was working on a now defunct dissertation, I was comparing this formulation of science with a formulation of democracy and the way it depends on publics. Habermasian publics, Fraserian publics, you get the idea. Within this theory, what was once a robust theory of collective rationality as the basis for democracy has deteriorated under what might be broadly construed as “postmodern” critiques of this rationality. One could argue that pluralistic multiculturalism, not collective reason, became the primary ideology for American democracy in the past eight years.

Pretty sure this backfired with e.g. the Alt-Right.

So what now? I propose that those interested in functioning democracy reconsider the habitus of citizenship and how it can be maintained through the education system and other civic institutions. It’s a bit old-school. But if the Alt-Right wanted a reversion to historical authoritarian forms of Western governance, we may be getting there. Suppose history moves in a spiral. It might be best to try to move forward, not back.

post-election updates

Like a lot of people, I was completely surprised by the results of the 2016 election.

Rationally, one has to take these surprises as an opportunity to update one's point of view. As it's been almost a month, there's been plenty of opportunity to process what's going on.

For my own sake, more than for any reader, I’d like to note my updates here.

The first point has been best articulated by Jon Stewart:

Stewart rejected the idea that better news coverage would have changed the outcome of the election. “The idea that if [the media] had done a better job this country would have made another choice is fake,” he said. He cited Brexit as an example of an unfortunate outcome that occurred despite its lead-up being appropriately covered by outlets like the BBC, which offered a much more balanced view than CNN, for example. “Trump didn’t happen because CNN sucks—CNN just sucks,” he said.

Satire and comedy also couldn’t have stood in the way of Trump winning, Stewart said. If this election has taught us anything, he said, it’s that “controlling the culture does not equate to holding the power.”

I once cared a lot about “money in politics” at the level of campaign donations. After a little critical thinking, this leads naturally to a concern about the role of the media more generally in elections. Centralized media in particular will never put themselves behind a serious bid for campaign finance reform because those media institutions cash out every election. This is what it means for a problem to be “systemic”: it is caused by a tightly reinforcing feedback loop that makes it into a kind of social structural knot.

But with the 2016 presidential election, we’ve learned that, because of the Internet, media are so fragmented that even controlled media are not in control. People will read what they want to read, one way or another. Whatever narrative suits a person best, they will be able to find it on the Internet.

A perhaps unhelpful way to say this is that the Internet has set the Bourdieusian habitus free from media control.

But if the media doesn’t determine habitus, what does?

While there is a lot of consternation about the failure of polling (which is interesting), and while that could have negatively impacted Democratic campaign strategy (didn’t it?), the more insightful-sounding commentary has recognized that the demographic fundamentals were in favor of Trump all along because of what he stood for economically and socially. Michael Moore predicted the election result; logically, because he was right, we should update towards his perspective; he makes essentially this point about Midwestern voters, angry men, depressed progressives, and the appeal of oddball voting all working against Hillary. But none of these conditions have as much to do with media as they do with preexisting population conditions.

There’s a tremendous bias among those who “study the Internet” to assign tremendous political importance to the things we have expertise on: the media, algorithms, etc. My biggest update this election was that I now think that these are eclipsed in political relevance compared to macro-economic issues like globalization. At best changes to, say, the design of social media platforms are going to change things for a few people at the margins. But larger structural forces are both more effective and more consequential in politics. I bet that a prediction of the 2016 election based primarily on the demographic distribution of winners and losers according to each candidate’s energy policy, for example, would have been more valuable than all the rest of the polling and punditry combined. I suppose I was leaning this way throughout 2016, but the election sealed the deal for me.

This is a relief for me because it has revealed to me just how much of my internalization and anxieties about politics have been irrelevant. There is something very freeing in discovering that many things that you once thought were the most important issues in the world really just aren’t. If all those anxieties were proven to just be in my head, then it’s easier to let them go. Now I can start wondering about what really matters.

The FTC and pragmatism; Hoofnagle and Holmes

I’ve started working my way through Chris Hoofnagle’s Federal Trade Commission Privacy Law and Policy. Where I’m situated at the I School, there’s a lot of representation and discussion of the FTC in part because of Hoofnagle’s presence there. I find all this tremendously interesting but a bit difficult to get a grip on, as I have only peripheral experiences of actually existing governance. Instead I’m looking at things with a technical background and what can probably be described as overdeveloped political theory baggage.

So a clearly written and knowledgeable account of the history and contemporary practice of the FTC is exactly what I need to read, I figure.

With the poor judgment that comes from commenting on a book one has only just cracked open, I can say that the book reads so far as, not surprisingly, a favorable account of the FTC and its role in privacy law. In broad strokes, I’d say Hoofnagle’s narrative is that while the FTC started out as a compromise between politicians with many different positions on trade regulation, and while it’s had at times “mediocre” leadership, now the FTC is run by selfless, competent experts with the appropriate balance of economic savvy and empathy for consumers.

I can’t say I have any reason to disagree. I’m not reading for either a critique or an endorsement of the agency. I’m reading with my own idiosyncratic interests in mind: algorithmic law and pragmatist legal theory, and the relationship between intellectual property and antitrust. I’m also learning (through reading) how involved the FTC has been in regulating advertising, which endears the agency to me because I find most advertising annoying.

Missing as I am any substantial knowledge of 20th century legal history, I’m intrigued by resonances between Hoofnagle’s account of the FTC and Oliver Wendell Holmes Jr.’s “The Path of the Law”, which I mentioned earlier. Apparently there’s some tension around the FTC, as some critics would like to limit its powers by holding it more narrowly accountable to common law, as opposed to (if I’m getting this right) a more broadly scoped administrative law that, among other things, allows it to employ skilled economists and technologists. As somebody who has been intellectually very informed by American pragmatism, I’m pleased to notice that Holmes himself would probably have approved of the current state of the FTC:

At present, in very many cases, if we want to know why a rule of law has taken its particular shape, and more or less if we want to know why it exists at all, we go to tradition. We follow it into the Year Books, and perhaps beyond them to the customs of the Salian Franks, and somewhere in the past, in the German forests, in the needs of Norman kings, in the assumptions of a dominant class, in the absence of generalized ideas, we find out the practical motive for what now best is justified by the mere fact of its acceptance and that men are accustomed to it. The rational study of law is still to a large extent the study of history. History must be a part of the study, because without it we cannot know the precise scope of rules which it is our business to know. It is a part of the rational study, because it is the first step toward an enlightened scepticism, that is, towards a deliberate reconsideration of the worth of those rules. When you get the dragon out of his cave on to the plain and in the daylight, you can count his teeth and claws, and see just what is his strength. But to get him out is only the first step. The next is either to kill him, or to tame him and make him a useful animal. For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics. It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past. (Holmes, 1897)

These are strong words from a Supreme Court justice about the limitations of common law! It’s also a wholehearted endorsement of quantified science as the basis for legal rules. Perhaps what Holmes would have preferred is a world in which statistics and economics themselves became part of the logic of law. However, he goes to pains to point out how often legal judgment itself does not depend on logic so much as the unconscious biases of judges and juries, especially with respect to questions of “social advantage”:

I think that the judges themselves have failed adequately to recognize their duty of weighing considerations of social advantage. The duty is inevitable, and the result of the often proclaimed judicial aversion to deal with such considerations is simply to leave the very ground and foundation of judgments inarticulate, and often unconscious, as I have said. When socialism first began to be talked about, the comfortable classes of the community were a good deal frightened. I suspect that this fear has influenced judicial action both here and in England, yet it is certain that it is not a conscious factor in the decisions to which I refer. I think that something similar has led people who no longer hope to control the legislatures to look to the courts as expounders of the constitutions, and that in some courts new principles have been discovered outside the bodies of those instruments, which may be generalized into acceptance of the economic doctrines which prevailed about fifty years ago, and a wholesale prohibition of what a tribunal of lawyers does not think about right. I cannot but believe that if the training of lawyers led them habitually to consider more definitely and explicitly the social advantage on which the rule they lay down must be justified, they sometimes would hesitate where now they are confident, and see that really they were taking sides upon debatable and often burning questions.

What I find interesting about this essay is that it somehow endorses both the use of economics and statistics in advancing legal thinking and also endorses what has become critical legal theory, with its specific consciousness of the role of social power relations in law. So often in contemporary academic discourse, especially when it comes to discussion of the regulation of technology businesses, these approaches to law are considered opposed. Perhaps a more politically centered position, if there were one today, could appropriately be called a pragmatist position.

Perhaps quixotically, I’m very interested in the limits of these arguments and their foundation in legal scholarship because I’m wondering to what extent computational logic can become a first class legal logic. Holmes’s essay is very concerned with the limitations of legal logic:

The fallacy to which I refer is the notion that the only force at work in the development of the law is logic. In the broadest sense, indeed, that notion would be true. The postulate on which we think about the universe is that there is a fixed quantitative relation between every phenomenon and its antecedents and consequents. If there is such a thing as a phenomenon without these fixed quantitative relations, it is a miracle. It is outside the law of cause and effect, and as such transcends our power of thought, or at least is something to or from which we cannot reason. The condition of our thinking about the universe is that it is capable of being thought about rationally, or, in other words, that every part of it is effect and cause in the same sense in which those parts are with which we are most familiar. So in the broadest sense it is true that the law is a logical development, like everything else. The danger of which I speak is not the admission that the principles governing other phenomena also govern the law, but the notion that a given system, ours, for instance, can be worked out like mathematics from some general axioms of conduct. This is the natural error of the schools, but it is not confined to them. I once heard a very eminent judge say that he never let a decision go until he was absolutely sure that it was right. So judicial dissent often is blamed, as if it meant simply that one side or the other were not doing their sums right, and if they would take more trouble, agreement inevitably would come.

This mode of thinking is entirely natural. The training of lawyers is a training in logic. The processes of analogy, discrimination, and deduction are those in which they are most at home. The language of judicial decision is mainly the language of logic. And the logical method and form flatter that longing for certainty and for repose which is in every human mind. But certainty generally is illusion, and repose is not the destiny of man. Behind the logical form lies a judgment as to the relative worth and importance of competing legislative grounds, often an inarticulate and unconscious judgment, it is true, and yet the very root and nerve of the whole proceeding. You can give any conclusion a logical form. You always can imply a condition in a contract. But why do you imply it? It is because of some belief as to the practice of the community or of a class, or because of some opinion as to policy, or, in short, because of some attitude of yours upon a matter not capable of exact quantitative measurement, and therefore not capable of founding exact logical conclusions. Such matters really are battle grounds where the means do not exist for the determinations that shall be good for all time, and where the decision can do no more than embody the preference of a given body in a given time and place. We do not realize how large a part of our law is open to reconsideration upon a slight change in the habit of the public mind. No concrete proposition is self evident, no matter how ready we may be to accept it, not even Mr. Herbert Spencer’s “Every man has a right to do what he wills, provided he interferes not with a like right on the part of his neighbors.”

For Holmes, nature can be understood through a mathematized physics and is in this sense logical. But the law itself is not logical in the narrow sense of providing certainty about concrete propositions and the legal interpretation of events.

I wonder whether the development of more flexible probabilistic logics, such as those that inform contemporary machine learning techniques, would have for Holmes adequately bridged the gap between the logic of nature and the ambiguity of law. These probabilistic logics are designed to allow for precise quantification of uncertainty and ambiguity.
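To make the idea of "precise quantification of uncertainty" concrete, here is a minimal sketch of probabilistic reasoning: Bayesian updating of a belief in some hypothesis H after observing evidence E. All numbers are illustrative, and this is of course a toy next to the probabilistic logics that power contemporary machine learning.

```python
# A minimal sketch of probabilistic reasoning: Bayesian updating of a
# belief in a hypothesis H given evidence E. All numbers are invented
# for illustration.

def bayes_update(prior, likelihood, likelihood_if_false):
    """Return P(H | E) given P(H), P(E | H), and P(E | not H)."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Start uncertain about H, then observe evidence that is twice as
# likely to occur if H is true as if it is false.
belief = 0.5
belief = bayes_update(belief, likelihood=0.8, likelihood_if_false=0.4)
print(round(belief, 3))  # belief rises above 0.5 but stops short of certainty
```

The point, for Holmes's purposes, is that this kind of logic never forces a verdict of certainty; it assigns a graded degree of belief, which is arguably closer to how legal judgment actually behaves.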

This is not a purely academic question. I’m thinking concretely about applications to regulation. Some of this has already been implemented. I’m thinking about Datta, Tschantz, and Datta’s “Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination” (pdf). I know several other discrimination auditing tools have been developed by computer science researchers. What is the legal status of these tools? Could they or should they be implemented as a scalable or real-time autonomous system?
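To gesture at what such auditing tools do, here is a deliberately simplified sketch in the spirit of (but not the actual method of) tools like the one Datta, Tschantz, and Datta describe: compare the rate at which two groups receive some outcome, such as seeing a high-paying job ad, and flag a disparity. The threshold and data are invented; a real audit would use controlled experiments and significance tests.

```python
# A toy disparity check, loosely inspired by discrimination auditing
# tools. Groups A and B are lists of 0/1 outcomes (e.g., whether a
# given user was shown a particular ad). Threshold is illustrative.

def audit_disparity(outcomes_a, outcomes_b, threshold=0.1):
    """Return (rate gap, flagged) where flagged means the outcome rates
    for groups A and B differ by more than `threshold`."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    gap = abs(rate_a - rate_b)
    return gap, gap > threshold

gap, flagged = audit_disparity([1, 1, 1, 0, 1], [0, 1, 0, 0, 0])
print(gap, flagged)  # a 0.6 gap between the groups, so the audit flags it
```

Even at this cartoon level, the legal questions in the text come into focus: who sets the threshold, who collects the observations, and what force a flag carries.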

I was talking to an engineer friend the other day and he was telling me that internally to Google, there’s a team responsible for building the automated system that tests all of its other automated systems to make sure they adhere to its own internal privacy standards. This was a comforting thing to hear and not a surprise, as I get the sense from conversations I’ve had with Googlers that they are in general a very ethically conscientious company. What’s distressing to me is that Google may have more powerful techniques available for self-monitoring than the government has for regulation. This is because (I think…again my knowledge of these matters is actually quite limited) at Google they know when a well-engineered computing system is going to perform better than a team of clerks, and so developing this sort of system is considered worthy of investment. It will be internally trusted as much as any other internal expertise. Whereas in the court system, institutional inertia and dependency on discursive law mean that at best this sort of system can be brought in as an expensive and not entirely trusted external source.

What I’d like to figure out is to what extent agency law in particular is flexible enough to be extended to algorithmic law.

algorithmic law and pragmatist legal theory: Oliver Wendell Holmes Jr. “The Path of the Law”

Several months ago I was taken by the idea that in the future (and maybe depending on how you think about it, already in the present) laws should be written as computer algorithms. While the idea that “code is law” and that technology regulates is by no means original, what I thought perhaps provocative is the positive case for the (re-)implementation of the fundamental laws of the city or state in software code.

The argument went roughly like this:

  • Effective law must control a complex society
  • Effective control requires social and political prediction.
  • Unassisted humans are not good at social and political prediction. For this conclusion I drew heavily on Philip Tetlock’s work in Expert Political Judgment.
  • Therefore laws, in order to keep pace with the complexity of society, should be implemented as technical systems capable of bringing data and machine learning to bear on social control.

Science fiction is full of both dystopias and utopias in which society is literally controlled by a giant, intelligent machine. Avoiding either extreme, I just want to make the modest point that there may be scalability problems with law and regulation based on discourse in natural language. To some extent the failure of the state to provide sophisticated, personalized regulation in society has created myriad opportunities for businesses to fill these roles. Now there’s anxiety about the relationship between these businesses and the state as they compete for social regulation. To the extent that businesses are less legitimate rulers of society than the state, it seems a practical, technical necessity that the state adopt the same efficient technologies for regulation that businesses have. To do otherwise is to become obsolete.

There are lots of reasons to object to this position. I’m interested in hearing yours and hope you will comment on this and future blog posts or otherwise contact me with your considered thoughts on the matter. To me the strongest objection is that the whole point of the law is that it is based on precedent, and so any claim about the future trajectory of the law has to be based on past thinking about the law. Since I am not a lawyer and I know precious little about the law, you shouldn’t listen to my argument because I don’t know what I’m talking about. Q.E.D.

My counterargument to this is that there are plenty of academics who opine about things they don’t have particular expertise in. One way to get away with this is by deferring to somebody else who has credibility in the field of interest. This is just one of several reasons why I’ve been reading “The Path of the Law”, a classic essay about pragmatist legal theory written by Supreme Court Justice Oliver Wendell Holmes Jr. in 1897.

One of the key points of this essay is that it is a mistake to consider the study of law the study of morality per se. Rather, the study of law is the attempt to predict the decisions that courts will make in the future, based on the decisions courts have made in the past. What courts actually decide is based in part on legal precedent but also on the unconscious inclinations of judges and juries. In ambiguous cases, different legal framings of the same facts will be in competition, and the judgment will give weight to one interpretation or another. Perhaps the judge will attempt to reconcile these differences into a single, logically consistent code.

I’d like to take up the arguments of this essay again in later blog posts, but for now I want to focus on the concept of legal study as prediction. I think this demands focus because while Holmes, like most American pragmatists, had a thorough and nuanced understanding of what prediction is, our mathematical understanding of prediction has come a long way since 1897. Indeed, it is a direct consequence of these formalizations and implementations of predictive systems that we today see so much tacit social regulation performed by algorithms. We know now that effective prediction depends on access to data and the computational power to process it according to well-known algorithms. These algorithms can optimize themselves to such a degree that their specific operations are seemingly beyond the comprehension of the people affected by them. Some lawyers have argued that this complexity should not be allowed to exist.
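To fix ideas about "legal study as prediction", here is a deliberately crude sketch: a one-nearest-neighbour classifier that predicts a case's outcome from its most similar precedent. The features and outcomes are entirely invented; real predictive systems use vastly richer data and models, which is exactly the point about their opacity scaled up.

```python
# A toy illustration of prediction-from-precedent: classify a new case
# by its most similar past case. Feature vectors are hypothetical case
# attributes (all data here is made up for illustration).

past_cases = [
    ((1, 0, 3), "plaintiff"),
    ((0, 1, 1), "defendant"),
    ((1, 1, 2), "plaintiff"),
]

def predict(case):
    """Predict the outcome of `case` from the nearest precedent."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, outcome = min(past_cases, key=lambda p: distance(p[0], case))
    return outcome

print(predict((1, 0, 2)))  # the nearest precedent decides: "plaintiff"
```

Notice that even this trivial predictor gives no reasons, only an answer; the opacity that worries the lawyers is already visible here, before any optimization makes it worse.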

What I am pointing to is a fundamental tension between the requirement that practitioners of the law be able to predict legal outcomes, and the fact that the logic of the most powerful predictive engines today is written in software code, not words. This is because of physical properties of computation and prediction that are not likely to ever change. And since a powerful predictive engine can just as easily use its power to be strategically unpredictable, this presents an existential challenge to the law. It may simply be impossible for lawyers acting as human lawyers have for hundreds of years to effectively predict and therefore regulate powerful computational systems.

One could argue that this means that such powerful computational systems should simply be outlawed. Indeed this is the thrust of certain lawyers. But if we believe that these systems are not going to go away, perhaps because they won’t allow us to regulate them out of existence, then our only viable alternative to suffering under their lawless control is to develop a competing system of computational legalism with the legitimacy of the state.

discovering agency in symbolic politics as psychic expression of Blau space

If the Blau space is exogenous to manifest society, then politics is an epiphenomenon. There will be hustlers; there will be the oscillations of who is in control. But there is no agency. Particularities are illusory, much as how in quantum field theory the whole notion of the ‘particle’ is due to our perceptual limitations.

An alternative hypothesis is that the Blau space shifts over time as a result of societal change.

Demographics surely do change over time. But this does not in itself show that Blau space shifts are endogenous to the political system. We could possibly attribute all Blau space shifts to, for example, apolitical terms of population growth and natural resource availability. This is the geographic determinism stance. (I’ve never read Guns, Germs, and Steel… I’ve heard mixed reviews.)

Detecting political agency within a complex system is bound to be difficult because it’s a lot like trying to detect free will, only with a more hierarchical ontology. Social structure may or may not be intelligent. Our individual ability to determine whether it is or not will be very limited. Any individual will have a limited set of cognitive frames with which to understand the world. Most of them will be acquired in childhood. While it’s a controversial theory, the Lakoff thesis that whether one is politically liberal or conservative depends on one’s relationship with one’s parents is certainly very plausible. How does one relate to authority? Parental authority is replaced by state and institutional authority. The rest follows.

None of these projects are scientific. This is why politics is so messed up. Whereas the Blau space is an objective multidimensional space of demographic variability, the political imaginary is the battleground of conscious nightmares in the symbolic sphere. Pathetic humanity, pained by cruel life, fated to be too tall, or too short, born too rich or too poor, disabled, misunderstood, or damned to mediocrity, unfurls its anguish in so many flags in parades, semaphore, and war. But what is it good for?

“Absolutely nothin’!”

I’ve written before about how I think Jung and Bourdieu are an improvement on Freud and Habermas as the basis of a unifying political ideal. Whereas for Freud psychological health is the rational repression of the id so that the moralism of the superego can hold sway over society, Jung sees the spiritual value of the unconscious. All literature and mythology is an expression of emotional data. Awakening to the impersonal nature of one’s emotions–as they are rooted in a collective unconscious constituted by history and culture as well as biology and individual circumstance–is necessary for healthy individuation.

So whereas Habermasian direct democracy, being Freudian through the Frankfurt School tradition, is a matter of rational consensus around norms, presumably coupled with the repression of that which does not accord with those norms, we can wonder what a democracy based on Jungian psychology would look like. It would need to acknowledge social difference within society, as Bourdieu does, and that this social difference puts constraints on democratic participation.

There’s nothing so remarkable about what I’m saying. I’m a little embarrassed to be drawing from European Grand Theorists and psychoanalysts when it would be much more appropriate for me to be looking at, say, the tradition of American political science with its thorough analysis of the role of elites and partisan democracy. But what I’m really looking for is a theory of justice, and the main way injustice seems to manifest itself now is in the resentment of different kinds of people toward each other. Some of this resentment is “populist” resentment, but I suspect that this is not really the source of strife. Rather, it’s the conflict of different kinds of elites, with their bases of power in different kinds of capital (economic, institutional, symbolic, etc.) that has macro-level impact, if politics is real at all. Political forces, which will have leaders (“elites”) simply as a matter of the statistical expression of variable available energy in the society to fill political roles, will recruit members by drawing from the psychic Blau space. As part of recruitment, the political force will activate the habitus shadow of its members, using the dark aspects of the psyche to mobilize action.

It is at this point, when power stokes the shadow through symbols, that injustice becomes psychologically real. Therefore (speaking for now only of symbolic politics, as opposed to justice in material economic actuality, which is something else entirely) a just political system is one that nurtures individuation to such an extent that its population is no longer susceptible to political mobilization.

To make this vision of democracy a bit more concrete, I think where this argument goes is that the public health system should provide art therapy services to every citizen. We won’t have a society that people feel is “fair” unless we address the psychological roots of feelings of disempowerment and injustice. And while there are certainly some causes of these feelings that are real and can be improved through better policy-making, it is the rare policy that actually improves things for everybody rather than just shifting resources around according to a new alignment of political power, thereby creating a new elite and new grudges. Instead I’m proposing that justice will require peace, and that peace is more a matter of the personal victory of the psyche than it is a matter of political victory of one’s party.

a constitution written in source code

Suppose we put aside any apocalyptic fears of instrumentality run amok, make peace between The Two Cultures of science and the humanities, and suffer gracefully the provocations of the critical without it getting us down.

We are left with some bare facts:

  • The size and therefore the complexity of society is increasing all the time.
  • Managing that complexity requires information technology, and specifically technology for computation and its various interfaces.
  • The information processing already being performed by computers in the regulation and control of society dwarfs anything any individual can accomplish.
  • While we maintain the myth of human expertise and human leadership, these are competitive only when assisted to a great degree by a thinking machine.
  • Political decisions, in particular, either are already or should be made with the assistance of data processing tools commensurate with the scale of the decisions being made.

This is a description of the present. To extrapolate into the future, there is only a thin consensus of anthropocentrism between us and the conclusion that we do not so much govern machines as they govern us.

This should not shock us. The infrastructure that provides us so much guidance and potential in our daily lives–railroads, electrical wires, wifi hotspots, satellites–is all of human design and made in service of human interests. While these design processes were never entirely democratic, we have made it thus far with whatever injustices have occurred.

We no longer have the pretense that making governing decisions is the special domain of the human mind. Concerns about the possibly discriminatory power of algorithms concede this point. So public concern now scrutinizes the private companies whose software systems make so many decisions for us in ways that are obscure or unpredictable. The profit motive, it is suspected, will not serve customers of these services well in the long run.

So far policy-makers have taken a passive stance towards the problem of algorithmic control by reacting to violations of human dignity with a call for human regulation.

What is needed is a more active stance.

Suppose we were to start again in founding a new city. Or a new nation. Unlike the founders of every city ever founded, we have the option to write its founding Constitution in source code. It would be logically precise and executable without expensive bureaucratic apparatus. It would be scalable in ways that can be mathematically confirmed. It could be forked, experimented with, by diverse societies across the globe. Its procedure for amendment would be written into itself, securing democracy by protocol design.
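As a whimsical sketch of what "a procedure for amendment written into itself" might look like, consider a constitution whose rules are data and whose amendment quorum is itself one of its parameters. Everything here, names and numbers alike, is invented for illustration; a real design would need cryptographic protocols, not a toy class.

```python
# A whimsical sketch of a constitution in source code: rules are data,
# and the amendment procedure (a quorum) is written into the object
# itself. All names and thresholds are hypothetical.

class Constitution:
    def __init__(self, rules, quorum=2/3):
        self.rules = dict(rules)
        self.quorum = quorum  # the amendment procedure, encoded in itself

    def amend(self, name, text, votes_for, votes_total):
        """Adopt a rule only if the constitution's own quorum is met."""
        if votes_for / votes_total >= self.quorum:
            self.rules[name] = text
            return True
        return False

city = Constitution({"art1": "All procedures are public."})
print(city.amend("art2", "Amendments require a 2/3 vote.", 70, 100))  # True
print(city.amend("art3", "Lower the quorum.", 60, 100))               # False
```

The forkability claimed above falls out for free: this object can be copied, parameterized differently, and experimented with by any community that cares to run it.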

Bourdieu and Horkheimer; towards an economy of control

It occurred to me as I looked over my earliest notes on Horkheimer (almost a year ago!) that Bourdieu’s concept of science as being a social field that formalizes and automates knowledge is Horkheimer’s idea of hell.

The danger Horkheimer (and so many others) saw in capitalist, instrumentalized, scientific society was that it would alienate and overwhelm the individual.

It is possible that society would alienate the individual anyway, though. For example, in the household of antiquity, were slaves unalienated? The privilege of autonomy is one that has always been rare but disproportionately articulated as normal, even a right. In a sense Western Democracies and Republics exist to guarantee autonomy to their citizens. In late modern democracies, autonomy is variable depending on role in society, which is tied to (economic, social, symbolic, etc.) capital.

So maybe the horror of Horkheimer, alienated by scientific advance, is the horror of one whose capital was being devalued by science. His scholarship, his erudition, were isolated and deemed irrelevant by the formal reasoners who had come to power.

As I write this, I am painfully aware that I have spent a lot of time in graduate school reading books and writing about them when I could have been practicing programming and learning more mathematics. My aspirations are to be a scientist, and I am well aware that that requires one to mathematically formalize one's findings–or, equivalently, to program them into a computer. (It goes without saying that computer programming is formalism, is automation, and so its central role in contemporary science or ‘data science’ is almost given to it by definition. It could not have been otherwise.)

Somehow I have been provoked into investing myself in a weaker form of capital, the benefit of which is the understanding that I write here, now.

Theoretically, the point of doing all this work is to be able to identify a societal value and formalize it so that it can be captured in a technical design. Perhaps autonomy is this value. Another might call it freedom. So once again I am reminded of Simone de Beauvoir’s philosophy of science, which has been correct all along.

But perhaps de Beauvoir was naive about the political implications of technology. Science discloses possibilities, but the opportunities are distributed unequally because science is socially situated. Inequality leads to more alienation, not less, for all but the scientists. Meanwhile, autonomy is not universally valued–some would prefer the comforts of society, of family structure. If freed from society, they would choose to reenter it. Much of one's preferences must come from habitus, no?

I am indeed reaching the limits of my ability to consider the problem discursively. The field is too multidimensional, too dynamic. The proper next step is computer simulation.
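By way of a placeholder, the crudest possible version of such a simulation might look like the sketch below. Every parameter is arbitrary and the model is my own invention for this post, not one I would defend:

```python
# Crude agent-based sketch: agents hold "formal" and "scholarly" capital;
# over time, formalization inflates the value of the former and devalues
# the latter. All parameters are arbitrary placeholders.

import random

random.seed(0)  # deterministic run for reproducibility


class Agent:
    def __init__(self):
        self.formal = random.random()     # mathematical/computational capital
        self.scholarly = random.random()  # erudition, discursive capital

    def status(self, formal_rate):
        # Social position as a weighted sum; the weight shifts over time.
        return formal_rate * self.formal + (1 - formal_rate) * self.scholarly


agents = [Agent() for _ in range(100)]

for step in range(10):
    formal_rate = step / 10  # science progressively rewards formalization
    statuses = sorted(a.status(formal_rate) for a in agents)
    # Track inequality crudely as the gap between top and bottom deciles.
    gap = statuses[-10] - statuses[9]
    print(f"t={step}: decile gap = {gap:.2f}")
```

Even a toy like this makes the multidimensionality explicit: one can vary how capital converts, how habitus shapes preferences, and watch the field evolve instead of arguing about it discursively.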

trust issues and the order of law and technology cf @FrankPasquale

I’ve cut to the last chapter of Pasquale’s The Black Box Society, “Towards an Intelligible Society.” I’m interested in where the argument goes. Now that I’ve gotten through it, I see that the penultimate chapter has Pasquale’s specific policy recommendations. But as I’m not just reading for policy and framing but also for tone and underlying theoretical commitments, I think it’s worth recording some first impressions before doubling back.

These are some points Pasquale makes in the concluding chapter that I wholeheartedly agree with:

  • A universal basic income would allow more people to engage in high risk activities such as the arts and entrepreneurship and more generally would be great for most people.
  • There should be publicly funded options for finance, search, and information services. A great way to provide these would be to fund the development of open source algorithms for finance and search. I’ve been into this idea for so long and it’s great to see a prominent scholar like Pasquale come to its defense.
  • Regulatory capture (or, as he elaborates following Charles Lindblom, “regulatory circularity”) is a problem. Revolving door participation in government and business makes government regulation an unreliable protector of the public interest.

There is quite a bit in the conclusion about the specifics of regulating the finance industry. There is an impressive amount of knowledge presented about this, and I’ll admit much of it is over my head. I’ll probably have a better sense of it if I get to reading the chapter that is specifically about finance.

There are some things that I found bewildering or off-putting.

For example, there is a section on “Restoring Trust” that talks about how an important problem is that we don’t have enough trust in the reputation and search industries. His solution is to increase the penalties that the FTC and FCC can impose on Google and Facebook for, e.g., privacy violations. The current penalties are too trivial to be an effective deterrent. But, Pasquale argues,

It is a broken enforcement model, and we have black boxes to thank for much of this. People can’t be outraged by what they can’t understand. And without some public concern about the trivial level of penalties for lawbreaking here, there are no consequences for the politicians ultimately responsible for them.

The logic here is a little mad. Pasquale is saying that people are not outraged enough by search and reputation companies to demand harsher penalties, and this is a problem because people don’t trust these companies enough. The solution is to convince people to trust these companies less–get outraged by them–in order to get them to punish the companies more.

This is a bit troubling, but makes sense based on Pasquale’s theory of regulatory circularity, which turns politics into a tug-of-war between interests:

The dynamic of circularity teaches us that there is no stable static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by respecting privacy or investing in sustainable growth), or industry is pushing regulators to promote its own interests.

There’s a simplicity to this that I distrust. It suggests, for one, that there are no public pressures on industry besides the government, such as consumers’ buying power. A lot of Pasquale’s arguments depend on the monopolistic power of certain tech giants. But while network effects are strong, it’s not clear whether this is such a problem that consumers have no market buy-in. In many cases tech giants compete with each other even when it looks like they aren’t. For example, many, many people have both Facebook and Gmail accounts. Since there is somewhat redundant functionality in both, consumers can rather seamlessly allocate their time, which is tied to advertising revenue, according to which service they feel better serves them, or which is best reputationally. So social media (which is a bit like a combination of a search and reputation service) is not a monopoly. Similarly, if people have multiple search options available to them because, say, they have both Siri on their smartphone and can search Google directly, then that provides an alternative search market.

Meanwhile, government officials are also often self-interested. If there is a road to hell for industry that is to provide free web services to people to attain massive scale, then abuse economic lock-in to extract value from customers, then lobby for further rent-seeking, there is a similar road to hell in government. It starts with populist demagoguery, leads to stable government appointment, and then leverages that power for rents in status.

So, power is power. Everybody tries to get power. The question is what you do once you get it, right?

Perhaps I’m reading between the lines too much. Of course, my evaluation of the book should depend most on the concrete policy recommendations which I haven’t gotten to yet. But I find it unfortunate that what seems to be a lot of perfectly sound history and policy analysis is wrapped in a politics of professional identity that I find very counterproductive. The last paragraph of the book is:

Black box services are often wondrous to behold, but our black-box society has become dangerously unstable, unfair, and unproductive. Neither New York quants nor California engineers can deliver a sound economy or a secure society. Those are the tasks of a citizenry, which can perform its job only as well as it understands the stakes.

Implicitly, New York quants and California engineers are not citizens, to Pasquale, a law professor based in Maryland. Do all real citizens live around Washington, DC? Are they all lawyers? If the government were to start providing public information services, either by hosting them themselves or by funding open source alternatives, would he want everyone designing these open algorithms (who would be quants or engineers, I presume) to move to DC? Do citizens really need to understand the stakes in order to get this to happen? When have citizens, en masse, understood anything, really?

Based on what I’ve read so far, The Black Box Society is an expression of a lack of trust in the social and economic power associated with quantification and computing that took off in the past few dot-com booms. Since expressions of distrust for these industries are nothing new, one might wonder (under the influence of Foucault) how the quantified order and the critique of the quantified order manage to coexist and recreate a system of discipline that includes both and maintains its power as a complex of superficially agonistic forces. I give sincere credit to Pasquale for advocating both serious income redistribution and public investment in open technology as ways of disrupting that order. But when he falls into the trap of engendering partisan distrust, he loses my confidence.

organizational secrecy and personal privacy as false dichotomy cf @FrankPasquale

I’ve turned from page 2 to page 3 of The Black Box Society (I can be a slow reader). Pasquale sets up the dichotomy around which the drama of the book hinges like so:

But while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods”, and gag rules, our own lives are increasingly open books. Everything we do online is recorded; the only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

That incongruity is the focus of this book.

This is a rhetorically powerful paragraph, and it captures a lot of the trepidation people have about the power of large organizations relative to themselves.

I have been inclined to agree with this perspective for a lot of my life. I used to be the kind of person who thought Everything Should Be Open. Since then, I’ve developed what I think is a more nuanced view of transparency: some secrecy is necessary. It can be especially necessary for powerful organizations and people.

Well, it depends on the physical properties of information. (Here is an example of how a proper understanding of the mechanics of information can support the transcendent project as opposed to a merely critical project).

Any time you interact with something or somebody else in a meaningful way, you affect the state of each other in probabilistic space. That means there has been some kind of flow of information. If an organization interacts with a lot of people, it is going to absorb information about a lot of people. Recording this information as ‘data’ is something that has been done for a long time because that is what allows organizations to do intelligent things vis-à-vis the people they interact with. So businesses, financial institutions, and governments recording information about people is nothing new.
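This point can be made concrete with a toy calculation of my own (nothing from the book): if an interaction leaves two parties' states statistically dependent, the mutual information between them is positive, i.e. information has flowed.

```python
# Toy illustration: an interaction that correlates two parties' states
# yields positive mutual information between them. The joint distribution
# below is invented for illustration: the record tends to match the action.
from math import log2

joint = {("logged_A", "did_A"): 0.4,
         ("logged_A", "did_B"): 0.1,
         ("logged_B", "did_A"): 0.1,
         ("logged_B", "did_B"): 0.4}

# Marginal distributions of the organization's record (x) and the person's action (y).
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0) + p
    py[y] = py.get(y, 0) + p

# I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) p(y)) )
mi = sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items())
print(f"I(record; action) = {mi:.3f} bits")  # positive: information has flowed
```

If the record were independent of the action, every term would vanish and the mutual information would be zero; any correlation at all makes it positive, which is all the argument above needs.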

Pasquale suggests that this recording is a threat to our privacy, and that the secrecy of the organizations that do the recording gives them power over us. But this is surely a false dichotomy. Why? Because if an organization records information about a lot of people, and then doesn’t maintain some kind of secrecy, then that information is no longer private! To, like, everybody else. In other words, maintaining secrecy is one way of ensuring confidentiality, which is surely an important part of privacy.

I wonder what happens if we continue to read The Black Box Society with this link between secrecy, confidentiality, and privacy in mind.

Is the opacity of governance natural? cf @FrankPasquale

I’ve begun reading Frank Pasquale’s The Black Box Society on the recommendation that it’s a good place to start if I’m looking to focus a defense of the role of algorithms in governance.

I’ve barely started and already found lots of juicy material. For example:

Gaps in knowledge, putative and real, have powerful implications, as do the uses that are made of them. Alan Greenspan, once the most powerful central banker in the world, claimed that today’s markets are driven by an “unredeemably opaque” version of Adam Smith’s “invisible hand,” and that no one (including regulators) can ever get “more than a glimpse at the internal workings of the simplest of modern financial systems.” If this is true, libertarian policy would seem to be the only reasonable response. Friedrich von Hayek, a preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to benevolent government intervention in the economy.

But what if the “knowledge problem” is not an intrinsic aspect of the market, but rather is deliberately encouraged by certain businesses? What if financiers keep their doings opaque on purpose, precisely to avoid and confound regulation? That would imply something very different about the merits of deregulation.

The challenge of the “knowledge problem” is just one example of a general truth: What we do and don’t know about the social (as opposed to the natural) world is not inherent in its nature, but is itself a function of social constructs. Much of what we can find out about companies, governments, or even one another, is governed by law. Laws of privacy, trade secrecy, the so-called Freedom of Information Act–all set limits to inquiry. They rule certain investigations out of the question before they can even begin. We need to ask: To whose benefit?

There are a lot of ideas here. Trying to break them down:

  1. Markets are opaque.
  2. If markets are naturally opaque, that is a reason for libertarian policy.
  3. If markets are not naturally opaque, then they are opaque on purpose, and that’s a reason to regulate in favor of transparency.
  4. As a general social truth, the social world is not naturally opaque but rather opaque or transparent because of social constructs such as law.

We are meant to conclude that markets should be regulated for transparency.

The most interesting claim to me is what I’ve listed as the fourth one, as it conveys a worldview that is both disputable and which carries with it the professional biases we would expect of the author, a Professor of Law. While there are certainly many respects in which this claim is true, I don’t yet believe it has the force necessary to carry the whole logic of this argument. I will be particularly attentive to this point as I read on.

The danger I’m on the lookout for is one where the complexity of the integration of society, which following Beniger I believe to be a natural phenomenon, is treated as a politically motivated social construct and therefore something that should be changed. It is really only the part after the “and therefore” which I’m contesting. It is possible for politically motivated social constructs to be natural phenomena. All institutions have winners and losers relative to their power. Who would a change in policy towards transparency in the market benefit? If opacity is natural, it would shift the opacity to some other part of society, empowering a different group of people. (Possibly lawyers).

If opacity is necessary, then perhaps we could read The Black Box Society as an expression of the general problem of alienation. It is way premature for me to attribute this motivation to Pasquale, but it is a guiding hypothesis that I will bring with me as I read the book.