Digifesto

Herbert Simon and the missing science of interagency

Few have ever written about the transformation of organizations by information technology with the clarity of Herbert Simon. Simon worked at a time when disciplines were being reconstructed and a shift in how rationality was understood was taking place. Older models of economic actors as profit-maximizing agents able to find their optimal action were giving way as both practical experience and the exact sciences told a different story.

The rationality employed by firms today is not the capacity to choose the best action–what Simon calls substantive rationality. It is the capacity to engage in steps to discover better ways of acting–procedural rationality.

So we proceed step by step from the simple caricature of the firm depicted in textbooks to the complexities of real firms in the real world of business. At each step towards realism, the problem gradually changes from choosing the right course of action (substantive rationality) to finding a way of calculating, very approximately, where a good course of action lies (procedural rationality). With this shift, the theory of the firm becomes a theory of estimation under uncertainty and a theory of computation.

Simon goes on to briefly describe the fields that he believes are poised to drive the strategic behavior of firms. These are Operations Research (OR) and artificial intelligence (AI). The goal of both these fields is to translate problems into mathematical specifications that can be executed by computers. There is some variation within these fields as to whether they aim at satisficing solutions or perfect answers to combinatorial problems, but for the purposes of this article they are the same–certainly the fields have cross-pollinated much since 1969.

Simon’s analysis was prescient. The impact of OR and AI on organizations simply can’t be overstated. My purpose in writing this is to point to the still unsolved analytical problems of this paradigm. Simon notes that the computational techniques he refers to percolate only so far up the corporate ladder.

OR and AI have been applied mainly to business decisions at the middle levels of management. A vast range of top management decisions (e.g. strategic decisions about investment, R&D, specialization and diversification, recruitment, development, and retention of managerial talent) are still mostly handled traditionally, that is, by experienced executives’ exercise of judgment.

Simon’s proposal for how to make these kinds of decisions more scientific is the paradigm of “expert systems”, which did not, as far as I know, take off. However, those were early days, and indeed at large firms AI techniques are now used to inform these kinds of executive decisions. Perhaps just as often, executives defend their own prerogative to exercise human judgment, for better or for worse.

The unsolved scientific problem that I find very motivating is based on a subtle divergence in how the intellectual fields have proceeded. Surely the economic value and consequences of business activities are wrapped up not in the behavior of an individual firm, but of many firms. Even a single firm contains many agents. While in the past the need for mathematical tractability led to assumptions of perfect rationality for these agents, we are now far past that and “the theory of the firm becomes a theory of estimation under uncertainty and a theory of computation.” But the theory of decision-making under uncertainty and the theory of computation are largely poised to address problems of solving a single agent’s specific task. The OR or AI system fulfills a specific function of middle management; it does not, by and large, oversee the interactions between departments, and so on. The complexity of what is widely called “politics” is not yet captured within the paradigms of AI, though anybody with an ounce of practical experience would note that politics is part of almost all organizational life.

How can these kinds of problems be addressed scientifically? What’s needed is a formal, computational framework for modeling the interaction of heterogeneous agents, and a systematic method of comparing the validity of these models. Interagential activity is necessarily quite complex; this is complexity that does not fit well into any available machine learning paradigm.
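To make the shape of what’s missing concrete, here is a minimal sketch of the kind of skeleton such a framework might start from (the agent names, decision procedures, and environment dynamics are all invented for illustration): heterogeneous agents, each with its own procedure for acting, coupled through a shared environment.

```python
import random

class Agent:
    """A boundedly rational agent with its own decision procedure."""
    def __init__(self, name, procedure):
        self.name = name
        self.procedure = procedure   # maps an observation to an action

    def act(self, observation):
        return self.procedure(observation)

def satisficer(options):
    # accept the first option that clears an aspiration level
    for option in options:
        if option >= 0.5:
            return option
    return max(options)

def random_chooser(options):
    return random.choice(options)

agents = [Agent("firm_A", satisficer), Agent("firm_B", random_chooser)]

# A shared environment: each round the agents choose among noisy options,
# and their joint choices shift the options available in the next round.
options = [random.random() for _ in range(5)]
for round_number in range(3):
    choices = {a.name: a.act(options) for a in agents}
    mean_choice = sum(choices.values()) / len(choices)
    options = [min(1.0, o + 0.1 * mean_choice) for o in options]
    print(round_number, choices)
```

The hard scientific problems are not in writing such a skeleton but in what comes after it: specifying realistic procedures, and systematically comparing the validity of competing models against observed interorganizational behavior.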

References

Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.

“Private Companies and Scholarly Infrastructure”

I’m proud to link to this blog post on the Cornell Tech Digital Life Initiative blog, written by Jake Goldenfein, Daniel Griffin, Eran Toch, and myself.

The academic funding scandals plaguing 2019 have highlighted some of the more problematic dynamics between tech industry money and academia (see e.g. Williams 2019, Orlowski 2017). But the tech industry’s deeper impacts on academia and knowledge production actually stem from the entirely non-scandalous relationships between technology firms and academic institutions. Industry support heavily subsidizes academic work. That support comes in the form of direct funding for departments, centers, scholars, and events, but also through the provision of academic infrastructures like communications platforms, computational resources, and research tools. In light of the reality that infrastructures are themselves political, it is imperative to unpack the political dimensions of scholarly infrastructures provided by big technology firms, and question whether they might problematically impact knowledge production and the academic field more broadly.

Goldenfein, Benthall, Griffin, and Toch, “Private Companies and Scholarly Infrastructure – Google Scholar and Academic Autonomy”, 2019

Among other topics, the post is about how the reorientation of academia onto commercial platforms possibly threatens the autonomy that is a necessary condition of the objectivity of science (Bourdieu, 2004).

This is perhaps a cheeky argument. Questioning whether Big Tech companies have an undue influence on academic work is not a popular move because so much great academic work is funded by Big Tech companies.

On the other hand, calling into question the ethics of Big Tech companies is now so mainstream that it is actively debated in the Democratic 2020 primary by front-running candidates. So we are well within the Overton window here.

On a philosophical level (which is not the primary orientation of the joint work), I wonder how much these concerns are about the relationship of capitalist modes of production and ideology to academic scholarship in general, and how much this specific manifestation (Google Scholar’s becoming the site of a disciplinary collapse (Benthall, 2015) in scholarly metrics) is significant. Like many contemporary problems in society and technology, the “problem” may be that a technical intervention that might at one point have seemed like a desirable intervention by challengers (in the Fligstein (1997) field theory sense) is now having a political impact that is questioned and resisted by incumbents. I.e., while there has always been a critique of the system, the system has changed and so the critique comes from a different social source.

References

Benthall, S. (2015). Designing networked publics for communicative action. Interface, 1(1), 3.

Bourdieu, P. (2004). Science of science and reflexivity. Polity.

Fligstein, N. (1997). Social skill and institutional theory. American Behavioral Scientist, 40(4), 397-405.

Orlowski, A. (2017). Academics “funded” by Google tend not to mention it in their work. The Register, 13 July 2017.

Williams, O. (2019). How Big Tech funds the debate on AI Ethics. New Statesman America, 6 June 2019 <https://www.newstatesman.com/science-tech/technology/2019/06/how-big-tech-funds-debate-ai-ethics>.

Ashby’s Law and AI control

I’ve recently discovered Ashby’s Law, also known as the First Law of Cybernetics, by reading Stafford Beer’s “Designing Freedom” lectures. Ashby’s Law is a powerful idea, one I’ve been grasping at intuitively for some time. For example, here I was looking for something like it and thought I could get it from the Data Processing Inequality in information theory. I have not yet grokked the mathematical definition of Ashby’s Law, which I gather is in Ross Ashby’s An Introduction to Cybernetics. Though I am not sure yet, I expect the formulation there can use an update. But if I am right about its main claims, I think the argument of this post will stand.

Ashby’s Law is framed in terms of ‘variety’, which is the number of states that it is possible for a system to be in. A six-sided die has six possible states (if you’re just looking at the top of it). A laptop has many more. A brain has many more even than that. A complex organization with many people in it, all with laptops, has even more. And so on.

The law can be stated in many ways. One of them is that:

When the variety or complexity of the environment exceeds the capacity of a system (natural or artificial) the environment will dominate and ultimately destroy that system.

The law is about the relationship between a system and its environment. Or, in another sense, it is about a system to be controlled and a different system that tries to control it. The claim is that the control unit needs at least as much variety as the system to be controlled in order to be effective.
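A toy way to see the claim, which is my own illustration rather than Ashby’s formulation: let a disturbance and a controller’s action jointly determine an outcome, and count how few distinct outcomes any control strategy can achieve. If the controller has fewer distinct actions (less variety) than there are disturbances, no strategy can hold the outcome to a single state.

```python
from itertools import product

disturbances = [0, 1, 2]   # three possible disturbances (variety 3)

def minimum_outcome_variety(actions):
    """Smallest number of distinct outcomes any control strategy can achieve.

    A strategy maps each observed disturbance to an action; the outcome is
    (disturbance + action) mod 3. The requisite-variety claim is that this
    minimum is at least len(disturbances) / len(actions).
    """
    best = len(disturbances)
    for strategy in product(actions, repeat=len(disturbances)):
        outcomes = {(d + a) % 3 for d, a in zip(disturbances, strategy)}
        best = min(best, len(outcomes))
    return best

print(minimum_outcome_variety([0, 1, 2]))  # 1 -- enough variety for full control
print(minimum_outcome_variety([0, 1]))     # 2 -- the outcome cannot be held constant
```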

This reminds me of an argument I had with a superintelligence theorist back when I was thinking about such things. The Superintelligence people, recall, worry about an AI getting the ability to improve itself recursively and causing an “intelligence explosion”. Its own intelligence, so to speak, explodes, surpassing all other intelligent life and giving it total domination over the fate of humanity.

Here is the argument that I posed a few years ago, reframed in terms of Ashby’s Law:

  • The AI in question is a control unit, C, and the world it would control is the system, S.
  • For the AI to have effective domination over S, C would need at least as much variety as S.
  • But S includes C within it. The control unit is part of the larger world.
  • Hence, no C can perfectly control S.

Superintelligence people will no doubt be unsatisfied by this argument. The AI need not be effective in the sense dictated by Ashby’s Law. It need only be capable of outmaneuvering humans. And so on.

However, I believe the argument gets at why it is difficult for complex control systems to ever truly master the world around them. It is very difficult for a control system to have effective control over itself, let alone itself in a larger systemic context, without some kind of order constraining the behavior of the total system (the system including the control unit) imposed from without. The idea that it is possible to gain total mastery or domination through an AI or better data systems is a fantasy because the technical controls add their own complexity to the world that is to be controlled.

This is a bit of a paradox, as it raises the question of how any control units work at all. I’ll leave this for another day.

Bridging between transaction cost and traditional economics

Some time ago I was trying to get my head around transaction cost economics (TCE) because of its implications for the digital economy and cybersecurity. (1, 2, 3, 4, 5). I felt like I had a good grasp of the relevant theoretical claim of TCE, which is the interaction between asset specificity and the make-or-buy decision. But I didn’t have a good sense of the mechanism that drove that claim.

I worked it out yesterday.

Recall that in the make-or-buy decision, a firm is determining whether to make some product in-house or to buy it from the market. This is a critical decision made by software and data companies, as often these businesses operate by assembling components and data streams into a new kind of service; these services often are the components and data streams used in other firms. And so on.

The most robust claim of TCE is that if the asset (component, service, data stream) is very specific to the application of the firm, then the firm will be more likely to make it. If the asset is more general-purpose, then the firm will buy it as a commodity on the market.

Why is this? TCE does not attempt to describe this phenomenon in a mathematical model, at least as far as I have found. Nevertheless, this can be worked out with a much more general model of the economy.

Assume that for some technical component there are fixed costs f and marginal costs c. Consider two extreme cases: in case A, the asset is so specific that only one firm will want to buy it. In case B, the asset is so general that many firms want to purchase it.

In case A, a vendor will have costs of f + c and so will only make the good if the buyer can compensate them at least that much. At the point where the buyer is paying for both the fixed and marginal costs of the product, they might as well own it! If there are other discovered downstream uses for the technology, that’s a revenue stream. Meanwhile, since the vendor in this case will have lock-in power over the buyer (because switching will mean paying the fixed cost to ramp up a new vendor), that gives the vendor market power. So, better to make the asset.

In case B, there’s broader market demand. It’s likely that there are already multiple vendors in place who have made the fixed cost investment. The price to the buying firm is going to be closer to c, since competition drives the market price toward marginal cost over time, as opposed to c + f, which includes the fixed cost. Because there are multiple vendors, lock-in is not such an issue. Hence the good becomes a commodity.
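A crude numeric sketch of the comparison (the numbers are invented; only the shape of the argument matters):

```python
def buy_price(fixed, marginal, n_buyers):
    """Stylized price a vendor can charge.

    With one buyer, the vendor must recover the full fixed cost from that
    buyer; with many buyers, competition pushes the price toward marginal
    cost, with the fixed cost spread across the market.
    """
    return marginal + fixed / n_buyers

fixed, marginal = 100.0, 10.0
make_cost = fixed + marginal                      # in-house: pay both anyway

print(buy_price(fixed, marginal, n_buyers=1))     # 110.0 -- no cheaper than making it
print(buy_price(fixed, marginal, n_buyers=50))    # 12.0  -- commodity pricing
print(make_cost)                                  # 110.0
```

In case A, buying costs as much as making and adds lock-in risk; in case B, buying is far cheaper than making.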

A few notes on the implications of this for the informational economy:

  • Software libraries have high fixed cost and low marginal cost. The tendency of companies to tilt to open source cores with their products built on top is a natural result of the market. The modularity of open source software is in part explained by the ways “asset specificity” is shaped exogenously by the kinds of problems that need to be solved. The more general the problem, the more likely the solution has been made available open source. Note that there is still an important transaction cost at work here, the search cost. There’s just so many software libraries.
  • Data streams can vary a great deal as to whether and how they are asset specific. When data streams are highly customized to the downstream buyer, they are specific; the customization is both costly to the vendor and adding value to the buyer. However, it’s rarely possible to just “make” data: it needs to be sourced from somewhere. When firms buy data, it is normally in a subscription model that takes into account industrial organization issues (such as lock in) within the pricing.
  • Engineering talent, and related labor costs, are interesting in that for a proprietary system, engineering human capital gains tend to be asset specific, while for open technologies engineering skill is a commodity. The structure of the ‘tech business’, which requires mastery of open technology in order to build upon it a proprietary system, is a key dynamic that drives the software engineering practice.

There are a number of subtleties I’m missing in this account. I mentioned search costs in software libraries. There’s similar costs and concerns about the inherent riskiness of a data product: by definition, a data product is resolving some uncertainty with respect to some other goal or values. It must always be a kind of credence good. The engineering labor market is quite complex in no small part because it is exposed to the complexities of its products.

The ontology of software, revisited

I’m now a software engineer again after many years doing and studying other things. My first-person experience, my phenomenological relationship with this practice, is different this time around. I’ve been meaning to jot down some notes based on that fresh experience. Happily, there’s resonance with topics of my academic focus as well. I’m trying to tease out these connections.

To briefly recap: There’s a recurring academic discourse around technology ethics. Roughly speaking, it starts with a concern about a newish technology that has media or funding agency interest. Articles then get written capitalizing on this hot topic; these articles are fractured according to the disciplinary background of their authors.

  • Engineers try to come up with an improved version of the technology.
  • Lawyers try to come up with ways to regulate the production and use of the technology broadly speaking.
  • Organizational sociologists come up with institutional practices (‘ethics boards’, ‘contestability’) which would prevent the technology from being misused.
  • Critical theorists argue that the technology would be less worrisome if representational desiderata within the field of technology production were better met.
  • … and so on.

This is a very active and interesting discourse, but from my (limited) perspective, it rarely impacts industry practice. This isn’t because people in industry don’t care about the ethical implications of their work. It’s because people in industry are engaged full-time in a different discourse. This is the discourse of industry practitioners.

My industrial background is in software development and data science. Obviously there are other kinds of industrial work–hardware, biotech, etc. But it’s fair to say that a great deal of the production of “technology” in the 21st century is, specifically, software development. And my point here is that software development has its own field of discourse that is rich and vivid and a full-time job to keep up with. Here’s some examples of what I’m getting at:

  • There is always-already a huge world of communication between engineers about what technologies are interesting, how to use them effectively, how they compare with prior technologies, the implications of these trends for technical careers, and so on. Browse Hacker News. Look at industry software conferences.
  • There’s also a huge world of industrial discussion about the social practices of software development. A lot of my knowledge of this is a bit dated. But as I come back to industry, I find myself looking back to now-classic sources on how to work effectively on software. I’m linking to articles from Joel Spolsky’s blog. I’m ordering a copy of Fred Brooks’s classic The Mythical Man-Month.
  • I’m reading documentation, endlessly, about how to configure and use the various SaaS, IaaS, PaaS, etc. tools that are now necessary parts of full-stack development. When the documentation is limited, I’m engaging with customer service people of technical products, who have their own advice, practices, etc.

This is a complex world of literature and practice. Part of what makes it complex is that it is always-already densely documented and self-referential, enacted by smart and literate people, most of whom are quite socially skilled. It’s people working full-time jobs in a field that is now over 40 years old.

I’ve argued in other posts that if we want to solve the ‘technology ethics’ problem, we should see it as an economic problem. At a high level, I still believe that’s true. I want to qualify that point though, and say: now that I’m back in a more engaged position with respect to the field of technical production, I believe there are institutional/organizational ways to address broader social concerns through interventions on engineering practice.

What is missing, in my view, is a sincere engagement with the nitty-gritty of engineering practice itself. I know there are anthropologists who think they do this. I haven’t read anybody who really does it in their writing, and I believe the reason is that anthropologists writing for other academic anthropologists are not going to write what would actually be useful here: a guide for product and project management that would likely recapitulate a lot of conventional (but too often ignored) wisdom about software engineering “best practices”–documentation, testing, articulation of use cases, etc. These are the kinds of things that improve technical quality in a real way.

Now that I write this, I recall that the big ethics research teams at, say, Google, do stuff like this. It’s great.


I was going to say something about the ontology of software.

Recall: I have a position on the ontology of data, which I’ve called Situated Information Flow Theory (SIFT). I worked hard on it. According to SIFT, an information flow is a causal flow situated in a network of other causal relations. The meaning of the information depends on that causally defined situation.

What then is software?

“Software” refers to sets of instructions written by people in a specialized “programming” language as text data, which are then interpreted or compiled by a machine. In paradigmatic industrial practice (I’m simplifying, bear with me), ultimately these instructions will be used to control the behavior of a machine that interfaces with the world in a real-time, consequential way. This latter machine is referred to, internally, as being “in production”.

When you’re programming a technical product, first you write software “in development”. You are writing drafts of code. You get your colleagues to review it. You link up the code you wrote to the code the other team wrote and you see if it works together. There is a long and laborious process of building tests for new requirements and fixing the code so that it meets those requirements. There are designs, and redesigns, of internal and external facing features. The complexity of the total task is divided up into modules; the boundaries of those modules shift over time. The social structure of the team adapts as new modules become necessary.

There is an isomorphism, a well documented phenomenon in organizational social theory, between the technology being created and the social structure that creates it. The team structure mirrors the software architecture.

When the pieces are in place adequately enough–and when the investors/management has grown impatient enough–the software is finally “deployed to production”. It “goes live”. What was an internal exercise is now a process with reputational consequences for the business, as well as possibly real consequences for the users of the technology.

Inevitably, the version of the product “in production” is not complete. There are errors. There are new features requested. So the technology firm now organizes itself around several “cycles” running at different frequencies in parallel. There’s a “development cycle” of writing new software code. There’s a “release cycle” of packaging new improvements into bundles that are documented and tested for quality. The releases are deployed to production on a schedule. Different components may have different development and release cycles. The impedance match or mismatch between these cycles becomes its own source of robustness or risk. (I’ve done some empirical research work on this.)

What does this mean for the ontology of software?

The first thing it means is that the notion that software is a static artifact, something like either a physical object (like a bicycle) or a publication (like a book) is mostly irrelevant to what’s happening. The software production process depends on the fluidity of source code. When software is deployed “as a service”, it’s dubious for it to qualify as a “creative work”, subject to copyright law, except by virtue of legal inertia. Something totally different is going on.

The second thing it means is that the live technical product is an ongoing institutional accomplishment. It’s absurd to ever say that humans are not “in the loop”. This is one of the big insights of the critical/anthro reaction to “Big Tech” in the past five years or so. But it has also been common knowledge within the industry for fifteen years or so.

The third thing it means is that software is the structuring of a system of causal relations. Software, when it’s deployed, determines what causes what. See above for a definition of the nature of information: it’s a causal flow situated in other causal relations. The link between software and information is then quite clear and direct. Software (as far as it goes) is a definition of a causal situation.

The fourth thing it means is that software products are the result of agreement between people. Software only makes it into production if it has gotten there through agreed-upon processes by the team that deploys it. The strength of software is in the collective input that went into it. In a sense, software is much more like a contract, in legal terms, than it is like a creative work. In the extended network of human and machine actors, software is the result of, the expression of, self-regulation first. Only secondarily does it, in Lessig’s terms, become a regulatory force more broadly.

What is software? Software is a form of social structure.

ethnography is not the only social science tool for algorithmic impact assessment

Quickly responding to Selbst, Elish, and Latonero’s “Accountable Algorithmic Futures“, Data & Society’s response to the Algorithmic Accountability Act of 2019…

The bill would empower the FTC to do “automated decision systems impact assessment” (ADSIA) of automated decision-making systems. The article argues that the devil is in the details and that the way the FTC goes about these assessments will determine their effectiveness.

The point of their article, which I found notable, is to assert the appropriate intellectual discipline for these impact assessments.

This is where social science comes in. To effectively implement the regulations, we believe that engagement with empirical inquiry is critical. But unlike the environmental model, we argue that social sciences should be the primary source of information about impact. Ethnographic methods are key to getting the kind of contextual detail, also known as “thick description,” necessary to understand these dimensions of effective regulation.

I want to flag this as weird.

There is an elision here between “the social sciences” and “ethnographic methods”, as if there were no social sciences that were not ethnographic. And then “thick description” is implied to be the only source of contextual detail that might be relevant to impact assessments.

This is a familiar mantra, but it’s also plainly wrong. There are many disciplines and methods within “the social sciences” that aren’t ethnographic, and many ways to get at contextual detail that do not involve “thick description”. There is a worthwhile and interesting intellectual question here: what are the appropriate methods for algorithmic impact assessment? The authors of this piece assume an answer to that question without argument.

Neutral, Autonomous, and Pluralistic conceptions of law and technology (Hildebrandt, Smart Technologies, sections 8.1-8.2)

Continuing notes and review of Part III of Hildebrandt’s Smart Technologies and the End(s) of Law, we begin chapter 8, “Intricate entanglements of law and technology”. This chapter culminates in some very interesting claims about the relationship between law and the printing press/text, which I anticipate will provide some very substantive conclusions.

But the chapter warms up with a review of philosophical/theoretical positions on law and technology more broadly. Section 8.2 is structured as a survey of these positions, and in an interesting way: Hildebrandt lays out Neutral, Autonomous, and Pluralistic conceptions of both technology and law in parallel. This approach is dialectical. The Neutral and Autonomous conceptions are, Hildebrandt argues, narrow and naive; the Pluralistic conception captures nuances necessary to understand not only what technology and law are, but how they relate to each other.

The Neutral Conception

This is the conception of law and technology as mere instruments. A particular technology is not good or bad, it all depends on how it’s used. Laws are enacted to reach policy aims.

Technologies are judged by their affordances. The goals for which they are used can be judged, separately, using deontology or some other basis for the evaluation of values. Hildebrandt has little sympathy for this view: “I believe that understanding technologies as mere means amounts to taking a naive and even dangerous position”. That’s because, for example, technology can impact the “in-between” of groups and individuals, thereby impacting privacy by its mere usage. This echoes the often cited theme of how artifacts have politics (Winner, 1980): by shaping the social environment by means of their affordances.

Law can also be thought of as a neutral instrument. In this case, it is seen as a tool of social engineering, evaluated for its effects. Hildebrandt says this view of law fits “the so-called regulatory paradigm”, which “reigns in policy circles, and also in policy science, which is a social science inclined to take an exclusively external perspective on the law”. On this view, the law regulates behavior externally, rather than guiding the actions of citizens internally.

Hildebrandt argues that when law is viewed instrumentally, it is tempting to then propose that the same instrumental effects could be achieved by technical infrastructure. “Techno-regulation is a prime example of what rule by law ends up with; replacing legal regulation with technical regulation may be more efficient and effective, and as long as the default settings are a part of the hidden complexity people simply lack the means to contest their manipulation.” This view is aligned with Lessig’s (2009), which Hildebrandt says is “deeply disturbing”; as it is aligned with “the classical law and economics approach of the Chicago School”, it falls short…somehow. This argument will be explicated in later sections.

Comment

Hildebrandt’s criticism of the neutral conception of technology is that it does not register how technology (especially infrastructure) can have a regulatory effect on social life, and so have consequences that can be normatively evaluated without bracketing out the good or bad uses of it by individuals. This narrow view of technology is precisely the one that scholars like Lessig have triumphed over.

Hildebrandt’s criticism of the neutral conception of law is different. It is that understanding law primarily by its external effects (“rule by law”) diminishes the true normative force of a more robust legality that sees law as necessarily enacted and performed by people (“Rule of Law”). But nobody would seriously think that “rule by law” is not “neutral” in the same sense that some people think technology is neutral.

The misalignment of these two positions, which are presented as if they are equivalent, obscures a few alternative positions in the logical space of possibilities. There are actually two different views of the neutrality of technology: the naive one that Hildebrandt takes time to dismiss, and the more sophisticated view that technology should be judged by its social effects just as an externally introduced policy ought to be.

Hildebrandt shoots past this view, as developed by Lessig and others, in order to get to a more robust defense of Rule of Law. But it has to be noted that this argument for the equivalence of technology and law within the paradigm of regulation has beneficial implications if taken to its conclusion. For example, in Deirdre Mulligan’s FAT* 2019 keynote, she argued that public sector use of technology, if recognized as a form of policy, would be subject to transparency and accountability rules under laws like the Administrative Procedure Act.

The Autonomous Conception

In the autonomous conception of technology and law, there is no agent using technology or law for particular ends. Rather, Technology and Law (capitalized) act with their own abstract agency on society.

There are both optimistic and pessimistic views of Autonomous Technology. There is hyped-up Big Data Solutionism (BDS), and there are dystopian views of Technology as the enframing, surveilling, overpowering danger (as in Heidegger). Hildebrandt argues that these are both naive and dangerous views that prevent us from taking seriously the differences between particular technologies. Hildebrandt maintains that particular design decisions in technology matter. We just have to think about the implications of those decisions in a way that doesn’t deny the continued agency involved in the continuous improvement, operation, and maintenance of the technology.

Hildebrandt associates the autonomous conception of law with legal positivism, the view of law as a valid, existing rule-set that is strictly demarcated from either (a) social or moral norms, or (b) politics. The law is viewed as legal conditions for legal effects, enforced by a sovereign with a monopoly on violence. Law, in this sense, legitimizes the power of the state. It also creates a class of lawyers whose job it is to interpret, but not make, the law.

Hildebrandt’s critique of the autonomous conception of law is that it gives the law too many blind spots. If Law is autonomous, it does not need to concern itself with morality, or with politics, or with sociology, and especially not with the specific technology of Information-Communications Infrastructure (ICI). She does not come out and say this outright, but the implication is that this view of Law is fragile given the way changes in the ICI are rocking the world right now. A more robust view of law would give better tools for dealing with the funk we’re in right now.

The Pluralistic Conception

The third view of technology and law, the one that Hildebrandt endorses, is the “pluralistic” or “relational” view of law. It does not come as a surprise after the exploration of the “neutral” and “autonomous” conceptions.

The way I like to think about this, the pluralistic conception of technology/law, is: imagine that you had to think about technology and law in a realistic way, unburdened by academic argument of any kind. Imagine, for example, a room in an apartment. Somebody built the room. As a consequence of the dimensions of the room, you can fit a certain amount of furniture in it. The furniture has affordances; you can sit at chairs and eat at tables. You might rearrange the furniture sometimes if you want a different lifestyle for yourself, and so on.

In the academic environment, there are branches of scholarship that like to pretend they discovered this totally obvious view of technology for the first time in, like, the 70’s or 80’s. But that’s obviously wrong. As Winner (1980) points out, when Ancient Greeks were building ships, they obviously had to think about how people would work together to row and command the ship, and built it to be functional. Civil engineering, transportation engineering, and architecture are fields that deal with socially impactful infrastructure, and they have to deal with the ways people react, collectively, to what was built. I can say from experience doing agile development of software infrastructure that software engineers, as well, think about their users when they build products.

So, we might call this the “realistic” view–the view held by engineers, who are best situated to understand the processes of producing and maintaining technology, since that’s their life.

I’ve never been a lawyer, but I believe one gets to the pluralistic, or relational, view of law in pretty much the same way. You look at how law has actually evolved, historically, and how it has always been wrapped up in politics and morality and ICI’s.

So, in these sections, Hildebrandt drives home in a responsible, scholarly way the fact that neither law nor technology (especially technological infrastructure, and especially ICI) is autonomous–they are historically situated creations of society–and nor are they instrumentally neutral–they do have a form of agency in their own right.

As my comment above notes, to me the most interesting part of this chapter was the gaps and misalignment in the section on the Neutral Conception. That conception seems most aligned with an analytically clear, normative account of what law and technology are supposed to be doing, which is what makes the perspective enduringly attractive to those who make them. The messiness of the pluralistic view, while more nuanced, does not provide a guide for design.

By sweeping away the Neutral conception of law as instrumental, Hildebrandt preempts arguments that the law might fail to attain its instrumental goals, or that the goals of law might sometimes be attained through infrastructure. In other words, Hildebrandt is trying to avoid a narrow instrumental comparison between law and technology, and highlights instead that they are relationally tied to each other in a way that prevents either from being a substitute for the other.

References

Hildebrandt, Mireille. Smart technologies and the end(s) of law: novel entanglements of law and technology. Edward Elgar Publishing, 2015.

Lessig, Lawrence. Code: And other laws of cyberspace. ReadHowYouWant.com, 2009.

Winner, Langdon. “Do artifacts have politics?.” Daedalus (1980): 121-136.

Antinomianism and purposes as reasons against computational law (Notes on Hildebrandt, Smart Technologies, Sections 7.3-7.4)

Many thanks to Jake Goldenfein for discussing this reading with me and coaching me through interpreting it in preparation for writing this post.

Following up on the discussion of sections 7.1-7.2 of Hildebrandt’s Smart Technologies and the End(s) of Law (2015), this post discusses the next two sections. The main questions left from the last section are:

  • How strong is Hildebrandt’s defense of the Rule of Law, as she explicates it, as worth preserving despite the threats to it that she acknowledges from smart technologies?
  • Is the instrumental power of smart technology (i.e., its predictive function, which for the sake of argument we will accept is more powerful than unassisted human prognostication) somehow a substitute for Law, as in its pragmatist conception?

In sections 7.3-7.4, Hildebrandt discusses the eponymous ends of law. These are not its functions as they could be externally and sociologically validated, but rather its internally recognized goals or purposes. And these are not particular goals, such as environmental justice, that we might want particular laws to achieve. Rather, these are abstract goals that the law as an entire ‘regime of veridiction’ aims for. (“Veridiction” means “a statement that is true according to the worldview of a particular subject, rather than objectively true.”) The idea is that the law has a coherent worldview of its own.

Hildebrandt’s description of law is robust and interesting. Law “articulates legal conditions for legal effect.” Legal personhood (a condition) entails certain rights under the law (an effect). These causes-and-effects are articulated in language, and this language does real work. In Austin’s terminology, legal language is performative–it performs things at an institutional and social level. Relatedly, the law is experienced as a lifeworld, or Welt, but not a monolithic lifeworld that encompasses all experience; it is one of many worlds that we use to navigate reality, a ‘mode of existence’ that ‘affords specific roles, actors and actions while constraining others’. [She uses Latour to make this point, which in my opinion does not help.] It is interesting to compare this view of society with Nissenbaum’s (2009) view of society differentiated into spheres, constituted by actor roles and norms.

In section 7.3.2, Hildebrandt draws on Gustav Radbruch for his theory of law. Consistent with her preceding arguments, she emphasizes that for Radbruch, law is antinomian (a strange term), meaning that it is internally contradictory and unruly with respect to its aims. And there are three such aims that are in tension:

  • Justice. Here, justice is used rather narrowly to mean that equal cases should be treated equally. In other words, the law must be applied justly/fairly across cases. To use her earlier framing, justice/equality implies that legal conditions cause legal effects in a consistent way. In my gloss, I would say this is equivalent to the formality of law, in the sense that the condition-effect rules must address the form of a case, and not treat particular cases differently. More substantively, Hildebrandt argues that Justice breaks down into more specific values: distributive justice, concerning the fair distribution of resources across society, and corrective justice, concerning the righting of wrongs through, e.g., torts.
  • Legal certainty. Legal rules must be binding and consistent, whether or not they achieve justice or purpose. “The certainty of the law requires its positivity; if it cannot be determined what is just, it must be decided what is lawful, and this from a position that is capable of enforcing the decision.” (Radbruch). Certainty about how the law will be applied, whether or not the application of the law is just (which may well be debated), is a good in itself. [A good example of this is law in business, which is famously one of the conditions for the rise of capitalism.]
  • Purpose. Beyond just/equal application of the law across cases and its predictable positivity, the law aims at other purposes such as social welfare, redistribution of income, guarding individual and public security, and so on. None of these purposes is inherent in the law, for Radbruch; but in his conception of law, by its nature it is directed by democratically determined purposes and is instrumental to them. These purposes may flesh out the normative detail that’s missing in a more abstract view of law.

Two moves by Hildebrandt in this section seem particularly substantial to her broader argument and corpus of work.

The first is the emphasis on the tension between the antinomian conflict among justice, certainty, and purpose and the principle of legal certainty itself. Law, at any particular point in time, may fall short of justice or purpose, and must nevertheless be predictably applied. It also needs to be able to evolve towards its higher ends. This, for Hildebrandt, reinforces the essentially ambiguous and linguistic character of law.

[Radbruch] makes it clear that a law that is only focused on legal certainty could not qualify as law. Neither can we expect the law to achieve legal certainty to the full, precisely because it must attend to justice and to purpose. If the attribution of legal effect could be automated, for instance by using a computer program capable of calculating all the relevant circumstances, legal certainty might be achieved. But this can only be done by eliminating the ambiguity that inheres in human language: it would reduce interpretation to mindless application. From Radbruch’s point of view this would fly in the face of the cultural, value-laden mode of existence of the law. It would refute the performative nature of law as an artificial construction that depends on the reiterant attribution of meaning and decision-making by mindful agents.

Hildebrandt, Smart Technologies, p. 149

The other move that seems particular to Hildebrandt is the connection she draws between purpose as one of the three primary ends of law and purpose-binding as a feature of governance. The latter has particular relevance to technology law through its use in data protection, such as in the GDPR (which she addresses elsewhere in work like Hildebrandt, 2014). The idea here is that purposes do not just imply a positive direction of action; they also restrict activity to only those actions that support the purpose. This allows separate institutions to exist in tension with each other, with a balance of power that’s necessary to support diverse and complex functions. Hildebrandt uses a very nice classical mythology reference here:

The wisdom of the principle of purpose binding relates to Odysseus’s encounter with the Sirens. As the story goes, the Sirens lured passing sailors with the enchantment of their seductive voices, causing their ships to crash on the rocky coast. Odysseus wished to hear their song without causing a shipwreck; he wanted to have his cake and eat it too. While he has himself tied to the mast, his men have their ears plugged with beeswax. They are ordered to keep him tied tight, and to refuse any orders he gives to the contrary, while being under the spell of the Sirens as they pass their island. And indeed, though he is lured and would have caused death and destruction if his men had not been so instructed, the ship sails on. This is called self-binding. But it is more than that. There is a division of tasks that prevents him from untying himself. He is forced by others to live by his own rules. This is what purpose binding does for a constitutional democracy.

Hildebrandt, Smart Technologies, p. 156

I think what’s going on here is that Hildebrandt understands that actually getting the GDPR enforced over the whole digital environment is going to require a huge extension of the powers of law over business, organizational, and individual practice. From some corners there is pessimism about the viability of the European data protection approach (Koops, 2014), which argues that it can’t really be understood or implemented well. Hildebrandt is making a big bet here, essentially saying: purpose-binding on data use is just a natural part of the power of law in general, as a socially performed practice. There’s nothing contingent about purpose-binding in the GDPR; it’s just the most recent manifestation of purpose as an end of law.

Commentary

It’s pretty clear what the agenda of this work is. Hildebrandt is defending the Rule of Law as a social practice of lawyers using admittedly ambiguous natural language over the ‘smart technologies’ that threaten it. This involves both a defense of law as being intrinsically about lawyers using ambiguous natural language, and the power of that law over businesses, etc. For the former, Hildebrandt invokes Radbruch’s view that law is antinomian. For the second point, she connects purpose-binding to purpose as an end of law.

I will continue to play the skeptic here. As is suggested in the quoted passage, if one takes legal certainty seriously, then one could easily argue that software code leads to more certain outcomes than natural language based rulings. Moreover, to the extent that justice is a matter of legal formality–attention to the form of cases, and excluding from consideration irrelevant content–that too weighs in favor of articulating law in formal logic, which is relatively easy to translate into computer code.
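To make the skeptic’s point concrete, here is a deliberately crude sketch of what “legal conditions for legal effect” can look like once articulated formally. The rule and all of its terms are invented for illustration; they are not drawn from Hildebrandt or from any actual statute.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A stylized case: facts the rule treats as legally relevant conditions."""
    is_legal_person: bool
    acted_negligently: bool
    damage_caused: float

def liability_effect(case: Case) -> float:
    """A condition -> effect rule: if the conditions hold, the effect follows.

    This is the sense in which formally articulated rules yield consistent
    outcomes across cases -- and also the sense in which they eliminate the
    interpretive ambiguity that Hildebrandt wants to preserve.
    """
    if case.is_legal_person and case.acted_negligently:
        return case.damage_caused      # owes compensation equal to the damage
    return 0.0

print(liability_effect(Case(True, True, 1000.0)))    # 1000.0
print(liability_effect(Case(True, False, 1000.0)))   # 0.0
```

Whether collapsing interpretation into this kind of mindless application is a gain in certainty or a loss of legality is exactly what is at issue between Radbruch’s view and the computational one.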

Hildebrandt seems to think that there is something immutable about computer code, in a way that natural language is not. That’s wrong. Software is not built like bridges; software today is written by teams working rapidly to adapt it to many demands (Gürses and Hoboken, 2017). Recognizing this removes one of the major planks of Hildebrandt’s objection to computational law.

It could be argued that “legal certainty” implies a form of algorithmic interpretability: the key question is “certain for whom”. An algorithm that is opaque due to its operational complexity (Burrell, 2016) could, as an implementation of a legal decision, be less predictable to non-specialists than a simpler algorithm. So the tension in a lot of ‘algorithmic accountability’ literature between performance and interpretability would then play directly into the tension, within law, between purpose/instrumentality and certainty-to-citizens.

Overall, the argument here is not compelling yet as a refutation of the idea of law implemented as software code.

As for purpose-binding and the law, I think this may well be the true crux. I wonder if Hildebrandt develops it later in the book. There are not a lot of good computer science models of purpose binding. Tschantz, Datta, and Wing (2012) do a great job mapping out the problem but that research program has not resulted in robust technology for implementation. There may be deep philosophical/mathematical reasons why that is so. This is an angle I’ll be looking out for in further reading.
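For what it’s worth, the naive version of purpose binding is easy to sketch in code; the names and structure below are my own invention. What Tschantz, Datta, and Wing grapple with is precisely the part this sketch dodges: giving a semantics to what a use is actually for, rather than taking a declared purpose label on faith.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A datum tagged with the purpose for which it was collected."""
    value: str
    collected_for: str        # e.g. "billing"

def use(record: Record, declared_purpose: str) -> str:
    """Naive purpose-binding check: allow a use only if the declared purpose
    matches the purpose of collection. The declared purpose is taken on
    faith, which is exactly the gap that makes purpose binding hard to
    enforce computationally."""
    if declared_purpose != record.collected_for:
        raise PermissionError(
            f"use for '{declared_purpose}' violates purpose binding "
            f"(collected for '{record.collected_for}')"
        )
    return record.value

record = Record(value="+1-555-0100", collected_for="billing")
print(use(record, "billing"))      # allowed
# use(record, "marketing")         # would raise PermissionError
```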

References

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.

Gürses, Seda, and Joris Van Hoboken. “Privacy after the agile turn.” The Cambridge Handbook of Consumer Privacy. Cambridge Univ. Press, 2017. 1-29.

Hildebrandt, Mireille. “Location Data, Purpose Binding and Contextual Integrity: What’s the Message?.” Protection of Information and the Right to Privacy-A New Equilibrium?. Springer, Cham, 2014. 31-62.

Hildebrandt, Mireille. Smart technologies and the end(s) of law: novel entanglements of law and technology. Edward Elgar Publishing, 2015.

Koops, Bert-Jaap. “The trouble with European data protection law.” International Data Privacy Law 4.4 (2014): 250-261.

Nissenbaum, Helen. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press, 2009.

Tschantz, Michael Carl, Anupam Datta, and Jeannette M. Wing. “Formalizing and enforcing purpose restrictions in privacy policies.” 2012 IEEE Symposium on Security and Privacy. IEEE, 2012.

Response to Abdurahman

Abdurahman has responded to my response to her tweet about my paper with Bruce Haynes, and invited me to write a rebuttal. While I’m happy to do so–arguing with intellectuals on the internet is probably one of my favorite things to do–it is not easy to rebut somebody with whom you have so little disagreement.

Abdurahman makes a number of points:

  1. Our paper, “Racial categories in machine learning”, omits the social context in which algorithms are enacted.
  2. The paper ignores whether computational thinking “acolytes like [me]” should be in the position of determining civic decisions.
  3. That the ontological contributions of African American Vernacular English (AAVE) are not present in the FAT* conference and that constitutes a hermeneutic injustice. (I may well have misstated this point).
  4. The positive reception to our paper may be due to its appeal to people with a disingenuous, lazy, or uncommitted racial politics.
  5. “Participatory design” does not capture Abdurahman’s challenge of “peer” design. She has a different and more broadly encompassing set of concerns: “whose language is used, whose viewpoint and values are privileged, whose agency is extended, and who has the right to frame the “problem”.”
  6. That our paper misses the point about predictive policing, from the perspective of people most affected by disparities in policing. Machine learning classification is not the right frame of the problem. The problem is an unjust prison system and, more broadly the unequal distribution of power that is manifested in the academic discourse itself. “[T]he problem is framed wrongly — it is not just that classification systems are inaccurate or biased, it is who has the power to classify, to determine the repercussions / policies associated thereof and their relation to historical and accumulated injustice?”

I have to say that I am not a stranger to most of this line of thought and have great sympathy for the radical position expressed.

I will continue to defend our paper. Re: point 1, a major contribution of our paper was that it shed light on the political construction of race, especially race in the United States, which is absolutely part of “the social context in which algorithmic decision making is enacted”. Abdurahman must be referring to some other aspect of the social context. One problem we face as academic researchers is that the entire “social context” of algorithmic decision-making is the whole frickin’ world, and conference papers are about 12 pages or so. I thought we did a pretty good job of focusing on one, important and neglected aspect of that social context, the political formation of race, which as far as I know has never previously been addressed in a computer science paper. (I’ve written more about this point here).

Re: point 2, it’s true we omit a discussion of the relevance of computational thinking to civic decision-making. That is because its relevance is a safe assumption to make in a publication for that venue. I happen to agree with that assumption, which is why I worked hard to submit a paper to that conference. If I didn’t think computational thinking was relevant, I probably would be doing something else with my time. That said, I think it’s wildly flattering and inaccurate to say that I, personally, have any control over “civic decision-making”. I really don’t, and I’m not sure why you’d think that, except for the erroneous myth that computer science research is, in itself, political power. It isn’t; that’s a lie that the tech companies have told the world.

I am quite aware (re: point 3) that my embodied and social “location” is quite different from Abdurahman’s. For example, unlike Abdurahman, it would be utterly pretentious for me to posture or “front” with AAVE. I simply have no access to its ontological wisdom, and could not be the conduit of it into any discourse, academic or informal. I have and use different resources; I am also limited by my positionality like anybody else. Sorry.

“Woke” white liberals potentially liking our argument? (Re: point 4) Fair. I don’t think that means our argument is bad or that the points aren’t worth making.

Re: point 5: I must be forgiven for not understanding the full depth of Abdurahman’s methodological commitments on the basis of a single tweet. There are a lot of different design methodologies and their boundaries are disputed. I see now that the label of “participatory design” is not sufficiently critical or radical enough to capture what she has in mind. I’m pleased to see she is working with Tap Parikh on this, who has a lot of experience with critical/radical HCI methods. I’m personally not an expert on any of this stuff. I do different work.

Re: point 6: My personal opinions about the criminal justice system did not make it into our paper, which again was a focused scientific article trying to make a different point. Our paper was about how racial categories are formed, how they are unfair, and how a computational system designed for fairness might address that problem. I agree that this approach is unlikely to have much meaningful impact on the injustices of the cradle-to-prison system in the United States, the prison-industrial complex, or the like. Based on what I’ve heard so far, the problems there would be best solved by changing the ways judges are trained. I don’t have any say in that, though–I don’t have a law degree.

In general, while I see Abdurahman’s frustrations as valid (of course!), I think it’s ironic and frustrating that she targets our paper as an emblem of the problems with the FAT* conference, with computer science, and with the world at large. First, our paper was not a “typical” FAT* paper; it was a very unusual one, positioned to broaden the scope of what’s discussed there, motivated in part by my own criticisms of the conference the year before. It was also just one paper: there’s tons of other good work at that conference, and the conversation is quite broad. I expect the best solution to the problem is to write and submit different papers. But it may also be that other venues are better for addressing the problems raised.

I’ll conclude that many of the difficulties and misunderstandings that underlie our conversation are a result of a disciplinary collapse that is happening because of academia’s relationship with social media. Language’s meaning depends on its social context, and social media is notoriously a place where contexts collapse. It is totally unreasonable to argue that everybody in the world should be focused on what you think is most important. In general, I think battles over “framing” on the Internet are stupid, and that the fact that these kinds of battles have become so politically prominent is a big part of why our society’s politics are so stupid. The current political emphasis on the symbolic sphere is a distraction from more consequential problems of economic and social structure.

As I’ve noted elsewhere, one reason why I think Haynes’s view of race is refreshing (as opposed to a lot of what passes for “critical race theory” in popular discussion) is that it locates the source of racial inequality in structure–spatial and social segregation–and institutional power–especially, the power of law. In my view, this politically substantive view of race is, if taken seriously, more radical than one based on mere “discourse” or “fairness” and demands a more thorough response. Codifying that response, in computational thinking, was the goal of our paper.

This is a more concrete and specific way of dealing with the power disparities that are at the heart of Abdurahman’s critique. Vague discourse and intimations about “privilege”, “agency”, and “power”, without an account of the specific mechanisms of that power, are weak.