Digifesto

Williamson on four injunctions for good economics

Williamson (2008) (pdf) concludes with a description of four injunctions for doing good economics, which I will quote verbatim.

Robert Solow’s prescription for doing good economics is set out in three injunctions: keep it simple; get it right; make it plausible (2001, p. 111). Keeping it simple entails stripping away the inessentials and going for the main case (the jugular). Getting it right “includes translating economic concepts into accurate mathematics (or diagrams, or words) and making sure that further logical operations are correctly performed and verified” (Solow, 2001, p. 112). Making it plausible entails describing human actors in (reasonably) veridical ways and maintaining meaningful contact with the phenomena of interest (contractual or otherwise).

To this, moreover, I would add a fourth injunction: derive refutable implications to which the relevant (often microanalytic) data are brought to bear. Nicholas Georgescu-Roegen has a felicitous way of putting it: “The purpose of science in general is not prediction, but knowledge for its own sake,” yet prediction is “the touchstone of scientific knowledge” (1971, p. 37).

Why the fourth injunction? This is necessitated by the need to choose among alternative theories that purport to deal with the same phenomenon—say vertical integration—and (more or less) satisfy the first three injunctions. Thus assume that all of the models are tractable, that the logic of each hangs together, and that agreement cannot be reached as to what constitutes veridicality and meaningful contact with the phenomena. Does each candidate theory then have equal claims for our attention? Or should we be more demanding? This is where refutable implications and empirical testing come in: ask each would-be theory to stand up and be counted.

Why more economists are not insistent upon deriving refutable implications and submitting these to empirical tests is a puzzle. One possibility is that the world of theory is set apart and has a life of its own. A second possibility is that some economists do not agree that refutable implications and testing are important. Another is that some theories are truly fanciful and their protagonists would be discomfited by disclosure. A fourth is that the refutable implications of favored theories are contradicted by the data. And perhaps there are still other reasons. Be that as it may, a multiplicity of theories, some of which are vacuous and others of which are fanciful, is an embarrassment to the pragmatically oriented members of the tribe. Among this subset, insistence upon the fourth injunction—derive refutable implications and submit these to the data—is growing.

References

Williamson, Oliver E. “Transaction cost economics.” Handbook of new institutional economics. Springer, Berlin, Heidelberg, 2008. 41-65.

Discovering transaction cost economics (TCE)

I’m in the process of discovering transaction cost economics (TCE), the branch of economics devoted to the study of transaction costs, which include bargaining and search costs. Oliver Williamson, a professor at UC Berkeley, won the Nobel Prize for his work on TCE in 2009. I’m starting with the Williamson (2008) article (in the References), which seems like a late-stage overview of a large body of work.

Personally, this is yet another time when I’ve discovered that the answers, or the proper theoretical language for understanding something I am struggling with, have simply been Somewhere Else all along. Delight and frustration are pretty much evening each other out at this point.

Why is TCE so critical (to me)?

  • I think the real story about how the Internet and AI have changed things, which is the topic constantly reiterated in so many policy and HCI studies about platforms, is that they reduced search costs. However, it’s hard to make that case without a respectable theorization of search costs and how they matter to the economy. This, I think, is what transaction cost economics is about.
  • You may recall I wrote my doctoral dissertation about “data economics” on the presumption (which was, truly, presumptuous) that a proper treatment of the role of data in the economy had not yet been done. This was due mainly to the deficiencies of the discussion of information in neoclassical economic theory. But perhaps I was a fool, because it may be that this missing-link work on information economics has been in transaction cost economics all along! Interestingly, Pat Bajari, who is Chief Economist at Amazon, has done some TCE work, suggesting that like Hal Varian’s economics, this is stuff that actually works in a business context, which is more or less the epistemic standard you want economics to meet. (I would argue that economics should be seen, foremost, as a discipline of social engineering.)
  • A whole other line of research I’ve worked on over the years has been trying to understand the software supply chain, especially with respect to open source software (Benthall 2016; Benthall 2017). That’s a tricky topic because the ideas of “supply” and “chain” in that domain are both highly metaphorical and essentially inaccurate. Yet there are clearly profound questions to be found there about the relationships between sociotechnical organizations, their internal and external complexity, and so on, along with (and this is really what’s exciting about it) ample empirical basis to support arguments about them, just by the nature of the domain. Well, it turns out that the paradigmatic case for transaction cost economics is vertical integration, or the “make-or-buy” decision wherein a firm decides to (A) purchase an input on the open market, (B) produce it in-house, or (C) (and this is the case that transaction cost economics really tries to develop) engage with the supplier in a contract which creates an ongoing and secure relationship between them. Labor contracts are all, for reasons that I may go into later, of this (C) kind.

So, here comes TCE, with its roots in organization theory, Hayekian theories of the market, and Coase’s and other theories of the firm, and with its emphasis on the supply chain relation between sociotechnical organizations. And I HAVEN’T STUDIED IT. There is even solid work on its relation to privacy done by Whittington and Hoofnagle (2011; 2013). How did I not know about this? Again, if I were not so delighted, I would be livid.

Please expect a long series of posts as I read through the literature on TCE and try to apply it to various cases of interest.

References

Benthall, S. (2017) Assessing Software Supply Chain Risk Using Public Data. IEEE STC 2017 Software Technology Conference.

Benthall, S., Pinney, T., Herz, J., Plummer, K. (2016) An Ecological Approach to Software Supply Chain Risk Management. Proceedings of the 15th Python in Science Conference. p. 136-142. Ed. Sebastian Benthall and Scott Rostrup.

Hoofnagle, Chris Jay, and Jan Whittington. “Free: accounting for the costs of the internet’s most popular price.” UCLA L. Rev. 61 (2013): 606.

Whittington, Jan, and Chris Jay Hoofnagle. “Unpacking Privacy’s Price.” NCL Rev. 90 (2011): 1327.

Williamson, Oliver E. “Transaction cost economics.” Handbook of new institutional economics. Springer, Berlin, Heidelberg, 2008. 41-65.

For fairness in machine learning, we need to consider the unfairness of racial categorization

Pre-prints of papers accepted to the upcoming 2019 Fairness, Accountability, and Transparency (FAT*) conference are floating around Twitter. From the looks of it, many of these papers add a wealth of historical and political context, which I feel is a big improvement.

A noteworthy paper, in this regard, is Hutchinson and Mitchell’s “50 Years of Test (Un)fairness: Lessons for Machine Learning”, which puts recent ‘fairness in machine learning’ work in the context of very analogous debates from the 1960s and 1970s that concerned the use of testing that could be biased due to cultural factors.

I like this paper a lot, in part because it is very thorough and in part because it tees up a line of argument that’s dear to me. Hutchinson and Mitchell raise the question of how to properly think about fairness in machine learning when the protected categories invoked by nondiscrimination law are themselves social constructs.

Some work on practically assessing fairness in ML has tackled the problem of using race as a construct. This echoes concerns in the testing literature that stem back to at least 1966: “one stumbles immediately over the scientific difficulty of establishing clear yardsticks by which people can be classified into convenient racial categories” [30]. Recent approaches have used Fitzpatrick skin type or unsupervised clustering to avoid racial categorizations [7, 55]. We note that the testing literature of the 1960s and 1970s frequently uses the phrase “cultural fairness” when referring to parity between blacks and whites.

They conclude that this is one of the areas where there can be a lot more useful work:

This short review of historical connections in fairness suggest several concrete steps forward for future research in ML fairness: Diving more deeply into the question of how subgroups are defined, suggested as early as 1966 [30], including questioning whether subgroups should be treated as discrete categories at all, and how intersectionality can be modeled. This might include, for example, how to quantify fairness along one dimension (e.g., age) conditioned on another dimension (e.g., skin tone), as recent work has begun to address [27, 39].

This is all very cool to read, because this is precisely the topic that Bruce Haynes and I address in our FAT* paper, “Racial categories in machine learning” (arXiv link). The problem we confront in this paper is that the racial categories we are used to using in the United States (White, Black, Asian) originate in the white supremacy that was enshrined in the Constitution at its founding and have been perpetuated since then through the legal system (with some countervailing activity during the Civil Rights Movement, for example). This puts “fair machine learning” researchers in a bind: either they can use these categories, which have always been about perpetuating social inequality, or they can ignore the categories and reproduce the patterns of social inequality that prevail in fact because of the history of race.

In the paper, we propose a third option. First, rather than reify racial categories, we propose breaking race down into the kinds of personal features that get inscribed with racial meaning. Phenotype properties like skin type and ocular folds are one such set of features. Another set consists of events that indicate position in social class, such as being arrested or receiving welfare. Another is facts about the national and geographic origin of one’s ancestors. These facts about a person are clearly relevant to how racial distinctions are made, but are themselves more granular and multidimensional than race.

The next step is to detect race-like categories by looking at who is segregated from whom. We propose an unsupervised machine learning technique that works with the distribution of the phenotype, class, and ancestry features across spatial tracts (as when considering where people physically live) or across a social network (as when considering people’s professional networks, for example). Principal component analysis can identify which race-like dimensions capture the greatest amounts of spatial and social separation. We hypothesize that these dimensions will encode the ways racial categorization has shaped the social structure in tangible ways; these effects may include both politically recognized forms of discrimination and forms of discrimination that have not yet been surfaced. These dimensions can then be used to classify people in race-like ways as input to fairness interventions in machine learning.
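To make the proposal more concrete, here is a minimal sketch of the kind of unsupervised analysis we have in mind. It is not the implementation from the paper; the feature matrix, tract assignments, and dimensions below are synthetic placeholders.

```python
# Minimal sketch (not the paper's implementation): treat "race-like"
# dimensions as the principal components of between-tract variation in
# phenotype, class, and ancestry features. All data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical individual-level features: rows are people, columns are
# granular features (e.g., skin tone, arrest history, ancestral region).
n_people, n_features, n_tracts = 5000, 12, 100
people = rng.normal(size=(n_people, n_features))
tract_of = rng.integers(0, n_tracts, size=n_people)  # where each person lives

# Aggregate by spatial tract: each row is a tract's mean feature profile.
tract_profiles = np.vstack(
    [people[tract_of == t].mean(axis=0) for t in range(n_tracts)]
)

# Principal components of the tract profiles are candidate race-like
# dimensions: the directions along which tracts are most separated.
pca = PCA(n_components=2)
pca.fit(tract_profiles)

# Project individuals onto those dimensions; the scores, rather than
# reified racial categories, would feed a downstream fairness intervention.
race_like_scores = pca.transform(people)
print(race_like_scores.shape)  # (5000, 2)
```

The same structure would apply to a social network instead of spatial tracts, with network communities or ego networks taking the place of tracts.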

A key part of our proposal is that race-like classification depends on the empirical distribution of persons in physical and social space, and so is not fixed. This operationalizes the way that race is socially and politically constructed without reifying the categories in terms that reproduce their white supremacist origins.

I’m quite stoked about this research, though obviously it raises a lot of serious challenges in terms of validation.

Is competition good for cybersecurity?

A question that keeps coming up in various forms, for example in response to recent events around the ‘trade war’ between the U.S. and China and its impact on technology companies, is whether market competition is good or bad for cybersecurity.

Here is a simple argument for why competition could be good for cybersecurity: the security of a technical product is a positive quality, something consumers want. Market competition is what gets producers to make higher-quality products at lower cost. Therefore, competition is good for security.

Here is an argument for why competition could be bad for cybersecurity: security is hard for any consumer to understand, and since most won’t assess it, we have an information asymmetry and therefore a ‘market for lemons’ kind of market failure. Therefore, competition is bad for security. It would be better to have a well-regulated monopoly.

This argument echoes, though it doesn’t exactly parallel, some of the arguments in Pasquale’s work on Hamiltonians and Jeffersonians in technology platform regulation.
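To illustrate the unraveling logic of the second argument above, here is a toy simulation with entirely made-up parameters; it is a sketch of the Akerlof-style intuition, not a model of any actual market.

```python
# Toy sketch of the 'market for lemons' intuition for security: if buyers
# cannot observe security quality, they only pay for the average quality
# on offer, and high-quality (high-cost) vendors drop out. All numbers
# are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
quality = rng.uniform(0, 1, size=200)   # each vendor's hidden security quality
cost = 0.8 * quality                    # higher security is costlier to build
active = np.ones(quality.shape, dtype=bool)

for round_ in range(20):
    price = quality[active].mean()      # buyers pay for expected (average) quality
    active &= cost <= price             # vendors whose cost exceeds the price exit
    if not active.any():
        print(f"round {round_}: market unravels completely")
        break
    print(f"round {round_}: average security on offer = {quality[active].mean():.3f}")
```

The average security of what remains on the market falls each round, which is the sense in which unobservable quality can make competition bad for security.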

“the privatization of public functions”

An emerging theme from the conference on Trade Secrets and Algorithmic Systems was that legal scholars have become concerned about the privatization of public functions. For example, the use of proprietary risk assessment tools instead of the discretion of judges who are supposed to be publicly accountable is a problem. More generally, use of “trade secrecy” in court settings to prevent inquiry into software systems is bogus and moves more societal control into the realm of private ordering.

Many remedies were proposed. Most involved some kind of disclosure to, and audit by, experts. The most extreme form of disclosure is making the software and, where it’s a matter of public record, the training data publicly available.

It is striking to me to be encountering the call for government use of open source systems because…this is not a new issue. The conversation about federal use of open source software was alive and well over five years ago. Then, the arguments were about vendor lock-in; now, they are about accountability of AI. But the essential problem of whether core governing logic should be available to public scrutiny, and the effects of its privatization, have been the same.

If we are concerned with the reliability of a closed and large-scale decision-making process of any kind, we are dealing with problems of credibility, opacity, and complexity. The prospects of an efficient market for these kinds of systems are dim. These same market conditions bear on the sustainability of open source infrastructure. Failures in sustainability manifest as software vulnerabilities, which are one of the key reasons why governments are warned against OSS now, though measuring and comparing OSS and proprietary software vulnerabilities is methodologically fraught.

Trade secrecy, “an FDA for algorithms”, and software bills of materials (SBOMs) #SecretAlgos

At the Conference on Trade Secrets and Algorithmic Systems at NYU today, the target of most critiques is the use of trade secrecy by proprietary technology providers to prevent courts and the public from seeing the inner workings of algorithms that determine people’s credit scores, health care, criminal sentencing, and so on. The overarching theme is that sometimes companies will use trade secrecy to hide the ways that their software is bad, and that that is a problem.

In one panel, the question came up of whether an “FDA for Algorithms” is on the table, referring to the Food and Drug Administration’s approval process for pharmaceuticals. It was not dealt with in much depth, which is too bad, because it is a nice example of how government oversight of potentially dangerous technology is managed in a way that respects trade secrecy.

According to this article, when filing for FDA approval, a company can declare some of its ingredients to be trade secrets. The upshot is that those trade secrets are not subject to FOIA requests. However, the ingredients are still considered by the FDA when approval is granted.

It so happens that in the cybersecurity policy conversation (more so than in privacy) the question of openness of “ingredients” to inspection has been coming up in a serious way. NTIA has been hosting multistakeholder meetings about standards and policy around Software Component Transparency. In particular, it is encouraging standardization of Software Bills of Materials (SBOMs), such as the Linux Foundation’s Software Package Data Exchange (SPDX). SPDX (and SBOMs more generally) describe the “ingredients” in a software package at a coarser level of detail than the full source code, but at a level specific enough to be useful for security audits.

It’s possible that a similar method could be used for algorithmic audits with fairness (i.e., nondiscrimination compliance) and privacy (i.e., information sharing with third parties) in mind. Particular components could be audited (perhaps in a way that protects trade secrecy), and then those components could be listed as “ingredients” by other vendors.
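Here is a purely hypothetical sketch of that idea. The component names, versions, and audit registry are invented, and this is the generic “ingredients plus attestations” pattern rather than SPDX itself.

```python
# Hypothetical sketch: check a product's disclosed "ingredients" against a
# registry of component-level fairness/privacy audit results, without ever
# seeing the vendor's source code. All names and results are invented.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    version: str

# A minimal SBOM-like ingredient list a vendor might disclose.
sbom = [
    Component("example-scoring-model", "2.3"),
    Component("example-geo-enrichment", "1.0"),
]

# Attestations published by third-party auditors (hypothetical): which
# (name, version) pairs passed fairness and privacy review.
audited = {
    ("example-scoring-model", "2.3"): {"fairness": True, "privacy": True},
    ("example-geo-enrichment", "1.0"): {"fairness": True, "privacy": False},
}

def audit_report(components):
    """Flag components lacking a clean fairness/privacy attestation."""
    for c in components:
        result = audited.get((c.name, c.version))
        if result is None:
            print(f"{c.name} {c.version}: no audit on record")
        else:
            failed = [k for k, ok in result.items() if not ok]
            status = "ok" if not failed else "failed: " + ", ".join(failed)
            print(f"{c.name} {c.version}: {status}")

audit_report(sbom)
```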

The paradox of ‘data markets’

We often hear that companies are “selling our data”, or that we are “paying for services” with our data. Data brokers literally buy and sell data about people. There are also expensive data sources and data sets. There is, undoubtedly, one or more data markets.

We know that classically, perfect competition in markets depends on perfect information. Buyers and sellers on the market need to have equal and instantaneous access to information about utility curves and prices in order for the market to price things efficiently.

Since the bread and butter of the data market is information asymmetry, we know that data markets can never be perfectly competitive. If they were, the data market would cease to exist, because the perfect information condition would entail that there is nothing left to buy and sell.
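One rough way to formalize that step, in my own notation rather than anything drawn from a cited source:

```latex
% Sketch (notation mine). Let $I_i$ be agent $i$'s information set and
% $a_i^*(I)$ the optimal action given information $I$. The value of a
% data set $D$ to agent $i$ is the gain in expected utility it enables:
\[
  V_i(D) = \mathbb{E}\big[u_i\big(a_i^*(I_i \cup D)\big)\big]
         - \mathbb{E}\big[u_i\big(a_i^*(I_i)\big)\big].
\]
% Under perfect information, $I_i$ already contains everything relevant,
% so $I_i \cup D = I_i$ and $V_i(D) = 0$ for every agent: no one is
% willing to pay for data, and the data market has nothing to trade.
```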

Data markets therefore have to be imperfectly competitive. But since these are the markets that perfect information in other markets might depend on, this imperfection is viral. The vicissitudes of the data market are the vicissitudes of the economy in general.

The upshot is that the challenges of information economics are not only those that appear in special sectors like insurance markets. They are at the heart of all economic activity, and there are no equilibrium guarantees.

The Crevasse: a meditation on accountability of firms in the face of opacity as the complexity of scale

To recap:

(A1) Beneath corporate secrecy and user technical illiteracy, a fundamental source of opacity in “algorithms” and “machine learning” is the complexity of scale, especially scale of data inputs. (Burrell, 2016)

(A2) The opacity of the operation of companies using consumer data makes those consumers unable to engage with them as informed market actors. The consequence has been a “free fall” of market failure (Strandburg, 2013).

(A3) Ironically, this “free” fall has been “free” (zero price) for consumers; they appear to get something for nothing without knowing what has been given up or changed as a consequence (Hoofnagle and Whittington, 2013).

Comments:

(B1) The above line of argument conflates “algorithms”, “machine learning”, “data”, and “tech companies”, as is common in the broad discourse. That this conflation is possible speaks to the ignorance of the scholarly position on these topics, an ignorance that is implied by corporate secrecy, technical illiteracy, and complexity of scale simultaneously. We can, if we choose, distinguish between these factors analytically. But because, from the standpoint of the discourse, the internals are unknown, the general indication of a ‘black box’ organization is intuitively compelling.

(B1a) Giving in to the lazy conflation is an error because it prevents informed and effective praxis. If we do not distinguish between a corporate entity and its multiple internal human departments and technical subsystems, then we may confuse ourselves into thinking that a fair and interpretable algorithm can give us a fair and interpretable tech company. Nothing about the former guarantees the latter because tech companies operate in a larger operational field.

(B2) Opacity as the complexity of scale, a property of the functioning of machine learning algorithms, is also a property of the functioning of sociotechnical organizations more broadly. Universities, for example, are often opaque to themselves because of their own internal complexity and scale. This is because the mathematics governing opacity as a function of complexity and scale are the same in both technical and sociotechnical systems (Benthall, 2016).

(B3) If we discuss the complexity of firms, as opposed to the complexity of algorithms, we should conclude that firms that are complex due to scale of operations and data inputs (including number of customers) will be opaque and will therefore have a strategic advantage in the market over less complex market actors (consumers) with tighter bounds on rationality.

(B4) In other words, big, complex, data rich firms will be smarter than individual consumers and outmaneuver them in the market. That’s not just “tech companies”. It’s part of the MO of every firm to do this. Corporate entities are “artificial general intelligences” and they compete in a complex ecosystem in which consumers are a small and vulnerable part.

Twist:

(C1) Another source of opacity in data is that the meaning of data comes from the causal context that generates it. (Benthall, 2018)

(C2) Learning causal structure from observational data is hard, both in terms of being data-intensive and in terms of computational complexity (NP-hard) (cf. Friedman et al., 1998). A small illustration of the size of the search space appears after this list.

(C3) Internal complexity, for a firm, is not sufficient for it to be “all-knowing” about the data that is coming into it; the firm has epistemic challenges of secrecy, illiteracy, and scale with respect to external complexity.

(C4) This is why many applications of machine learning are overrated and so many “AI” products kind of suck.

(C5) There is, in fact, an epistemic crevasse between all autonomous entities, each containing its own complexity and constituting a larger ecological field that is the external/being/environment for any other autonomy.
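As the illustration promised in (C2): the number of possible causal structures (directed acyclic graphs) over n variables, which is the space a structure learner must implicitly search, grows super-exponentially. The recurrence is Robinson’s standard counting formula; the script is just my own illustration of the blow-up.

```python
# Count the directed acyclic graphs on n labeled nodes (Robinson's
# recurrence), i.e., the size of the search space for causal structure
# learning from observational data.
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def n_dags(n: int) -> int:
    if n == 0:
        return 1
    return sum(
        (-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * n_dags(n - k)
        for k in range(1, n + 1)
    )

for n in range(1, 9):
    print(n, n_dags(n))
# 1, 3, 25, 543, 29281, ... -- astronomically large well before n reaches
# the number of variables a firm's data actually contains.
```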

To do:

The most promising direction based on this analysis is a deeper read into transaction cost economics as a ‘theory of the firm’. This is where a formalization of the idea that what the Internet changed most is search costs (a kind of transaction cost) should be found.

It would be nice if those insights could be expressed in the mathematics of “AI”.

There’s still a deep idea in here that I haven’t yet found the articulation for, something to do with autopoiesis.

References

Benthall, Sebastian. (2016) The Human is the Data Science. Workshop on Developing a Research Agenda for Human-Centered Data Science. Computer Supported Cooperative Work 2016. (link)

Benthall, Sebastian. (2018) Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics. Ph.D. dissertation, University of California, Berkeley. Advisors: John Chuang and Deirdre Mulligan.

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.

Friedman, Nir, Kevin Murphy, and Stuart Russell. “Learning the structure of dynamic probabilistic networks.” Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., 1998.

Hoofnagle, Chris Jay, and Jan Whittington. “Free: accounting for the costs of the internet’s most popular price.” UCLA L. Rev. 61 (2013): 606.

Strandburg, Katherine J. “Free fall: The online market’s consumer preference disconnect.” U. Chi. Legal F. (2013): 95.

open source sustainability and autonomy, revisited

Some recent chats with Chris Holdgraf and colleagues at NYU interested in “critical digital infrastructure” have gotten me thinking again about the sustainability and autonomy of open source projects.

I’ll admit to having had naive views about this topic in the past. Certainly, doing empirical data science work on open source software projects has given me a firmer perspective on things. Here are what I feel are the hardest-earned insights on the matter:

  • There is tremendous heterogeneity in open source software projects. Almost all quantitative features of these projects fall in log-normal distributions (a sketch of how one might check this claim appears after this list). This suggests that the keys to open source software success are myriad and largely exogenous (how the technology fits in the larger ecosystem, how outside funding and recognition are accomplished, …) rather than endogenous (community policies, etc.). While many open source projects start as hobby or unpaid academic projects, those that go on to be successful find one or more funding sources. This funding is an exogenous factor.
  • The most significant exogenous factor in an open source software project’s success is the industrial organization of private tech companies. Developing an open technology is part of the strategic repertoire of these companies: for example, to undermine the position of a monopolist, developing an open source alternative decreases barriers to market entry and allows for a more competitive field in that sector. Another example: Google funded Mozilla for so long arguably to deflect antitrust action over Google Chrome.
  • There is some truth to Chris Kelty’s idea of open source communities as recursive publics, cultures that have autonomy that can assert political independence at the boundaries of other political forces. This autonomy comes from: the way developers of OSS get specific and valuable human capital in the process of working with the software and their communities; the way institutions begin to depend on OSS as part of their technical stack, creating an installed base; and how many different institutions may support the same project, creating competition for the scarce human capital of the developers. Essentially, at the point where the software and the skills needed to deploy it effectively and the community of people with those skills is self-organized, the OSS community has gained some economic and political autonomy. Often this autonomy will manifest itself in some kind of formal organization, whether a foundation, a non-profit, or a company like Redhat or Canonical or Enthought. If the community is large and diverse enough it may have multiple organizations supporting it. This is in principle good for the autonomy of the project but may also reflect political tensions that can lead to a schism or fork.
  • In general, since OSS development is internally most often very fluid, with the primary regulatory mechanism being the fork, the shape of OSS communities is more determined by exogenous factors than endogenous ones. When exogenous demand for the technology rises, the OSS community can find itself with a ‘surplus’, which can be channeled into autonomous operations.
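As the sketch promised in the first bullet, here is one way a log-normality claim about a project metric could be checked; the data below is synthetic, and with real data one would substitute observed per-project values (e.g., commit counts).

```python
# Sketch: test whether a per-project metric (here, synthetic commit counts)
# is plausibly log-normal by fitting a normal to the log-values and running
# a Kolmogorov-Smirnov test against the fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
commit_counts = rng.lognormal(mean=4.0, sigma=1.5, size=1000)  # stand-in data

log_counts = np.log(commit_counts)
mu, sigma = log_counts.mean(), log_counts.std(ddof=1)
ks_stat, p_value = stats.kstest(log_counts, "norm", args=(mu, sigma))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
# Caveat: estimating mu and sigma from the same sample makes this nominal
# p-value optimistic (a Lilliefors-style correction would be stricter).
```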

What proportion of data protection violations are due to “dark data” flows?

“Data protection” refers to the aspect of privacy that is concerned with the use and misuse of personal data by those that process it. Though the details are widely debated, scholars continue to converge (e.g.) on an ideal of data protection consisting of alignment between the purposes for which the data processor will use the data and the expectations of the user, along with collection limitations that reduce exposure to misuse. Through its extraterritorial enforcement mechanism, the GDPR has threatened to make these standards global.

The implication of these trends is that there will be a global field of data flows regulated by these kinds of rules. Many of the large and important actors that process user data can be held accountable to the law. Privacy violations by these actors will be due to a failure to act within the bounds of the law that applies to them.

On the other hand, there is also cybercrime, an economy of data theft and information flows that exists “outside the law”.

I wonder what proportion of data protection violations are due to dark data flows: flows of personal data that are handled by organizations operating outside of any effective regulation.

I’m trying to draw an analogy to a global phenomenon that I know little about but which strikes me as perhaps more pressing than data protection: the interrelated problems of money laundering, offshore finance, and dark money contributions to election campaigns. While this surely oversimplifies the issue, my impression is that the network of financial flows can be divided into those that are more and those that are less effectively regulated by global law. Wealth seeks out the opportunities in the dark corners.

How much personal data flows in these dark networks? And how much is it responsible for privacy violations around the world? Versus how much is data protection effectively in the domain of accountable organizations (that may just make mistakes here and there)? Or is the dichotomy false, with truly no firm boundary between licit and illicit data flow networks?