Digifesto

Tag: informational capitalism

Regulating infoglut?

In the 1920s, many people were attracted to investing in the stock market for the first time. It was a time when fortunes were made and lost, but more were made than lost, and on average investors saw large returns. However, the growth in the value of stocks was driven in part, and especially in the latter half of the decade, by debt. The U.S. Federal Reserve chose to lower interest rates, making it easier to borrow money. With the interest rates on loans lower than the rates of return on stocks, everybody from households to brokers began to take on debt to reinvest in the stock market (Brooks, 1999).
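To make the leverage arithmetic concrete, here is a minimal Python sketch, with made-up numbers rather than historical figures, of how borrowing at a rate below the market's return amplifies gains on an investor's own capital, and how the same leverage amplifies losses when the market turns.

```python
# Minimal sketch of leveraged investing (illustrative numbers only, not historical data).

def leveraged_return(market_return, borrow_rate, leverage):
    """Return on the investor's own capital, where `leverage` is the ratio of
    total position size to own capital (leverage=1 means no borrowing)."""
    borrowed_fraction = leverage - 1
    return leverage * market_return - borrowed_fraction * borrow_rate

# Borrowing at 5% to hold a position returning 20% multiplies gains at 3x leverage...
print(leveraged_return(0.20, 0.05, 3))   # 0.50: a 50% gain on own capital
# ...but the same leverage turns a 20% market decline into a ruinous loss.
print(leveraged_return(-0.20, 0.05, 3))  # -0.70: a 70% loss on own capital
```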

After the crash of ’29, which left the economy decimated, there was a reckoning, leading to the Securities Act of 1933 and the Securities Exchange Act of 1934. The latter created the Securities and Exchange Commission (SEC) and laid the groundwork for the more trusted financial institutions we have today.

Cohen (2016) writes about a more current economic issue. As the economy shifts from industrial capitalism to informational capitalism, the infrastructural affordances of modern computing and networking have invalidated the background logic of how many regulations are supposed to work. For example, anti-discrimination regulation is designed to prevent decisions from being made based on protected or sensitive attributes of individuals. However, those regulations made the most sense when personal information was relatively scarce. Today, when individual activity is highly instrumented by pervasive computing infrastructure, we suffer from infoglut: more information than is good for us, either as individuals or as a society. As a consequence, proxies for protected attributes are readily available to decision-makers, and indeed are difficult to weed out of a machine learning system even when market actors fully intend to do so (see Datta et al., 2017). In other words, the structural conditions that enable infoglut erode rights that we took for granted in the absence of today's networking and computing systems.
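As a toy illustration of the proxy problem (my own synthetic example, not the method of Datta et al.), the following sketch trains a model that never sees the protected attribute, yet still produces predictions that differ sharply across groups, because a correlated proxy feature stands in for it. All feature names and numbers here are made up.

```python
# Minimal synthetic illustration of how a proxy feature can reintroduce a
# protected attribute even after that attribute is excluded from the data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

protected = rng.integers(0, 2, n)                     # protected attribute (hypothetical)
proxy = protected + rng.normal(0, 0.3, n)             # e.g. a location feature correlated with it
noise = rng.normal(0, 1, n)                           # an unrelated feature
outcome = (protected + rng.normal(0, 0.5, n) > 0.5).astype(int)  # historical decisions skewed by the attribute

X = np.column_stack([proxy, noise])                   # the protected attribute itself is excluded
model = LogisticRegression().fit(X, outcome)
pred = model.predict(X)

# Despite excluding the protected attribute, predictions differ sharply across groups.
print("positive rate, group 0:", pred[protected == 0].mean())
print("positive rate, group 1:", pred[protected == 1].mean())
```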

In an ongoing project with Salome Viljoen, we are examining the parallels between the financial economy and the data economy. These economies are, of course, not fully distinct. However, they are distinguished in part by how they are regulated: the financial economy has over a century of mature regulation defining it and reducing systemic risks such as those resulting from a debt-financed speculative bubble; the data economy has emerged only recently as a major source of profit, with perhaps unforeseen systemic risks.

We have an intuition that we would like to pin down more carefully as we work through these comparisons: that there is something similar about the speculative bubbles that led to the Great Depression and today’s infoglut. In a similar vein to prior work that uses regulatory analogy to motivate new thinking about data regulation (Hirsch, 2013; Froomkin, 2015) and professional codes (Stark and Hoffmann, 2019), we are interested in how financial regulation may be a precedent for regulation of the data economy.

However, we have reason to believe that the connections between finance and personal data are not merely metaphorical. Indeed, finance is an area with well-developed sectoral privacy laws that guarantee the confidentiality of personal data (Swire, 2003); it is also the case that financial institutions are one of the many ways personal data originating from non-financial contexts is monetized. We do not have to get poetic to see how these assets are connected; they are related as a matter of fact.

What is more elusive, and at this point only a hypothesis, is that there is a valid sense in which the systemic risks of infoglut can be conceptually understood using tools similar to those used to understand financial risk. Here I maintain an ambition: that systemic risk due to infoglut may be understood using the tools of macroeconomics and hence internalized via technocratic regulatory mechanisms. This would be a departure from Cohen (2016), who gestures more favorably towards “uncertainty”-based regulation that does not attempt probabilistic expectation but rather involves tools such as threat modeling, as used in some cybersecurity practices.
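To make the contrast concrete, here is a small illustrative sketch, with entirely made-up scenarios and numbers, of the difference between the risk framing, where probabilities are assumed known and a regulator can price expected loss, and the uncertainty framing, where one can only enumerate threat scenarios and plan against the worst case.

```python
# Sketch of the risk-vs-uncertainty distinction (illustrative numbers, not a policy model).
# Under "risk" we know outcome probabilities and can price expected loss;
# under "uncertainty" we only enumerate threat scenarios and plan for the worst case.

scenarios = {                       # hypothetical losses from infoglut-related harms
    "minor data leak": 1.0,
    "mass re-identification": 20.0,
    "systemic manipulation": 100.0,
}

# Risk framing: probabilities are assumed known, so a regulator can internalize E[loss].
probabilities = {"minor data leak": 0.10, "mass re-identification": 0.01, "systemic manipulation": 0.001}
expected_loss = sum(probabilities[s] * loss for s, loss in scenarios.items())

# Uncertainty framing (threat modeling): no credible probabilities, so plan against the worst case.
worst_case_loss = max(scenarios.values())

print(f"expected loss (risk framing): {expected_loss:.2f}")
print(f"worst-case loss (threat-model framing): {worst_case_loss:.2f}")
```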

References

Brooks, J. (1999). Once in Golconda: A true drama of Wall Street 1920-1938. John Wiley & Sons.

Cohen, J. E. (2016). The regulatory state in the information age. Theoretical Inquiries in Law, 17(2), 369-414.

Datta, A., Fredrikson, M., Ko, G., Mardziel, P., & Sen, S. (2017, October). Use privacy in data-driven systems: Theory and experiments with machine learnt programs. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1193-1210).

Froomkin, A. M. (2015). Regulating Mass Surveillance as Privacy Pollution: Learning from Environmental Impact Statements. U. Ill. L. Rev., 1713.

Hirsch, D. D. (2013). The glass house effect: Big Data, the new oil, and the power of analogy. Me. L. Rev., 66, 373.

Stark, L., & Hoffmann, A. L. (2019). Data is the new what? Popular metaphors & professional ethics in emerging data culture.

Swire, P. P. (2003). Efficient confidentiality for privacy, security, and confidential business information. Brookings-Wharton Papers on Financial Services, 2003(1), 273-310.

Surden, H. (2007). Structural rights in privacy. SMUL Rev.60, 1605.

A note towards formal modeling of informational capitalism

Cohen’s Between Truth and Power (2019) is enormously clarifying on all issues of the politics of AI, etc.

“The data refinery is only secondarily an apparatus for producing knowledge; it is principally an apparatus for producing wealth.”

– Julie Cohen, Between Truth and Power, 2019

Cohen lays out the logic of informational capitalism in comprehensive detail. Among her authoritatively argued points is that scholarly consideration of platforms, privacy, data science, etc. has focused on the scientific and technical accomplishments undergirding the new information economy, but that really its key institutions, the platform and the data refinery, are first and foremost legal and economic institutions. They exist as businesses; they are designed to “extract surplus”.

I am deeply sympathetic to this view. I’ve argued before that the ethical and political questions around AI are best looked at by considering computational institutions (1, 2). I think getting to the heart of the economic logic is the best way to understand the political and moral concerns raised by information capitalism. Many have argued that there is something institutionally amiss about informational capitalism (e.g. Strandburg, 2013); a recent CfP went so far as to say that the current market for data and AI is not “functional or sustainable.”

As far as I’m concerned, Cohen (2019) is the new gold standard for qualitative analysis of these issues. It is thorough. It is, as far as I can tell, correct. It is a dense and formidable work; I’m not through it yet. So while it may contain all the answers, I haven’t read them yet, which leaves me free to continue thinking about how I would go about solving these problems myself.

My perspective is this: it will require social scientific progress to crack the right institutional design to settle informational capitalism in a satisfying way. Because computation is really at the heart of the activity of these economic institutions, computation will need to be included within the social scientific models in question. But this is not something particularly new; rather, it’s implicitly already how things are done in many “hard” social science disciplines. Epstein (2006) draws connections between classical game-theoretic modeling and agent-based simulation, arguing that “the Computer is not the point”: rather, the point is that the models are defined in terms of mathematical equations, which by the foundational results of computing are amenable to being simulated or solved computationally. Hence, we have already seen a convergence of methods from “AI” into computational economics (Carroll, 2006) and sociology (Castelfranchi, 2001).
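To illustrate Epstein's point that the computer is not the point, here is a toy example of my own (not taken from Epstein): the same diffusion-of-adoption model written once as a mean-field recurrence and once as an agent-based simulation. The two produce essentially the same trajectory, because they are the same model.

```python
# Toy illustration: one model, two computational treatments.
import random

r, steps, n_agents = 0.5, 15, 10_000

# 1. Equation form: x_{t+1} = x_t + r * x_t * (1 - x_t)
x = 0.01
equation_path = []
for _ in range(steps):
    x = x + r * x * (1 - x)
    equation_path.append(x)

# 2. Agent-based form: each non-adopter adopts with probability r * (current adoption share)
random.seed(0)
adopted = [random.random() < 0.01 for _ in range(n_agents)]
agent_path = []
for _ in range(steps):
    share = sum(adopted) / n_agents
    adopted = [a or (random.random() < r * share) for a in adopted]
    agent_path.append(sum(adopted) / n_agents)

# The simulated adoption share tracks the analytic recurrence closely.
for t, (eq, ab) in enumerate(zip(equation_path, agent_path)):
    print(f"t={t:2d}  equation: {eq:.3f}  agents: {ab:.3f}")
```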

This position is entirely consistent with Abebe et al.’s analysis of “roles for computing in social change” (2020). In that paper, the authors are concerned with “social problems of justice and equity”, loosely defined, which can potentially be addressed through “social change”. They defend the use of technical analysis and modeling as playing a positive role even according to the particular politics of the Fairness, Accountability, and Transparency research community. Abebe et al. address backlashes against uses of formalism such as that of Selbst et al. (2019); this rebuttal was necessary given the disciplinary fraughtness of the tech policy discourse.

What I am proposing in this note is something ever so slightly different. First, I am aiming at a different political problematic than the “social problems of justice and equity”. I’m trying to address the economic problems raised by Cohen’s analysis, such as the dysfunctionality of the data market. Second, I’d like to distinguish between “computing” as the method of solving the equations of a mathematical model and “computing” as an element of the object of study, the computational institution (or platform, or data refinery, etc.). Indeed, it is the wonder and power of computation that it is possible to model one computational process within another. This point may be confusing for lawyers and anthropologists, but it should be clear to computational social scientists when we are talking about one or the other, though our scientific language has not settled on a lexicon for this yet.

The next step for my own research here is to draw up a mathematical description of informational capitalism, or at least of the stylized facts about it implied by Cohen’s arguments. This is made paradoxically both easier and more difficult by the fact that much of this work has already been done. A simple search of the literature on “search costs”, “network effects”, “switching costs”, and so on brings up a lot of fine work. The economists have not been asleep all this time. But then why has it taken so long for the policy critiques of informational capitalism, including those around algorithmic opacity, to emerge?
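Before turning to that question: to give a flavor of the stylized facts I have in mind, here is a toy sketch of my own (not drawn from any particular paper) of how network effects combined with switching costs can keep a market locked in to an incumbent platform even when a rival offers higher intrinsic quality. All parameters are made up.

```python
# Stylized two-platform market with network effects and switching costs (toy model).
quality = {"A": 1.00, "B": 1.05}   # platform B is slightly better on intrinsic quality
network_effect = 2.0               # value of each point of market share to users
switching_cost = 0.5               # cost a user pays to move to the other platform
flow = 0.2                         # fraction of would-be switchers who move each period
share = {"A": 0.7, "B": 0.3}       # platform A starts with an installed-base advantage

for t in range(10):
    utility = {p: quality[p] + network_effect * share[p] for p in share}
    # Users switch only if the other platform's utility exceeds their current one
    # by more than the switching cost.
    if utility["B"] - utility["A"] > switching_cost:
        share["A"] -= flow * share["A"]
        share["B"] = 1 - share["A"]
    elif utility["A"] - utility["B"] > switching_cost:
        share["B"] -= flow * share["B"]
        share["A"] = 1 - share["B"]
    print(f"t={t}: share A={share['A']:.2f}, B={share['B']:.2f}")
# Despite B's higher quality, A's installed base plus switching costs tip the market toward A.
```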

I have two conflicting hypotheses, one quite gloomy and the other exciting. The gloomy view is that I’m simply in the wrong conversation. The correct conversation, the one that has already adequately captured the nuances of the data economy, is happening elsewhere, maybe at an economics conference in Zurich or something, and this discursive field of lawyers and computer scientists and ethicists is effectively twiddling its thumbs, working on poorly framed problems because it hasn’t caught up with, and can’t catch up with, the other discourse.

The exciting view is that the problem of synthesizing the fragments of a solution from the various economics literatures with the most insightful legal analyses is an unsolved problem ripe for attention.

Edit: It took me a few days, but I’ve found the correct conversation. It is Ross Anderson’s Workshop on the Economics of Information Security. That makes perfect sense: Ross Anderson is a brilliant thinker in that arena. Naturally, as one finds, all the major results in this space are 10-20 years old. Quite probably, if I had found this one web page a couple of years ago, my dissertation would have been written much differently, and not so amateurishly.

It is supremely ironic to me that, in an economy characterized by a reduction in search costs, my search for answers in information economics has been so costly.

References

Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., & Robinson, D. G. (2020, January). Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 252-260).

Castelfranchi, C. (2001). The theory of social functions: challenges for computational social science and multi-agent learning. Cognitive Systems Research, 2(1), 5-38.

Carroll, C. D. (2006). The method of endogenous gridpoints for solving dynamic stochastic optimization problems. Economics Letters, 91(3), 312-320.

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, USA.

Epstein, J. M. (2006). Generative social science: Studies in agent-based computational modeling. Princeton University Press.

Fraser, N. (2017). The end of progressive neoliberalism. Dissent, 2(1).

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59-68).

Strandburg, K. J. (2013). Free fall: The online market’s consumer preference disconnect. U. Chi. Legal F., 95.