Regulating infoglut?

In the 1920s, many Americans were attracted to investing in the stock market for the first time. It was a time when fortunes were both made and lost, but made more often than lost, and so on average investors saw large returns. However, the growth in stock values, especially in the latter half of the decade, was driven in part by debt. The U.S. Federal Reserve chose to lower interest rates, making it easier to borrow money. When the interest rates on loans were lower than the rates of return on stocks, everybody from households to brokers began to take on debt to reinvest in the stock market (Brooks, 1999).

After the crash of 1929, which left the economy decimated, there was a reckoning, leading to the Securities Act of 1933 and the Securities Exchange Act of 1934. The latter created the Securities and Exchange Commission (SEC) and laid the groundwork for the more trusted financial institutions we have today.

Cohen (2016) writes about a more current economic issue. As the economy shifts from being centered on industrial capitalism to informational capitalism, the infrastructural affordances of modern computing and networking have invalidated the background logic of how many regulations are supposed to work. For example, anti-discrimination regulation is designed to prevent decisions from being made based on protected or sensitive attributes of individuals. However, those regulations made the most sense when personal information was relatively scarce. Today, when individual activity is highly instrumented by pervasive computing infrastructure, we suffer from infoglut — more information than is good for us, either as individuals or as a society. As a consequence, proxies for protected attributes are readily available to decision-makers and indeed are difficult to weed out of a machine learning system even when market actors fully intend to do so (see Datta et al., 2017). In other words, the structural conditions that enable infoglut erode rights that we took for granted in the absence of today’s networked computing systems.

In an ongoing project with Salome Viljoen, we are examining the parallels between the financial economy and the data economy. These economies are, of course, not fully distinct. However, they are distinguished in part by how they are regulated: the financial economy has over a century of matured regulations defining it and reducing systemic risks, such as those resulting from a debt-financed speculative bubble; the data economy has emerged only recently as a major source of profit, with perhaps unforeseen systemic risks.

We have an intuition that we would like to pin down more carefully as we work through these comparisons: that there is something similar about the speculative bubbles that led to the Great Depression and today’s infoglut. In a similar vein to prior work that uses regulatory analogy to motivate new thinking about data regulation (Hirsch, 2013; Froomkin, 2015) and professional codes (Stark and Hoffmann, 2019), we are interested in how financial regulation may serve as a precedent for regulation of the data economy.

However, we have reason to believe that the connections between finance and personal data are not merely metaphorical. Finance is an area with well-developed sectoral privacy laws that guarantee the confidentiality of personal data (Swire, 2003); financial institutions are also one of the many channels through which personal data originating from non-financial contexts is monetized. We do not have to get poetic to see how these assets are connected; they are related as a matter of fact.

What is more elusive, and at this point only a hypothesis, is that there is a valid sense in which the systemic risks of infoglut can be conceptually understood using tools similar to those used to understand financial risk. Here I maintain an ambition: that systemic risk due to infoglut may be understood using the tools of macroeconomics and hence internalized via technocratic regulatory mechanisms. This would be a departure from Cohen (2016), who gestures more favorably towards “uncertainty”-based regulation that does not attempt probabilistic expectation but rather involves tools such as threat modeling, as used in some cybersecurity practices.

References

Brooks, J. (1999). Once in Golconda: A true drama of Wall Street 1920-1938. John Wiley & Sons.

Cohen, J. E. (2016). The regulatory state in the information age. Theoretical Inquiries in Law, 17(2), 369-414.

Datta, A., Fredrikson, M., Ko, G., Mardziel, P., & Sen, S. (2017, October). Use privacy in data-driven systems: Theory and experiments with machine learnt programs. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1193-1210).

Froomkin, A. M. (2015). Regulating Mass Surveillance as Privacy Pollution: Learning from Environmental Impact Statements. U. Ill. L. Rev., 1713.

Hirsch, D. D. (2013). The glass house effect: Big Data, the new oil, and the power of analogy. Me. L. Rev., 66, 373.

Stark, L., & Hoffmann, A. L. (2019). Data is the new what? Popular metaphors & professional ethics in emerging data culture.

Swire, P. P. (2003). Efficient confidentiality for privacy, security, and confidential business information. Brookings-Wharton Papers on Financial Services, 2003(1), 273-310.

Surden, H. (2007). Structural rights in privacy. SMU L. Rev., 60, 1605.