Digifesto

Tag: data economics

Regulating infoglut?

In the 1920s, many people were attracted for the first time to investing in the stock market. It was a time when fortunes were made and lost, but made more often than lost, and so on average investors saw large returns. However, the growth in the value of stocks was driven in part, especially in the latter half of the decade, by debt. The U.S. Federal Reserve chose to lower interest rates, making it easier to borrow money. When the interest rates on loans were lower than the rates of return on stocks, everybody from households to brokers began to take on debt to reinvest in the stock market. (Brooks, 1999)
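The mechanics of that leveraged speculation can be sketched with a little arithmetic. With debt D borrowed at interest rate i and equity E invested in assets returning r, the return on equity is r + (r − i)·D/E: leverage amplifies gains so long as returns beat the borrowing rate, and amplifies losses just as sharply when they don't. The numbers below are hypothetical, chosen only to illustrate the asymmetry:

```python
def leveraged_return(r, i, debt_to_equity):
    """Return on equity when assets returning r are partly financed
    by debt at rate i, with a given debt-to-equity ratio."""
    return r + (r - i) * debt_to_equity

r, i = 0.20, 0.05  # assumed asset return and borrowing rate (hypothetical)

print(leveraged_return(r, i, 0))       # unlevered: 0.20
print(leveraged_return(r, i, 3))       # 3:1 leverage: 0.65
# If the assets instead fall 20%, the same leverage magnifies the loss:
print(leveraged_return(-0.20, i, 3))   # -0.95, a near-total wipeout
```

The same ratio that more than triples the upside turns a 20% market decline into a 95% loss of equity, which is roughly the dynamic that made the 1929 crash so destructive for margin investors.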

After the crash of ’29, which left the economy decimated, there was a reckoning, leading to the Securities Act of 1933 and the Securities Exchange Act of 1934. The latter established the Securities and Exchange Commission (SEC) and laid the groundwork for the more trusted financial institutions we have today.

Cohen (2016) writes about a more current economic issue. As the economy changes from being centered on industrial capitalism to informational capitalism, the infrastructural affordances of modern computing and networking have invalidated the background logic of how many regulations are supposed to work. For example, anti-discrimination regulation is designed to prevent decisions from being made based on protected or sensitive attributes of individuals. However, those regulations made most sense when personal information was relatively scarce. Today, when individual activity is highly instrumented by pervasive computing infrastructure, we suffer from infoglut — more information than is good for us, either as individuals or as a society. As a consequence, proxies of protected attributes are readily available for decision-makers and indeed are difficult to weed out of a machine learning system even when market actors fully intend to do so (see Datta et al., 2017). In other words, the structural conditions that enable infoglut erode rights that we took for granted in the absence of today’s network and computing systems.
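The proxy problem that Datta et al. (2017) study can be illustrated with a toy simulation (all names and numbers below are hypothetical): a decision rule that never sees a protected attribute can still produce starkly different outcomes across groups whenever some observed feature, here a zip code, is correlated with that attribute.

```python
import random

random.seed(0)

# Synthetic population: 'group' is a protected attribute the decision
# rule never sees; 'zip_code' is correlated with it and acts as a proxy.
population = []
for _ in range(10_000):
    group = random.random() < 0.5
    # Members of the protected group mostly live in zip codes 0-4.
    zip_code = random.randrange(0, 5) if group else random.randrange(3, 10)
    population.append((group, zip_code))

def approve(zip_code):
    """A 'blind' decision rule that only looks at zip code."""
    return zip_code >= 5

def approval_rate(people):
    return sum(approve(z) for _, z in people) / len(people)

in_group = [p for p in population if p[0]]
out_group = [p for p in population if not p[0]]

print(f"approval rate, protected group: {approval_rate(in_group):.2f}")
print(f"approval rate, others:          {approval_rate(out_group):.2f}")
```

Even though `group` never enters the decision, the approval rates diverge sharply, which is why simply deleting the protected column from a dataset does not satisfy anti-discrimination regulation in an infoglut environment.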

In an ongoing project with Salome Viljoen, we are examining the parallels between the financial economy and the data economy. These economies are, of course, not fully distinct. However, they are distinguished in part by how they are regulated: the financial economy has over a century of matured regulations defining it and reducing systemic risks such as those resulting from a debt-financed speculative bubble; the data economy has emerged only recently as a major source of profit with perhaps unforeseen systemic risks.

We have an intuition that we would like to pin down more carefully as we work through these comparisons: that there is something similar about the speculative bubbles that led to the Great Depression and today’s infoglut. In a similar vein to prior work that uses regulatory analogy to motivate new thinking about data regulation (Hirsch, 2013; Froomkin, 2015) and professional codes (Stark and Hoffmann, 2019), we are interested in how financial regulation may be a precedent for regulation of the data economy.

However, we have reason to believe that the connections between finance and personal data are not merely metaphorical. Indeed, finance is an area with well-developed sectoral privacy laws that guarantee the confidentiality of personal data (Swire, 2003); it is also the case that financial institutions are one of the many ways personal data originating from non-financial contexts is monetized. We do not have to get poetic to see how these assets are connected; they are related as a matter of fact.

What is more elusive, and at this point only a hypothesis, is that there is a valid sense in which the systemic risks of infoglut can be conceptually understood using tools similar to those used to understand financial risk. Here I maintain an ambition: that systemic risk due to infoglut may be understood using the tools of macroeconomics and hence internalized via technocratic regulatory mechanisms. This would be a departure from Cohen (2016), who gestures more favorably towards “uncertainty” based regulation that does not attempt probabilistic expectation but rather involves tools such as threat modeling, as used in some cybersecurity practices.

References

Brooks, J. (1999). Once in Golconda: A true drama of Wall Street 1920-1938. John Wiley & Sons.

Cohen, J. E. (2016). The regulatory state in the information age. Theoretical Inquiries in Law, 17(2), 369-414.

Datta, A., Fredrikson, M., Ko, G., Mardziel, P., & Sen, S. (2017, October). Use privacy in data-driven systems: Theory and experiments with machine learnt programs. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1193-1210).

Froomkin, A. M. (2015). Regulating Mass Surveillance as Privacy Pollution: Learning from Environmental Impact Statements. U. Ill. L. Rev., 1713.

Hirsch, D. D. (2013). The glass house effect: Big Data, the new oil, and the power of analogy. Me. L. Rev., 66, 373.

Stark, L., & Hoffmann, A. L. (2019). Data is the new what? Popular metaphors & professional ethics in emerging data culture.

Swire, P. P. (2003). Efficient confidentiality for privacy, security, and confidential business information. Brookings-Wharton Papers on Financial Services, 2003(1), 273-310.

Surden, H. (2007). Structural rights in privacy. SMU L. Rev., 60, 1605.

For a more ethical Silicon Valley, we need a wiser economics of data

Kara Swisher’s NYT op-ed about the dubious ethics of Silicon Valley and Nitasha Tiku’s WIRED article reviewing books with alternative (and perhaps more cynical than otherwise stated) stories about the rise of Silicon Valley have generated discussion and buzz among the tech commentariat.

One point of debate is whether the focus should be on “ethics” or on something more substantively defined, such as human rights. Another is whether the emphasis should be on “ethics” or on something more substantively enforced, such as laws that impose penalties of up to 4% of worldwide annual turnover, referring of course to the GDPR.

While I’m sympathetic to the European approach (laws enforcing human rights with real teeth), I think there is something naive about it. We have not yet seen whether it’s ever really possible to comply with the GDPR; it could wind up being a kind of heavy tax on Big Tech companies operating in the EU, but one that doesn’t truly change how people’s data are used. In any case, the broad principles of European privacy are based on individual human dignity, and so they do not take into account the ways that corporations are social structures, i.e. sociotechnical organizations that transcend individual people. The European regulations address the problem of individual privacy while leaving mystified the question of why the current corporate organization of the world’s personal information is what it is. This sets up the fight over ‘technology ethics’ to be a political conflict between different kinds of actors whose positions are defined as much by their social habitus as by their intellectual reasons.

My own (unpopular!) view is that the solution to our problems of technology ethics is going to have to rely on a better adapted technology economics. We often forget today that economics was originally a branch of moral philosophy. Adam Smith wrote The Theory of Moral Sentiments (1759) before An Inquiry into the Nature and Causes of the Wealth of Nations (1776). Since then the main purpose of economics has been to intellectually grasp the major changes to society due to production, trade, markets, and so on in order to better steer policy and business strategy towards more fruitful equilibria. The discipline has a bad reputation among many “critical” scholars due to its role in supporting neoliberal ideology and policies, but it must be noted that this ideology and policy work is not entirely cynical; it was a successful centrist hegemony for some time. Now that it is under threat, partly due to the successes of the big tech companies that benefited under its regime, it’s worth considering what new lessons we have to learn to steer the economy in an improved direction.

The difference between an economic approach to the problems of the tech economy and either an ‘ethics’ or a ‘law’ based approach is that it inherently acknowledges that there are a wide variety of strategic actors co-creating social outcomes. Individual “ethics” will not be able to settle the outcomes of the economy because the outcomes depend on collective and uncoordinated actions. A fundamentally decent person may still do harm to others due to their own bounded rationality; “the road to hell is paved with good intentions”. Meanwhile, regulatory law is not the same as command; it is at best a way of setting the rules of a game that will be played, faithfully or not, by many others. Putting regulations in place without a good sense of how the game will play out differently because of them is just as irresponsible as implementing a sweeping business practice without thinking through the results, if not more so, because the relationship between the state and citizens is coercive, unlike the voluntary relationship between businesses and customers.

Perhaps the biggest obstacle to shifting the debate about technology ethics to one about technology economics is that it requires a change in register. It drains the conversation of the pathos which is so instrumental in surfacing it as an important political topic. Sound analysis often ruins parties like this. Nevertheless, it must be done if we are to progress towards a more just solution to the crises technology gives us today.

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics” ← My dissertation

In the last two weeks, I’ve completed, presented, and filed my dissertation, and commenced as a doctor of philosophy. In a word, I’ve PhinisheD!

The title of my dissertation is attention-grabbing, inviting, provocative, and impressive:

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics”

If you’re reading this, you are probably wondering, “How can I drop everything and start reading that hot dissertation right now?”

Look no further: here is a link to the PDF.

You can also check out this slide deck from my “defense”. It covers the highlights.

I’ll be blogging about this material as I break it out into more digestible forms over time. For now, I’m obviously honored by any interest anybody takes in this work and happy to answer questions about it.