
Social justice, rationalism, AI ethics, and libertarianism

There has been a lot of drama on the Internet about AI ethics, and that drama is in the mainstream news. A lot of this drama is about the shakeup in Google's AI Ethics team. I've already written a little about this. I've been following the story since, and I would adjust my emphasis slightly: I now think the situation is sadder than I first thought. But my take (and I'm sticking to it) is that what's missing from the dominant narrative in that story, namely that it is about race and gender representation in tech (a view quite well articulated in academic argot by Tao and Varshney (2021), by the way), is an analysis of the corporate firm as an organization, of what it means to be an employee of a firm, and of the relationship between firms and artificial intelligence.

These questions about the corporate firm are not topics that get a lot of traction on the Internet. The topics that get a lot of traction on the Internet are race and gender. I gather that this is something anybody with a blog (let alone anybody publishing news) discovers. I write this blog for an imagined audience of about ten people. It is mainly research notes to myself. But for some reason, this post about racism got 125 views this past week. For me, that's a lot of views. Who are those people?

The other dramatic thing going on in my corner of the Internet this past week is also about race and gender, and also maybe AI ethics. It is the reaction to Cade Metz's NYT article "Silicon Valley's Safe Space", which is about the blog Slate Star Codex (SSC), rationalist subculture, libertarianism and … racism and sexism.

I learned about this kerfuffle in a backwards way: through Glen Weyl's engagement with SSC about technocracy, which I suppose he was bumping in order to ride the shockwave created by the NYT article. In 2019, Weyl posted a critique of "technocracy" which was also a rather pointed attack on the rationalism community, in part because of its connection to "neoreaction". SSC responded rather adroitly, in my opinion; Weyl responded in turn. It's an interesting exchange about political theory.

I follow Weyl on Twitter because I disagree with him rather strongly on niche topics in data regulation. As a researcher, I’m all about these niche topics in data regulation; I think they are where the rubber really hits the road on AI ethics. Weyl has published the view that consumer internet data should be treated as a form of labor. In my corner of technology policy research, which is quite small in terms of its Internet footprint, we think this is nonsense; it is what Salome Viljoen calls a propertarian view of data. The propertarian view of data is, for many important reasons, wrong. “Markets are never going to fix the AI/data regulation problem. Go read some Katharina Pistor, for christ’s sake!” That’s the main thing I’ve been waiting to say to Weyl, who has a coveted Microsoft Research position, a successful marketing strategy for his intellectual brand, perfect institutional pedigree, and so on.

Which I mean to his credit: he's a successful public intellectual. This is why I was surprised to see him tweeting about rationalist subculture, which is not, to me, a legitimate intellectual topic sanctioned as part of the AI/tech policy/whatever research track. My experiences with rationalism have all been quite personal and para-academic. It is therefore something of a context collapse, the social-media-induced bridging of social spheres, for me personally (Marwick and Boyd, 2011; Davis and Jurgenson, 2014).


Context collapse is what it's all about, isn't it? One of the controversies around the NYT piece about SSC was that the NYT reporter, Metz, was going to reveal the pseudonymous author of SSC, "Scott Alexander", as the real person Scott Siskind. This was a "doxxing", though the connection was easy to make for anybody looking for it. Nevertheless, Siskind initially had a strong personal attachment to his own anonymity as a writer. Siskind is, professionally, a psychiatrist, a profession with very strong norms around confidentiality and its therapeutic importance. Metz's article is about this. It is also about race and gender — in particular the ways in which there is a therapeutic need for a space in which to discuss race and gender, as well as other topics, seriously, but also freely, without the major social and professional consequences that have come to be associated with doing so.

Smart people are writing about this, and despite being a "privacy scholar", I'm not sure I can say much that is smarter. Will Wilkinson's piece is especially illuminating. His is a defense of journalism and a condemnation of what, I've now read, was an angry Internet mob that went after Metz in response to this SSC doxxing. It is also an unpacking of Siskind's motivations based on his writing, a diagnosis of sorts. Spiers has a related analysis. A theme of these arguments against SSC and the rationalists is, "You weren't acting so rationally this time, were you?"

I get it. The rationalists, by presenting themselves at times as smarter-than-thou, are asking for this treatment. Certainly in elite intellectual circles, the idea that participation in a web forum should give you advanced powers of reason, as close as we will ever get to magic, is pretty laughable. What I think I can say from personal experience, though, is that elite intellectuals seriously misunderstand popular rationalism if they think that it's about them. Rather, popular rationalism is a social movement for people with non-elite backgrounds and problems. Their "smarter-than-thou" was originally really directed at other non-elite cultural forms, such as Christianity (!). I think this is widely missed.

I say this with some anecdata. I lived in Berkeley for a while in graduate school and was close with some people who were bona fide rationalists. I had an undergraduate background in cognitive science from an Ivy League university and was familiar with the heuristics and biases program and Bayesian reasoning. I had a professional background working in software. I should have fit in, right? So I volunteered twice at the local workshop put on by the Center for Applied Rationality to see what it was all about.

I noticed, as one does, that it was mostly white men at these workshops, and when asked for feedback I pointed this out. Eager to make use of the sociological skills I was learning in grad school, I pointed out that if they did not bring in more diversity early on, then because of homophily effects they might never reach a diverse audience.

At the time, the leadership of CFAR told me something quite interesting. They had looked at their organizational goals and capacities and decided that where they could make the most impact was on teaching the skills of rational thought to smart people from, say, the rural midwestern U.S.A. who would otherwise not get exposure to this kind of thinking, or to what I would call a community of practice around it. Many of these people (much like Elizabeth Spiers, according to her piece) come from conservative and cloistered Christian backgrounds. Yudkowsky's Harry Potter and the Methods of Rationality is their first exposure to Bayesian reasoning. They are often the best math students in their homogeneous home town, and finding their way into an engineering job in California is a big deal, as is finding a community that fills an analogous role to organized religion but does not seem so intellectually backwards. I don't think it's accidental that Julia Galef, who co-founded CFAR, started out in intellectual atheist circles before becoming a leader in rationalism. Providing an alternative culture to Christianity is largely what popular rationalism is about.

From this perspective, it is easier to see why Siskind has been able to cultivate a following by discussing cultural issues from a centrist and "altruistic" perspective. There's a population in the U.S. that grew up in conservative Christian settings, now makes a living in a booming technology sector whose intellectual principles are at odds with those of their upbringing, is trying to "do the right thing" and, being detached from political institutions or power, turns to the question of philanthropy, codified into Effective Altruism. This population is largely composed of white guys who may truly be upwardly mobile because they are, relative to where they came from, good at math. The world they live in, which revolves around AI, is nothing like the one they grew up in. These same people are regularly confronted by a different ideology, a form of left-wing progressivism, which denies their merit, resents their success, and considers them a problem, responsible for the very AI harms that they themselves are committed to solving. If I were one of them, I, too, would want to be part of a therapeutic community where I could speak freely about what was going on.


This is several degrees removed from libertarian politics, which I now see as the thread connecting Weyl to all of this. Wilkinson makes a compelling case that contemporary rationalism originated in Tyler Cowen's libertarian economist blogging and the intellectual environment at George Mason University. That milieu spun out Robin Hanson's Overcoming Bias blog, which spun out Yudkowsky's LessWrong forum, which is where popular rationalism incubated. Weyl is an east coast libertarian public intellectual, and it makes sense that he would engage other libertarian public intellectuals. I don't think he's going to get very far picking fights on the Internet with Yudkowsky, but I could be wrong.

Weyl's engagement with the rationalist community does highlight for me two other elements missing from the story as told so far, at least in my readings of it. I've been telling a story partly about geography and migration. I think there's also an element of shifting centers of cultural dominance. Nothing made me realize that I am a parochial New Yorker like living in California for five years. Rationalism remains weird to me because it is, today, a connection between Oxford utilitarian philosophers, the Silicon Valley nouveau riche, and to some extent Washington, D.C.-based libertarians. That is a wave of culture bypassing the historical intellectual centers of the northeastern U.S. Ivy League universities, which for much of America's history dominated U.S. politics.

To some extent, this speaks to the significance of the NYT story as well. It was not the first popular article about rationalists; Metz mentions the TechCrunch article about neoreactionaries (I'll get to that) but not the Sam Frank article in Harper's, "Come With Us If You Want To Live" (2015), which is more ethnographic in its approach. I think it's a better article. But the NYT has a different audience and a different standard for relevance. The NYT is not an intellectual literary magazine. It is the voice of New York City, once the Center of the Universe. New York City's perspective is particularly weighty, relevant, objective, and powerful because of the city's historic role as a global financial and marketing center. When the NYT notices something, for a great many people, it becomes real. The NYT is at the center of a large public sphere with a specific geographic locus, in a way that some blogs and web forums are not. So whether it was justified or not, Metz's doxxing of Siskind was a significant shift in what information was public, and to whom. Part of its significance is that it was an assertion of cultural power by an institution tied to old money in New York City over a beloved institution of new money in Silicon Valley. In Bourdieusian terms, the article shifted around social and cultural capital in a big way. Siskind was forced to make a trade by an institution more powerful than he is. There is a violence to that.


This force of institutional power is perhaps the other missing element in this story. Wilkinson and Frank’s pieces remind me: this is about libertarianism. Weyl’s piece against technocracy is also about libertarianism, or maybe just liberalism. Weyl is arguing that rationalists, as he understands them, are libertarians but not liberals. A “technocrat” is somebody who wants to replace democratic governance mechanisms, which depend on pluralistic discourse, with an expert-designed mechanism. Isn’t this what Silicon Valley does? Build Facebook and act like it’s a nation? Weyl, in my reading, wants an engaged pluralistic public sphere. He is, he reveals later, really arguing with himself, reforming his own views. He was an economist, coming up with mathematical mechanisms to improve social systems through “radical exchange”; now he is a public intellectual who has taken a cultural turn and called AI an “ideology”.

On the other end of the spectrum, there are people who actually would, if they could, build an artificial island and rule it via computers like little lords. I guess Peter Thiel, who plays a somewhat arch-villain role in this story, is like this. Thiel does not like elite higher education and the way it reproduces the ideological conditions for a pluralistic democracy. This is presumably why he backs Curtis Yarvin, the "neoreactionary" writer and "Dark Enlightenment" thinker. Metz goes into detail about this, and traces a connection between Yarvin and SSC; there are leaked emails about it. To some people, this is the real story. Why? Because neoreaction is racist and sexist. This, not political theory, I promise you, is what is driving the traffic. It's amazing Metz didn't use the phrase "red pill" or "alt-right", because that's definitely the narrative being extended here. With Trump out of office and Amazon shutting down Parler's cloud computing, we don't need to worry about the QAnon nutcases (who were, if I'm following correctly, a creation of the Mercers), but what about the right-wing elements in the globally powerful tech sector? Because… AI ethics! There's no escape.

Slate Star Codex was a window into the Silicon Valley psyche. There are good reasons to try and understand that psyche, because the decisions made by tech companies and the people who run them eventually affect millions.

And Silicon Valley, a community of iconoclasts, is struggling to decide what’s off limits for all of us.

At Twitter and Facebook, leaders were reluctant to remove words from their platforms — even when those words were untrue or could lead to violence. At some A.I. labs, they release products — including facial recognition systems, digital assistants and chatbots — even while knowing they can be biased against women and people of color, and sometimes spew hateful speech.

Why hold anything back? That was often the answer a Rationalist would arrive at.

Metz's article has come under a lot of criticism for drawing sweeping thematic links between SSC, neoreaction, and Silicon Valley with very little evidence. Noah Smith's analysis shows how weak this connection actually is. Silicon Valley is, by the numbers, mostly left-wing, and mostly not reading rationalist blogs. Thiel, and maybe Musk, are noteworthy exceptions, not the general trend. What does any of this have to do with, say, Zuckerberg? Not much.

The trouble is that if the people in Silicon Valley are left-wing, then there’s nobody to blame for racist and sexist AI. Where could racism and sexism in AI possibly come from, if not some collective “psyche” of the technologists? Better, more progressive leaders in Silicon Valley, the logic goes, would lead to better social outcomes. Pluralistic liberalism and proper demographic representation would, if not for the likes of bad apples like Thiel, steer the AI Labs and the big tech companies that use their products towards equitability and justice.

I want to be clear: I think that affirmative action for under-represented minorities (URMs) in the tech sector is a wonderful thing, and that improving corporate practices around their mentorship, etc. is a cause worth fighting for. I'm not knocking any of that. But I think the idea that this alone will solve the problems of "AI ethics" is a liberal or libertarian fantasy. This is because assuming that the actions of a corporation will reflect the politics of its employees is a form of ecological fallacy. Corporations do not work for their employees; they work, legally and out of fiduciary duty, for their shareholders. And the AI systems operated at the grand social scales that we are talking about are not controlled by any one person; they are created and operated corporately.

In my view, what Weyl (who I probably agree with more than I don't), the earlier libertarian bloggers like Hanson, the AI X-risk folks like Yudkowsky, and the popular rationalist movement all get wrong is the way institutional power necessarily exceeds that of individuals, in part because of and through "artificial intelligence", but also through older institutions that distribute economic and social capital. The "public sphere" is not a flat or radical "marketplace of ideas"; it is an ecology of institutions like the New York Times, playing on ideological receptiveness grounded in religious and economic habitus.


My favorite libertarian theorist is Jeffrey Friedman, editor of the Critical Review. Friedman is a dedicated intellectual and educator, who for years has been a generous mentor and facilitator of libertarian political thought. In an early intellectual encounter that was very formative for me, he invited me to write a book review of Philip Tetlock's Expert Political Judgment for his journal. Published in 2007, it was my first academic publication. The writing is embarrassing and I'm glad it is behind a paywall. In the article, I argue against Friedman and for technocracy based on the use of artificial intelligence. I have been in friendly disagreement with Friedman on this point ever since.

Friedman’s book, Power without Knowledge: A Critique of Technocracy (2019), is his humble magnum opus. When I told him I wanted to write about the book here as part of a response to the discussion between Weyl and SSC, he told me he didn’t want Internet libertarians to know about his book; it was for academic libertarians only. I assured him that this blog, with its ten readers, is too obscure to impact the privacy he desires for this publication.

The argument in Power without Knowledge is that technocracy is infeasible because of the unpredictability of human beings, who are free and unpredictable because of their vastly complex exposure to and thinking about ideas. Free thought begets free action, and any would-be technocrat faces an impossible task in mapping the landscape of these freedoms. A controlling government, such as those documented in Scott's Seeing Like a State, will almost always make things worse. Friedman advocates instead for an "Exitocracy", based on Hirschman's idea of Exit, in which citizens can move freely between experimental mini-states and learn, via societal evolution, where they would rather be. The attractiveness of this model is that it depends on minimal assumptions about the rationality of agents but still achieves satisficing results. The problem, for Friedman, is not the excess of intelligence or controlling power, but the absence of it.
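To make the sorting mechanism concrete, here is a minimal agent-based sketch in the spirit of the Tiebout-sorting models studied by Kollman, Miller, and Page (1997). It is my own toy illustration, not Friedman's model: the preference distributions, the misfit measure, and all parameters are assumptions chosen for brevity.

```python
import random

def exitocracy_sim(n_agents=200, n_communities=5, n_issues=3, rounds=30, seed=0):
    """Toy Tiebout-style 'exit' dynamic: agents with fixed policy preferences
    move to whichever community's platform currently suits them best, and each
    community then adjusts its platform toward its residents' mean preferences."""
    rng = random.Random(seed)
    prefs = [[rng.uniform(-1, 1) for _ in range(n_issues)] for _ in range(n_agents)]
    platforms = [[rng.uniform(-1, 1) for _ in range(n_issues)] for _ in range(n_communities)]

    def misfit(agent, community):
        # City-block distance between an agent's preferences and a platform.
        return sum(abs(p - q) for p, q in zip(prefs[agent], platforms[community]))

    for _ in range(rounds):
        # Exit: each agent relocates to the least-bad community for them.
        location = [min(range(n_communities), key=lambda c: misfit(a, c))
                    for a in range(n_agents)]
        # Local adaptation: platforms drift toward their residents' mean preferences.
        for c in range(n_communities):
            residents = [a for a in range(n_agents) if location[a] == c]
            if residents:
                platforms[c] = [sum(prefs[a][i] for a in residents) / len(residents)
                                for i in range(n_issues)]

    # Average remaining misfit after repeated sorting.
    return sum(misfit(a, location[a]) for a in range(n_agents)) / n_agents

print("mean misfit after sorting:", exitocracy_sim())
```

The point of the sketch is only the mechanism: agents sort themselves by exit and communities differentiate in response, without anyone needing a global model of everyone's preferences. It says nothing, of course, about the feasibility of real exit, which is the harder problem.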

At this time, I continue to disagree with Friedman. We are, by and large, not free to think whatever we choose. Academics are particularly free in this respect, especially if they do not value prestige or fame as much as their colleagues do. But most people are products of their background, their habits, their employers, their social circles, or their socially structured lived experience. Institutions can predict and control people, largely by offering economic incentives. We are not so free. Or have I gotten this wrong?

References

Davis, J. L., & Jurgenson, N. (2014). Context collapse: Theorizing context collusions and collisions. Information, Communication & Society, 17(4), 476-485.

Friedman, J. (2019). Power without knowledge: a critique of technocracy. Oxford University Press.

Kollman, K., Miller, J. H., & Page, S. E. (1997). Political institutions and sorting in a Tiebout model. The American Economic Review, 977-992.

Marwick, A. E., & Boyd, D. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114-133.

Pistor, K. (2020). Rule by data: The end of markets? Law & Contemporary Problems, 83, 101.

Tao, Y., & Varshney, K. R. (2021). Insiders and Outsiders in Research on Machine Learning and Society. arXiv preprint arXiv:2102.02279.

Crypto, macro, and information law

Dogecoin is in the news this week because of Elon Musk's pump and dump of the coin, the latest in a series of notable asset bubbles fueled in part by Internet-informed, perhaps frivolous, day traders. The phenomenon reminds me of this curious essay about viral art. It concludes:

The doge meme is a Goldsmithian piece, passing ephemerally through a network of peers. In a LaBeoufian moment, Jackson Palmer invented Dogecoin, capturing the meme and using it to leverage networks of power. Now it is art post-LaBeouf in its greatest form: authorless art as economic power, transmitted over networks. As the synthesized culmination of the traditions of economics and Western literature, DogeCoin is one of the greatest achievements in the history of art, if not the greatest.

This paragraph is perhaps best understood, if at all, as an abstruse joke. The essay is attributed to "Niklos Szabo", a name easily conflated with that of Nick Szabo, one of the deeper thinkers behind cryptocurrency more generally; it was most likely not written by him. The real Szabo has written much more seriously and presciently about culture and the economy. As an aside, I believe Szabo's writings about book consciousness prefigure Hildebrandt's work on the role of the printing press as a medium contributing to the particular character of text-driven law. However, the enduring success of cryptocurrencies validates Szabo's economics more than his cultural theory. His 2002 paper "Shelling out: the origins of money" is a compelling history of currency. Notably, it is not a work of formal economic theory. Rather, it draws on historical and anthropological examples to get at the fundamentals of the role currency plays in society. This study leads to the conclusion that currency must be costly to create and transferable with relatively low transaction costs. Bitcoin, for example, was designed to have these qualities.
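As a minimal illustration of how Bitcoin bakes in the "costly to create" property, here is a toy proof-of-work loop. It is a deliberate simplification: real Bitcoin mining uses double SHA-256 over a structured block header and a far harder numeric target, and the function and parameter names here are mine.

```python
import hashlib

def mine(block_data, difficulty):
    """Toy proof of work: find a nonce such that SHA-256(block_data + nonce)
    begins with `difficulty` zero hex digits. Expected work grows by a factor
    of 16 for each additional zero, which is what makes new coins costly."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Cheap to verify, expensive to produce: the asymmetry Szabo's analysis calls for.
nonce, digest = mine("a batch of transactions", difficulty=4)
print(nonce, digest)
```

Transferability at low transaction cost is handled elsewhere in the protocol (signatures and the shared ledger); the proof-of-work piece is only about making the token expensive to mint.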

What Szabo does not discuss in “Shelling out” is the other thing Bitcoin is most known for, which is speculative asset bubble pricing. Cryptocurrency has lurched into the mainstream in fits of speculative enthusiasm followed by crashes and breakdowns. It is risky.

Salome Viljoen and I are writing about financial regulations as part of our "Data Market Discipline" project. One takeaway from this work is that the major financial regulations in the United States were responses to devastating financial crises, such as the Great Depression and the 2008 financial crisis, each triggered by the collapse of an asset bubble. So while currency is an old invention and the invention of new currencies is interesting, the project of maintaining a stable financial system is a more recent legal project, and an unfinished one at that. It is much more unfinished for cryptocurrencies, which are not controlled by a central banking system, than for national fiat currencies, for which interest rates, for example, can be used as a calibrating tool.

These are not idle theoretical points. Rather, they are at the heart of questions surrounding the recovery of the economy from COVID-related setbacks. Money from stimulus checks going to people who have no reason to increase their consumption (cf. Carroll et al., 2020) is perhaps responsible for the influx of retail investment into equities markets and, in particular, Reddit-coordinated asset bubbles such as the ones we're seeing recently with Gamestop and Dogecoin. The next stimulus package being prepared by the Biden administration has set off alarms in parts of the economics establishment that it will spur inflation, while Janet Yellen has argued that this outcome can be prevented using standard monetary policy tools such as raising interest rates. Arguably, the recent rise in the price of Bitcoin is due to this perceived threat to the macroeconomic stability of the dollar-denominated financial system.

I don't mean any of this conclusively. Rather, I'm writing this to register my growing realization that the myriad Internet effects on culture, economy, and the law are often driven much more by movements in internationally coupled financial systems than "technology policy" specialists or "public interest technologists" are inclined to admit. We are inclined, because of our training in something else — whether it be computer science, environmental law, political philosophy, or whatever — to seek out metaphors from our own domain of expertise. But many of the most trenchant analyses of why the current technological landscape seems a bit off come down to failures of the price mechanism in the digital economy. I'm thinking of Kapczynski's (2011) critique of the price mechanism in relation to intellectual property, and Strandburg's (2013) analysis of the failure of pricing in online services. We have, on the one hand, the commitment of an increasingly misconceptualized "Silicon Valley" to a "free market" and, on the other hand, few of the conditions under which a "free market" is classically considered to be efficient. The data economy does not meet even classically liberal (let alone New, more egalitarian, Liberal) standards of justice. And liberal legal theory is not equipped, Jake Goldenfein and I have argued, to grapple with this reality.

What progress can be made?

Maybe there is something somebody with enormous wealth or institutional power could do to change the situation. I'm not one of those people. However, there is some reason to believe that at the root of these problems is a conceptual, intellectual failure to understand what is going on.

In some recent work with Kathy Strandburg, we are examining the conceptual roots of the highly influential Law and Economics (L&E) branch of legal scholarship. This field absorbs the techniques of neoclassical economics and develops them into actionable policy proposals and legal rules of thumb. It has come under political criticism from the recently formed Law and Political Economy (LPE) movement. Interestingly, it has also been critiqued from a "Law and Macroeconomics" perspective, which argues that L&E should really be called "law and microeconomics" because of its inability to internalize macroeconomic concepts such as the business cycle or changes in monetary policy.

Among the assumptions at the roots of L&E are notions of optimality and efficiency that make somewhat naive assumptions about the nature of price and money. For example, Kaldor-Hicks efficiency, a relaxation of Pareto efficiency used in welfare economics as applied to L&E, allows for transactions that alter the situations of agents so long as the agent who gains could theoretically compensate the other for their losses (see Feldman, 1998). This concept is used to deem outcomes socially optimal, resolving the neoclassical problem of the incomparability of individual utilities through an implicit pricing mechanism. This leads L&E to favor "wealth maximizing" policies.
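Schematically, the compensation test can be written as follows, where v_i(x) stands for agent i's monetary valuation (willingness to pay) of state x; the notation is mine, not Feldman's.

```latex
% Kaldor-Hicks compensation test, stated schematically.
% v_i(x): agent i's monetary valuation (willingness to pay) of state x.
\[
  x \to y \ \text{is a Kaldor-Hicks improvement}
  \quad\Longleftrightarrow\quad
  \sum_{i} \bigl( v_i(y) - v_i(x) \bigr) \;>\; 0,
\]
% i.e. the winners' monetary gains exceed the losers' monetary losses, so the
% winners could in principle compensate the losers, whether or not they ever do.
```

The implicit pricing is doing all the work here: every agent's welfare change is assumed to be expressible in the same monetary unit, which is exactly why the criterion cashes out as wealth maximization.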

However, grounding legal theory in the idea of a robust price mechanism capable of subsuming all differences in individual preferences is quite naive in a digital economy that is always already at the intersection of many different currencies (including cryptocurrencies) and of variable, politically vulnerable systems of credit and debt, and that is characterized by markets lacking the legal scaffolding needed to drive them towards "true" prices. If Mirowski and Nik-Khah (2017) are correct and economists have abandoned earlier notions of "truth" in favor of faith in the market's price as a "truth" derived from streams of information, something is indeed amiss. Data is not a commodity, and regulations that treat data flows as commodity exchanges are not well matched to reality. In the Hayekian model, price is the signal that combines available information. In the data economy, the complex topology of real data flows belies such simplistic views of "the market".

What tech law needs is a new economic model, one that, just as general relativity in physics showed how classical mechanics was a special case of more complex universal laws, reveals how data, intellectual property, and price are connected in ways that go beyond the classical liberal imagination.

References

Benthall, S., & Viljoen, S. (2021). Data market discipline: From financial regulation to data governance. J. Int'l & Comparative Law.

Carroll, C. D., Crawley, E., Slacalek, J., & White, M. N. (2020). Modeling the consumption response to the CARES Act (No. w27876). National Bureau of Economic Research.

Feldman, A. M. (1998). Kaldor-Hicks compensation. The New Palgrave Dictionary of Economics and the Law, 2, 417-421.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Kapczynski, A. (2011). The cost of price: Why and how to get beyond intellectual property internalism. UCLA Law Review, 59, 970.

Mirowski, P., & Nik-Khah, E. (2017). The knowledge we have lost in information: the history of information in modern economics. Oxford University Press.

Shekman, D. (2021, February 5). Gamestop and the surrounding legal questions. Medium. medium.com/@shekman27/gamestop-and-the-surrounding-legal-questions-fc0d1dc142d7

Strandburg, K. J. (2013). Free fall: The online market's consumer preference disconnect. University of Chicago Legal Forum, 95.

Szabo, N. (2002). Shelling out: the origins of money. Satoshi Nakamoto Institute.

Szabo, Niklos. (2014, September 22). Art post-LaBeouf. Medium. medium.com/@niklosszabo/art-post-labeouf-b7de5732020c

Hildebrandt (2013) on double contingency in Parsons and Luhmann

I’ve tried to piece together double contingency before, and am finding myself re-encountering these ideas in several projects. I just now happened on this very succinct account of double contingency in Hildebrandt (2013), which I wanted to reproduce here.

Parsons was less interested in personal identity than in the construction of social institutions as proxies for the coordination of human interaction. His point is that the uncertainty that is inherent in the double contingency requires the emergence of social structures that develop a certain autonomy and provide a more stable object for the coordination of human interaction. The circularity that comes with the double contingency is thus resolved in the consensus that is consolidated in sociological institutions that are typical for a particular culture. Consensus on the norms and values that regulate human interaction is Parsons’s solution to the problem of double contingency, and thus explains the existence of social institutions. As could be expected, Parsons’s focus on consensus and his urge to resolve the contingency have been criticized for its ‘past-oriented, objectivist and reified concept of culture’, and for its implicitly negative understanding of the double contingency.

This paragraph says a lot: about "the problem" posed by "the double contingency", about the possibility of a solution through consensus around norms and values, and about the rejection of Parsons. It is striking that in the first pages of this article, Hildebrandt begins by challenging "contextual integrity" as a paradigm for privacy (a nod, if not a direct reference, to Nissenbaum (2009)), astutely pointing out that this paradigm makes privacy a matter of delinking data so that it is not reused across contexts. Nissenbaum's contextual integrity theory depends rather critically on consensus around norms and values; the appropriateness of information norms is a feature of sociological institutions accountable ultimately to shared values. The aim of Parsons, and to some extent also Nissenbaum, is to remove the contingency by establishing reliable institutions.

The criticism of Parsons as being 'past-oriented, objectivist and reified' is striking. It opens the question of whether Parsons's concept of culture is too past-oriented, or whether some cultures, more than others, are past-oriented, rigid, or reified. Consider a continuum of sociological institutions ranging from the rigid, formal, bureaucratized, and traditional to the flexible, casual, improvisational, and innovative. One extreme of this continuum is better conceptualized as "past-oriented" than the other. Furthermore, when cultural evolution becomes embedded in infrastructure, that culture is no doubt more "reified" not just conceptually, but actually, via its transformation into durable and material form. That Hildebrandt offers this criticism of Parsons perhaps foreshadows her later work on the problems of smart information communication infrastructure (Hildebrandt, 2015). Smart infrastructure poses, to those with this orientation, a problem in that it reduces double contingency by being, in fact, a reification of sociological institutions.

“Reification” is a pejorative word in sociology. It refers to a kind of ideological category error with unfortunate social consequences. The more positive view of this kind of durable, even material, culture would be found in Habermas, who would locate legitimacy precisely in the process of consensus. For Habermas, the ideals of legitimate consensus through discursively rational communicative actions finds its imperfect realization in the sociological institution of deliberative democratic law. This is the intellectual inheritor of Kant’s ideal of “perpetual peace”. It is, like the European Union, supposed to be a good thing.

So what about Brexit, so to speak?

Double contingency returns with a vengeance in Luhmann, who famously "debated" Habermas (a truer follower of Parsons), and probably won that debate. Hildebrandt (2013) discusses:

A more productive understanding of double contingency may come from Luhmann (1995), who takes a broader view of contingency; instead of merely defining it in terms of dependency he points to the different options open to subjects who can never be sure how their actions will be interpreted. The uncertainty presents not merely a problem but also a chance; not merely a constraint but also a measure of freedom. The freedom to act meaningfully is constraint [sic] by earlier interactions, because they indicate how one’s actions have been interpreted in the past and thus may be interpreted in the future. Earlier interactions weave into Luhmann’s (1995) emergent social systems, gaining a measure of autonomy — or resistance — with regard to individual participants. Ultimately, however, social systems are still rooted in double contingency of face-to-face communication. The constraints presented by earlier interactions and their uptake in a social system can be rejected and renegotiated in the process of anticipation. By figuring out how one’s actions are mapped by the other, or by social systems in which one participates, room is created to falsify expectations and to disrupt anticipations. This will not necessarily breed anomy, chaos or anarchy, but may instead provide spaces for contestation, self-definition in defiance of labels provided by the expectations of others, and the beginnings of novel or transformed social institutions. As such, the uncertainty inherent in the double contingency defines human autonomy and human identity as relational and even ephemeral, always requiring vigilance and creative invention in the face of unexpected or unreasonably constraining expectations.

Whereas Nissenbaum's theory of privacy is "admittedly conservative", Hildebrandt's is grounded in a defense of freedom, invention, and transformation. If Nissenbaum and Hildebrandt were more inclined to contest each other directly, this might be privacy scholarship's equivalent of the Habermas/Luhmann debate. However, this is unlikely to occur because the two scholars operate in different legal systems, which reduces the stakes of the debate.

We must assume that Hildebrandt, in 2013, would have approved of Brexit, the ultimate defiance of labels and expectations against a Habermasian bureaucratic consensus. Perhaps she also, as would be consistent with this view, has misgivings about the extraterritorial enforcement of the GDPR. Or maybe she would prefer a global bureaucratic consensus that agreed with Luhmann; but this is a contradiction. This psychologistic speculation is no doubt unproductive.

What is more productive is the pursuit of a synthesis between these poles. As a liberal society, we would like our allocation of autonomy; we often find ourselves in tension with the bureaucratic systems that, according to rough consensus and running code, are designed to deliver to us our measure of autonomy. Those who overstep their allocation of autonomy, such as those who participated in the most recent Capitol insurrection, are put in prison. Freedom coexists with law and even order in sometimes uncomfortable ways. There are contests; they are often ugly at the time, however much they are glorified retrospectively by their winners as a form of past-oriented validation of the status quo.

References

Hildebrandt, M. (2013). Profile transparency by design?: Re-enabling double contingency. Privacy, due process and the computational turn: The philosophy of law meets the philosophy of technology, 221-46.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Review: Software Development and Reality Construction

I've discovered a wonderful book, Floyd et al.'s "Software Development and Reality Construction" (1992). One of the authors has made a PDF available on-line. It represents a strand of innovative thought in system design that I believe has many of the benefits of what has become "critical HCI" in the U.S. without many of its pitfalls. It is a playful compilation with many interesting intellectual roots.

From its blurb:

The present book is based on the conference Software Development and Reality Construction held at Schloß Eringerfeld in Germany, September 25 – 30, 1988. This was organized by the Technical University of Berlin (TUB) in cooperation with the German National Research Center for Computer Science (GMD), Sankt Augustin, and sponsored by the Volkswagen Foundation whose financial support we gratefully acknowledge. The conference was an interdisciplinary scientific and cultural event aimed at promoting discussion on the nature of computer science as a scientific discipline and on the theoretical foundations and systemic practice required for human-oriented system design. In keeping with the conversational style of the conference, the book comprises a series of individual contributions, arranged so as to form a coherent whole. Some authors reflect on their practice in computer science and system design. Others start from approaches developed in the humanities and the social sciences for understanding human learning and creativity, individual and cooperative work, and the interrelation between technology and organizations. Thus, each contribution makes its specific point and can be read on its own merit. But, at the same time, it takes its place as a chapter in the book, along with all the other contributions, to give what seemed to us a meaningful overall line of argumentation. This required careful editorial coordination, and we are grateful to all the authors for bearing with us throughout the slow genesis of the book and for complying with our requests for extensive revision of some of the manuscripts.

There are a few specific reasons I’m excited about this book.

First, it is explicitly about considering software development as a designing activity that is an aspect of computer science. In many U.S. scholarly contexts, there is an educational and research thrust towards removing "user interface design" from both the theoretical roots of computer science and the applied activity of software development. This has been a problem for recent scholarly debates about, for example, the ethics of data science and AI. When your only options are a humanities-oriented "design" field and a computer science field of "algorithms", there is no room to explore the embodied practice of software development, which is where the rubber hits the road.

Second, this book has some fascinating authors. It includes essays from Heinz von Foerster, a second-order cybernetics Original Gangster. It also includes essays from Joseph Goguen, who is perhaps the true link between computer science theory (he was a theorist and programming language designer) and second-order cybernetics (Maturana and Varela, whose work would then influence Winograd and Flores's critique of AI, and also Niklas Luhmann, who shows up in other critiques of AI from a legal perspective). Indeed, Goguen co-authored papers with Varela (1979) formalizing Varela's notions of autonomy and autopoiesis in terms of category theory — a foundation that has had little uptake since. But this is not a fringe production. Donald Knuth, a computer science god-king, has an essay in the book about the process of creating and debugging TeX, the typesetting language. It is perhaps not possible to get deeper into the heart of the embodied practice of technical work than that. His essay begins with a poem from Piet Hein:

The road to wisdom?
Well, it’s plain
and simple to express:
Err
and err
and err again
but less
and less
and less.

The book self-recognizes its interesting intellectual lineage. The following diagram is included in Raeithel’s article “Activity theory as a foundation for design”, which stakes out a Marxist Vygotskian take on design practice. This is positioned as an extreme view, to the (literally, on the page) left of the second-order cybernetics approach, which he positions as culminating in Winograd and Flores.

It is a sweeping, thoughtful book. Any one of its essays could, if more widely read, be a remedy for the kinds of conceptual errors made in today's technical practice which lead to "unethical" or adverse outcomes. For example, Klein and Lyytinen's "Towards a new understanding of data modelling" swiftly rejects notions of "raw data" and instead describes a goal-oriented, hermeneutic data modeling practice. What if "big data" techniques had been built on this understanding?

The book ultimately does not take itself too seriously. It has the whimsical character that the field of computer science could have in those early days, when it was open to conceptual freedom and exploration. The book concludes with a script for a fantastic play that captures the themes and ideas of the conference as a whole:

This goes on for six pages. By the end, Alice discovers that she is in a “cyberworld”:

Oh, what fun it is. It’s a huge game that’s being played – all over this
cyberworld – if this is a world at all. How I wish I was one of them! I
wouldn’t mind being a Hacker, if only I might join – though of course I
should like to be a Cyber Queen, best.

I’ve only scratched the surface of this book. But I expect to be returning to it often in future work.

References

Floyd, C., Züllighoven, H., Budde, R., & Keil-Slawik, R. (Eds.). (1992). Software development and reality construction. Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-642-76817-0

Goguen, J. A., & Varela, F. J. (1979). Systems and distinctions; duality and complementarity. International Journal of General Systems, 5(1), 31-43.

Klein, H. K., & Lyytinen, K. (1992). Towards a new understanding of data modelling. In Software development and reality construction (pp. 203-219). Springer, Berlin, Heidelberg.

Raeithel, A. (1992). Activity theory as a foundation for design. In Software development and reality construction (pp. 391-415). Springer, Berlin, Heidelberg.

Reflections on the Gebru/Google dismissal

I’ve decided to write up some reactions to the dismissal of Dr. Gebru from Google’s Ethical AI team. I have hesitated thus far because the issues revolve around a particular person who has a right to privacy, because of the possible professional consequences of speaking out (this research area is part of my professional field and the parties involved, including Google’s research team and Dr. Gebru, are all of greater stature in it than myself), because there is much I don’t know about the matter (I have no inside view of the situation at Google, for example), because the facts of the case look quite messy to me, with many different issues at stake, and because the ethical issues raised by the case are substantive and difficult. It has also been a time with pressing personal responsibilities and much needed holiday rest.

I'm also very aware that one framing of the event is that it is about diversity and representation within the AI ethics research community. There are some who believe that white (etc.) men are over-represented in the field. Implicitly, if I write publicly about this situation, representing, as it were, myself, I am part of that problem. More on that in a bit.

Despite all of these reasons, I think it is best to write something. The event has been covered by many mainstream news outlets, and Dr. Gebru has been about as public as is possible with her take on the situation. She is, I believe, a public figure in this respect. I’ve written before on related topics and controversies within this field and have sometimes been told by others that they have found my writing helpful. As for the personal consequences to myself, I try to hold myself to a high standard of courage in my research work and writing. I wouldn’t be part of this technology ethics field if I did not.

So what do I think?

First, I think there has been a lot of thoughtful coverage of the incident by others. Here are some links to that work. So far, Hanna and Whittaker's take is the most forceful in its analysis of the meaning of the incident for a "crisis in AI". In their analysis:

  • There is a crisis, which involves:
    • A mismatch between those benefiting from and creating AI — “the corporations and the primarily white male researchers and developers” — and those most likely to be harmed by AI — “BIPOC people, women, religious and gender minorities, and the poor” because of “structural barriers”. A more diverse research community is needed to “[center] the perspectives and experiences of those who bear the harms of these technologies.”
    • The close ties between tech companies and ostensibly independent academic institutions that homogenize the research community, obscure incentives, and dull what might be a more critical research agenda.
  • To address this crisis:
    • Tech workers should form an inclusive union that pushes back on Big Tech for ethical concerns.
    • Funding for independent critical research, with greater guaranteed access to company resources, should be raised through a tax on Big Tech.
    • Further regulations should be passed to protect whistleblowers, prevent discrimination, and ensure consumer privacy and the contestability of AI systems.

These lines of argument capture most of what I've seen more informally in Twitter conversations about this issue. As far as their practical recommendations go, I think a regulatory agency for Big Tech, analogous to the Securities and Exchange Commission for the financial sector, with a federal research agency analogous to the Office of Financial Research, is the right way to go on this. I'm more skeptical about the idea of a tech workers union, but that is not the main focus of this post. This post is about Dr. Gebru's dismissal and its implications.

I think it's best if I respond to the situation with a series of questions.

First, was Dr. Gebru wrongfully terminated from Google? Wrongful termination is when an employer terminates a contract with an employee in retaliation for an anti-discrimination or whistleblowing action. The heart of the matter is that Dr. Gebru’s dismissal “smells like” wrongful termination: Dr. Gebru was challenging Google’s diversity programs internally; she was reporting environmental costs of AI in her research in a way that was perhaps like whistleblowing. The story is complicated by the fact that she was negotiating with Google, with the possibility of resignation as leverage, when she was terminated.

I'm not a lawyer. I have come to appreciate the importance of the legal system rather late in my research career. Part of that appreciation is of how the law has largely anticipated the ethical issues raised by "AI" already. I am surprised, however, that the phrase "wrongful termination" has not been raised in journalism covering Dr. Gebru's dismissal. It seems like the closest legal analog. Could, say, a progressively oriented academic legal clinic help Dr. Gebru sue Google over this? Does she have a case?

These are not idle questions. If the case is to inform better legal protection of corporate AI researchers and other ‘tech workers’, then it is important to understand the limits of current wrongful termination law, whether these limits cover the case of Dr. Gebru’s dismissal, and if not, what expansions to this law would be necessary to cover it.

Second, what is corporate research (and corporate funded research) for? The field of “Ethical AI” has attracted people with moral courage and conviction who probably could be doing other things if they did not care so much. Many people enter academic research hoping that they can somehow, through their work, make the world a better place. The ideal of academic freedom is that it allows researchers to be true to their intellectual commitments, including their ethical commitments. It is probably true that “critical” scholarship survives better in the academic environment. But what is corporate research for? Should we reasonably expect a corporation’s research arm to challenge that corporation’s own agendas?

I’ve done corporate research. My priorities were pretty clear in that context: I was supposed to make the company I worked for look smart. I was supposed to develop new technical prototypes that could be rolled into products. I was supposed to do hard data wrangling and analysis work to suss out what kinds of features would be possible to build. My research could make the world a better place, but my responsibility to my employer was to make it a better place by improving our company’s offerings.

I've also done critical work. Critical work tends not to pay as well as corporate research, for obvious reasons. I've mainly done this from academic positions, or as a concerned citizen writing on my own time. It is striking that Hanna and Whittaker's analysis follows through to the conclusion that critical researchers want to get paid. Their rationale is that society should reinvest the profits of Big Tech companies into independent research that focuses on reducing Big Tech harms. This would be like levying a tax on Big Tobacco to fund independent research into the health effects of smoking. This really does sound like a good idea to me.

But this idea would sound good to me even without Dr. Gebru's dismissal from Google. To conflate the two issues muddies the water for me. There is one other salient detail: some of the work that brought Dr. Gebru to research stardom was her now well-known audits of facial recognition technology developed by IBM and Microsoft. Google happily hired her. I wonder if Google would have minded if Dr. Gebru had continued to do critical audits of Microsoft and IBM from her Google position. I expect Google would have been totally fine with this: one purpose of corporate research could be digging up dirt on your competition! This implies that it's not entirely true that you can't do good critical work from a corporate job. Maybe this kind of opposition research should be encouraged and protected (by making Big Tech collusion to prevent such research illegal).

Third, what is the methodology of AI ethics research? There are two schools of thought in research. There’s the school of thought that what’s most important about research is the concrete research question and that any method that answers the research question will do. Then there’s the school of thought that says what’s most important about research is the integrity of research methods and institutions. I’m of the latter school of thought, myself.

One thing that is notable about top-tier AI ethics research today is the enormously broad interdisciplinary range of its publication venues. I would argue that this interdisciplinarity is not intellectually coherent but rather reflects the broad range of disciplinary and political interests that have been able to rally around the wholly ambiguous idea of “AI ethics”. It doesn’t help that key terms within the field, such as “AI” and “algorithm”, are distorted to fit whatever agenda researchers want for them. The result is a discursive soup which lacks organizing logic.

In such a confused field, it's not clear what conditions research needs to meet in order to be "good". In practice, this means that the main quality control and gatekeeping mechanisms, the publishing conferences, operate through an almost anarchic process of peer review. Adjacent to this review process is the "disciplinary collapse" of social media, op-eds, and whitepapers, which serve various purposes of self-promotion, activism/advocacy, and marketing. There is little in this process to incentivize the publication of work that is correct, or to set the standards of what that would be.

This puts AI ethics researchers in a confusing position. Google, for example, can plausibly set its own internal standards for research quality because the publication venues have not firmly set their own. Was Dr. Gebru's controversial paper up to Google's own internal publication standards, as Google has alleged? Or did they not want their name on it only because it made them look bad? I honestly don't know. But even though I have written quite critically about corporate AI "ethics" approaches before, I actually would not be surprised if a primarily "critical" researcher did not do a solid literature review of the engineering literature on AI energy costs before writing a piece about it, because the epistemic standards of critical scholarship and engineering are quite different.

There is a standard that has been floated, implicitly or explicitly, by some researchers in the AI ethics space. I see Hanna and Whittaker as aligned with this standard and will borrow their articulation. In this view, the purpose of AI ethics research is to surface the harms of AI so that they may be addressed. The reason why these harms are not obvious to AI practitioners already is the lack of independent critical scholarship by women, BIPOC, the poor, and other minorities. Good AI ethics work is therefore work done by these minorities such that it expresses their perspective, critically revealing faults in AI systems.

Personally, I have a lot of trouble with this epistemic standard. According to it, I really should not be trying to work on AI ethics research. I am simply, owing to my subject position, unable to do good work. Dr. Gebru, a Black woman, on the other hand, will always do good work according to this standard.

I want to be clear that I have read some of Dr. Gebru's work and believe it deserves all of its accolades for reasons that are not conditional on her being a Black woman. I also understand why her subject position has primed her to do the kind of work that she has done; she is a trailblazer because of who she is. But if the problem faced by the AI ethics community is that its institutions have blended corporate and academic research interests so much that the incentives are obscure and the playing field benefits the corporations, who have access to greater resources and so on, then this problem will not be solved by allowing corporations to publish whatever they want as long as the authors are minorities. This would be falling into the trap of what Nancy Fraser calls progressive neoliberalism, which incentivizes corporate tokenization of minorities. (I've written about this before.)

Rather, the way to level the playing field between corporate research and independent or academic research is to raise the epistemic standard of the publication venues in a way that supports independent or academic research. Hanna and Whittaker argue that "[r]esearchers outside of corporate environments must be guaranteed greater access to technologies currently hidden behind claims of corporate secrecy, such as access to training data sets, and policies and procedures related to data annotation and content moderation." Nobody, realistically, is going to guarantee outside researchers access to corporate secrets. However, research publication venues (like conferences) can change their standards to mandate open science practices: access to training data sets, reproducibility of results, no dependence on corporate secrets, and so on.

A tougher question for AI ethics research in particular is the question of how to raise epistemic standards for normative research in a way that doesn’t beg the question on interpretations of social justice or devolve into agonistic fracturing on demographic grounds. There are of course academic disciplines with robust methods for normative work; they are not always in communication with each other. I don’t think there’s going to be much progress in the AI ethics field until a sufficient synthesis of feminist epistemology and STEM methods has been worked out. I fear that is not going to happen quickly because it would require dropping some of what’s dear to situated epistemologies of the progressive AI ethics wing. But I may be wrong there. (There was some work along these lines by methodologists some years ago under the label “Human-Centered Data Science”.)

Lastly, whatever happened to the problem of energy costs of AI, and climate change? To me, what was perhaps most striking about the controversial paper at the heart of Dr. Gebru’s dismissal was that it wasn’t primarily about representation of minorities. Rather, it was (I’ve heard–I haven’t read the paper yet) about the energy costs of AI, which is something that, yes, even white men can be concerned about. If I were to give my own very ungenerous, presumptuous, and truly uninformed interpretation of what the goings-on at Google were all about, I would put it this way: Google hired Dr. Gebru to do progressive hit pieces on competitors’ AI products, as she had done to Microsoft and IBM, and to keep the AI ethics conversation firmly in the territory of AI biases. Google has the resources to adjust its models to reduce these harms, get ahead of AI fairness regulation, and compete on wokeness to the woke market segments. But Dr. Gebru’s most recent paper reframes the AI ethics debate in terms of a universal problem of climate change, which has a much broader constituency, and which is actually much closer to Google’s bottom line. Dr. Gebru has the star power to make this story go mainstream, but Google wants to carve out its own narrative here.

It will be too bad if the fallout of Dr. Gebru’s dismissal is a reversion of the AI ethics conversation to the well-trod questions of researcher diversity, worker protection, and privacy regulation, when the energy cost and climate change questions provide a much broader base of interest from which to refine and consolidate the AI ethics community. Maybe we should be asking: what standards should conferences hold researchers to when they make claims about AI energy costs? What are the standards of normative argumentation for questions of carbon emissions, which necessarily transcend individual perspectives, while of course also impacting different populations disparately? These are questions everybody should care about.

We need a theory of collective agency to guide data intermediary design

Last week Jake Goldenfein and I presented some work-in-progress to the Centre for Artificial Intelligence and Digital Ethics (CAIDE) at the University of Melbourne. The title of the event was “Data science and the need for collective law and ethics”; perhaps masked by that title is the shift we are making toward the problem of data intermediaries. I wanted to write a bit about how we’re thinking about these issues.

This project builds on our paper “Data Science and the Decline of Liberal Law and Ethics“, which was accepted by a conference that was then canceled due to COVID-19. In retrospect, it’s perhaps for the best that the conference was canceled. The “decline of liberalism” theme fit the political moment when we wrote the piece, when Trump and Sanders were contenders for the presidency of the U.S., and authoritarian regimes appeared to be providing a new paradigm for governance. Now, Biden is the victor and it doesn’t look like liberalism is going anywhere. We must suppose that our project will take place in a (neo)liberal context.

Our argument in that paper was that many of the ideas animating the (especially Anglophone) liberalism of the legal systems of the U.S., the U.K., and Australia have been inadequate to meaningfully regulate artificial intelligence. This is because liberalism imagines a society of rational individuals appropriating private property through exchanges on a public market and acting autonomously, whereas today we have a wide range of agents with varying levels of bounded rationality, many of which are “artificial” in Herbert Simon’s sense of being computer-enabled firms, tied together in networks of control, not least of these being privately owned markets (the platforms). Essentially, loopholes in liberalism have allowed a quite different form of sociotechnical ordering to emerge, because that political theory did not take into account a number of rather recently discovered scientific truths about information, computing, and control. Our project is to tackle this disconnect between theory and actuality, and to try to discover what’s next in terms of a properly cybernetic political theory that advances the goal of human emancipation.

Picking up where our first paper left off, this has gotten us looking at data intermediaries. This is an area where there has been a lot of work! We were particularly inspired by Mozilla’s Data Futures review of different forms of data intermediary institutions, including data coops, data trusts, data marketplaces, and so on. There is a wide range of ongoing experiments with alternative forms of “data stewardship” or “data governance”.

Our approach has been to try to frame and narrow down the options based on normative principles, legal options, and technical expertise. Rather than asking empirically what forms of data governance have been attempted, we are wondering: what ought the goals of a data intermediary be, given the facts about cybernetic agency in the world we live in? How could such an institution accomplish what has been lost by the inadequacies of liberalism?

Our thinking has led us to the position that what has prevented liberalism from regulating the digital economy is its emphasis on individual autonomy. We draw on the new consensus in privacy scholarship that individual “notice and choice” is an ineffective way to guarantee consumer protection in the digital economy. Not only do bounded rationality constraints prevent consumers from understanding what they are agreeing to, but the ability of firms to control consumers’ choice architecture has dwarfed the meaningfulness of whatever rationality individuals do have. Meanwhile, it is now well understood (perhaps most recently by Pistor (2020)) that personal data is valuable only when it is cleaned and aggregated. This makes the locus of economic agency around personal data necessarily a collective one.

This line of inquiry leads us to a deep question to which we do not yet have a ready answer, which is “What is collective emancipation in the paradigm of control?” Meaning, given what we know about the “sciences of the artificial”, control theory, theory of computation and information, etc., with all of its challenges to the historical idea of the autonomous liberal agent, what does it mean for a collective of individuals to be free and autonomous?

We got a lot of good feedback on our talk, especially from discussant Seth Lazar, who pointed out that there are many communitarian strands of liberalism that we could look to for normative guides. He mentioned, for example, Elizabeth Anderson’s relational egalitarianism. We asked Seth whether he thought that the kind of institution that guaranteed the collective autonomy of its members would have to be a state, and he pointed out that that was a question of whether or not such a system would be entitled to use coercion.

There’s a lot to do on this project. While it is quite heady and philosophical, I do not think that it is necessarily only an abstract or speculative project. In a recent presentation by Vincent Southerland, he proposed that one solution to the problematic use of algorithms in criminal sentencing would be if “the community” of those advocating for equity in the criminal justice system operated their own automated decision systems. This raises an important question: how could and should a community govern its own technical systems, in order to support what in Southerland’s case is an abolitionist agenda? I see this as a very aligned project.

There is also a technical component to the problem. Because of economies of scale and the legal climate, more and more computation is moving onto proprietary cloud systems. Most software now is provided “as a service”. It’s unclear what this means for organizations that would try to engage in self-governance, even when these organizations are autonomous state entities such as municipalities. In some conversations, we have considered what modifications of the technical ideas of the “user agent”, security firewalls and local networks, and hybrid cloud infrastructure would enable collective self-governance. This is the pragmatic “how?” that follows our normative “what?” and “why?” questions, but it is no less important to implementing a prototype solution.

References

Benthall, Sebastian and Goldenfein, Jake, Data Science and the Decline of Liberal Law and Ethics (June 22, 2020). Available at SSRN: https://ssrn.com/abstract=3632577 or http://dx.doi.org/10.2139/ssrn.3632577

Narayanan, A., Toubiana, V., Barocas, S., Nissenbaum, H., & Boneh, D. (2012). A critical look at decentralized personal data architectures. arXiv preprint arXiv:1202.4503.

Pistor, K. (2020). Rule by data: The end of markets?. Law & Contemp. Probs., 83, 101.

Regulating infoglut?

In the 1920s, many people were drawn for the first time to investing in the stock market. It was a time when fortunes were made and lost, but made more often than lost, and so on average investors saw large returns. However, the growth in the value of stocks was driven in part, and especially in the latter half of the decade, by debt. The U.S. Federal Reserve chose to lower interest rates, making it easier to borrow money. When the interest rates on loans were lower than the rates of return on stocks, everybody from households to brokers began to take on debt to reinvest in the stock market. (Brooks, 1999)

After the crash of ’29, which left the economy decimated, there was a reckoning, leading to the Securities Act of 1933 and the Securities Exchange Act of 1934. The latter established the Securities and Exchange Commission (SEC), and established the groundwork for the more trusted financial institutions we have today.

Cohen (2016) writes about a more current economic issue. As the economy changes from being centered on industrial capitalism to informational capitalism, the infrastructural affordances of modern computing and networking have invalidated the background logic of how many regulations are supposed to work. For example, anti-discrimination regulation is designed to prevent decisions from being made based on protected or sensitive attributes of individuals. However, those regulations made most sense when personal information was relatively scarce. Today, when individual activity is highly instrumented by pervasive computing infrastructure, we suffer from infoglut — more information than is good for us, either as individuals or as a society. As a consequence, proxies of protected attributes are readily available for decision-makers and indeed are difficult to weed out of a machine learning system even when market actors fully intend to do so (see Datta et al., 2017). In other words, the structural conditions that enable infoglut erode rights that we took for granted in the absence of today’s network and computing systems.
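
To make the proxy problem concrete, here is a minimal sketch (my own illustration, not taken from Cohen or from Datta et al.) of how a model trained with “fairness through unawareness” still scores people differently by protected group when a correlated proxy feature remains in the data. The variable names and the 90% correlation are assumptions made up for the example.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # A protected attribute and a correlated proxy feature (e.g., a coarse location code).
    protected = rng.integers(0, 2, size=n)
    proxy = np.where(rng.random(n) < 0.9, protected, 1 - protected)  # 90% correlated

    # Outcomes whose historical base rates differ by protected group.
    outcome = (rng.random(n) < 0.3 + 0.4 * protected).astype(int)

    # "Fairness through unawareness": train only on the proxy, never on the attribute.
    model = LogisticRegression().fit(proxy.reshape(-1, 1), outcome)
    scores = model.predict_proba(proxy.reshape(-1, 1))[:, 1]

    # The model's scores still track the protected attribute via the proxy.
    print("mean score, group 0:", scores[protected == 0].mean())
    print("mean score, group 1:", scores[protected == 1].mean())

Dropping the protected column does nothing here; the information re-enters through the proxy, which is the sense in which infoglut erodes the background logic the regulation relies on.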

In an ongoing project with Salome Viljoen, we are examining the parallels between the financial economy and the data economy. These economies are, of course, not fully distinct. However, they are distinguished in part by how they are regulated: the financial economy has over a century of matured regulations defining it and reducing system risks such as those resulting from a debt-financed speculative bubble; the data economy has emerged only recently as a major source of profit with perhaps unforeseen systemic risks.

We have an intuition that we would like to pin down more carefully as we work through these comparisons: that there is something similar about the speculative bubbles that led to the Great Depression and today’s infoglut. In a similar vein to prior work that uses regulatory analogy to motivate new thinking about data regulation (Hirsch, 2013; Froomkin, 2015) and professional codes (Stark and Hoffmann, 2019), we are interested in how financial regulation may be a precedent for regulation of the data economy.

However, we have reason to believe that the connections between finance and personal data are not merely metaphorical. Indeed, finance is an area with well-developed sectoral privacy laws that guarantee the confidentiality of personal data (Swire, 2003); it is also the case that financial institutions are one of the many ways personal data originating from non-financial contexts is monetized. We do not have to get poetic to see how these assets are connected; they are related as a matter of fact.

What is more elusive, and at this point only a hypothesis, is that there is a valid sense in which the systemic risks of infoglut can be conceptually understood using tools similar to those that are used to understand financial risk. Here I maintain an ambition: that systemic risk due to infoglut may be understood using the tools of macroeconomics and hence internalized via technocratic regulatory mechanisms. This would be a departure from Cohen (2016), who gestures more favorably towards “uncertainty”-based regulation that does not attempt probabilistic expectation but rather involves tools such as threat modeling, as used in some cybersecurity practices.

References

Brooks, J. (1999). Once in Golconda: A true drama of Wall Street 1920-1938. John Wiley & Sons.

Cohen, J. E. (2016). The regulatory state in the information age. Theoretical Inquiries in Law, 17(2), 369-414.

Datta, A., Fredrikson, M., Ko, G., Mardziel, P., & Sen, S. (2017, October). Use privacy in data-driven systems: Theory and experiments with machine learnt programs. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1193-1210).

Froomkin, A. M. (2015). Regulating Mass Surveillance as Privacy Pollution: Learning from Environmental Impact Statements. U. Ill. L. Rev., 1713.

Hirsch, D. D. (2013). The glass house effect: Big Data, the new oil, and the power of analogy. Me. L. Rev., 66, 373.

Stark, L., & Hoffmann, A. L. (2019). Data is the new what? Popular metaphors & professional ethics in emerging data culture.

Swire, P. P. (2003). Efficient confidentiality for privacy, security, and confidential business information. Brookings-Wharton Papers on Financial Services, 2003(1), 273-310.

Surden, H. (2007). Structural rights in privacy. SMU L. Rev., 60, 1605.

Double contingency and technology

One of the best ideas to come out of the social sciences is “double contingency”: the fact that two people engaged in communication are in a sense unpredictable to each other. That mutual unpredictability is an element of what it means to be in communication with another.

The most recent articulation of this idea is from Luhmann, who was interested in society as a system of communication. Luhmann is not focused on the phenomenology of the participants in a social system; in a sense, he looks at social systems the way an analyst might look at communications data from a social media site. The social system is the set of messages. Luhmann is an interesting figure in intellectual history in part because he is the one who made the work of Maturana and Varela officially part of the German philosophical canon. That’s a big deal, as Maturana and Varela’s intellectual contributions–around the idea of autopoiesis, for example–were tremendously original, powerful, and good.

“Double contingency” was also discussed, one reads, by Talcott Parsons. This does not come up often because at some point the discipline of Sociology just decided to bury Parsons.

Double contingency comes up in interesting ways in European legal scholarship about technology. Luhmann, a dense German writer, is not read much in the United States, despite his being essentially right about things. Hildebrandt (2019) uses double contingency in her perhaps perplexingly framed argument for the “incomputability” of human personhood. Teubner (2006) makes a somewhat different but related argument about agency, double contingency, and electronic agents.

Hildebrandt and Teubner make for an interesting contrast. Hildebrandt is interested in the sanctity of humanity qua humanity, and in particular in privacy defined as the freedom to be unpredictable. This is an interesting inversion for European phenomenological philosophy. Recall that originally in European phenomenology human dignity was tied to autonomy, but autonomy depended on universalized rationality, with the implication that the most important thing about human dignity was that one followed universal moral rules (Kant). Hildebrandt is almost staking out an opposite position: that Arendtian natality, the unpredictableness of being an original being at birth, is the source of one’s dignity. Paradoxically, Hildebrandt argues both that humanity has this natality essentially, so that claims that predictive technology might truly know the data subject are hubris, and also that the use of these predictive technologies is a threat to natality unless their use is limited by data protection laws that ensure the contestability of automated decisions.

Teubner (2006) takes a somewhat broader and, in my view, more self-consistent position. Grounding his argument firmly in Luhmann and Latour, Teubner is interested in the grounds of legally recognized (as opposed to ontologically, philosophically sanctified) personhood. And, he finds, the conditions of personhood can apply to many things besides humans! “Black box, double contingency, and addressability”, the three fictions on which the idea of personhood depends, can apply to corporations and electronic agents as well as to humans individually. This provides a kind of consistency and rationale for why we allow these kinds of entities to engage in legal contracts with each other. The contract, it is theorized, is a way of managing uncertainty, reducing the amount of contingency in the inherently “double contingency”-laden relationship.

Something of the old Kantian position comes through in Teubner, in that contracts and the law are regulatory. However, Teubner, like Nissenbaum, is ultimately a pluralist. Teubner writes about multiple “ecologies” in which the subject is engaged, and to which they are accountable in different modalities. So, the person, qua economic agent, is addressed in terms of their preferences. But the person, qua legal subject, is addressed in terms of their embodiment of norms. The “whole person” does not appear in any singular ecology.

I’m sympathetic with the Teubnerian view here, perhaps in contrast with Hildebrandt’s view, in the following sense: while there may indeed be some intrinsic indeterminacy to an individual, this indeterminacy is meaningless unless it is also situated in (some) social ecology. However, what makes a person contingent vis-à-vis one ecology is precisely that only a fragment of them is available to that ecology. The contingency with respect to the first ecology is a consequence of their simultaneous presence within other ecologies. The person is autonomous, and hence also unpredictable, because of this multiplied, fragmented identity. Teubner, I think correctly, concludes that there is a limited form of personhood for non-human agents, but as these agents will be even more fragmented than humans, they are only persons in an attenuated sense.

I’d argue that Teubner helpfully backfills how personhood is socially constructed and accomplished, as opposed to guaranteed from birth, in a way that complements Hildebrandt nicely. In the 2019 article cited here, Hildebrandt argues for contestability of automated decisions as a means of preserving privacy. Teubner’s theory suggests that personhood–as participant in double contingency, as a black box–is threatened rather by context collapse, or the subverting of the various distinct social ecologies into a single platform in which data is shared ubiquitously between services. This provides a normative, universalist defense of keeping contexts separate (which in a different article Hildebrandt connects to purpose binding in the GDPR), a defense which is never quite accomplished in, for example, Nissenbaum’s contextual integrity.

References

Hildebrandt, Mireille. “Privacy as protection of the incomputable self: From agnostic to agonistic machine learning.” Theoretical Inquiries in Law 20.1 (2019): 83-121.

Teubner, Gunther. “Rights of non‐humans? Electronic agents and animals as new actors in politics and law.” Journal of Law and Society 33.4 (2006): 497-521.

System 2 hegemony and its discontents

Recent conversations have brought me back to the third rail of different modalities of knowledge and their implications for academic disciplines. God help me. The chain leading up to this is: a reminder of how frustrating it was trying to work with social scientists who methodologically reject the explanatory power of statistics, an intellectual encounter with a 20th century “complex systems” theorist who also didn’t seem to understand statistics, and the slow realization that’s been bubbling up for me over the years that I probably need to write an article or book about the phenomenology of probability, because I can’t find anything satisfying written about it.

The hypothesis I am now entertaining is that probabilistic or statistical reasoning is the intellectual crux, disciplinarily. What we now call “STEM” is all happy to embrace statistics as its main mode of empirical verification. This includes the use of mathematical proof for “exact” or a priori verification of methods. Sometimes the use of statistics is delayed or implicit; there is qualitative research that is totally consistent with statistical methods. But the key to this whole approach is that the fields, in combination, are striving for consistency.

But not everybody is on board with statistics! Why is that?

One reason may be that statistics is difficult to learn and execute. Doing probabilistic reasoning correctly is at times counter-intuitive. That means, quite literally, that it can make your head hurt to think about it.
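
As one standard illustration of that counter-intuitiveness (my example, not drawn from the literature discussed below): a test that is 99% accurate for a condition affecting 1% of the population yields, on a positive result, only a 50% chance that the condition is present, which snap judgment typically overestimates badly. A few lines of Python make the Bayes’ rule arithmetic explicit.

    # Bayes' rule: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
    prevalence = 0.01       # P(condition): 1% of the population
    sensitivity = 0.99      # P(positive | condition)
    false_positive = 0.01   # P(positive | no condition)

    p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
    posterior = sensitivity * prevalence / p_positive
    print(posterior)  # 0.5 -- not the ~99% that intuition tends to report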

There is a lot of very famous empirical cognitive psychology that has explored this topic in depth. The heuristics and biases research program of Kahneman and Tversky was critical for showing that human behavior rarely accords with decision-theoretic models of mathematical, probabilistic rationality. An intuitive, “fast”, prereflective form of thinking (“System 1”) is capable of making snap judgments but is prone to biases such as the availability heuristic and the representativeness heuristic.

A couple of general comments can be made about System 1. (These are taken from Tetlock’s review of this material in Superforecasting.) First, a hallmark of System 1 is that it takes whatever evidence it is working with as given; it never second-guesses it or questions its validity. Second, System 1 is fantastic at providing verbal rationalizations and justifications of anything that it encounters, even when these can be shown to be disconnected from reality. Many colorful studies of split-brain cases, but also many other lab experiments, show the willingness people have to make up stories to explain anything, and their unwillingness to say, “this could be due to one of a hundred different reasons, or a mix of them, and so I don’t know.”

The cognitive psychologists also describe a System 2 cognitive process that is more deliberate and reflective. Presumably, this is the system that is sometimes capable of statistical or otherwise logical reasoning. And a big part of statistical reasoning is questioning the source of your evidence. A robust application of System 2 reasoning is capable of overcoming System 1’s biases. At the level of institutional knowledge creation, the statistical sciences are composed mainly of the formalized, shared results of System 2 reasoning.

Tetlock’s work, from Expert Political Judgment and on, is remarkable for showing that deference to one or the other cognitive system is to some extent a robust personality trait. Famously, those of the “hedgehog” cognitive style, who apply System 1 and a simplistic theory of the world to interpret everything they experience, are especially bad at predicting the outcomes of political events (what are certainly the results of ‘complex systems’), whereas the “fox” cognitive style, which is more cautious about considering evidence and coming to judgments, outperforms them. It seems that Tetlock’s analysis weighs in favor of System 2 as a way of navigating complex systems.

I would argue that there are academic disciplines, especially those grounded in Heideggerian phenomenology, that see the “dominance” of institutions (such as academic disciplines) that are based around accumulations of System 2 knowledge as a problem or threat.

This reaction has several different guises:

  • A simple rejection of cognitive psychology, which has exposed the System 1/System 2 distinction, as “behaviorism”. (This obscures the way cognitive psychology was a major break away from behaviorism in the 1950s.)
  • A call for more “authentic experience”, couched in language suggesting ownership or the true subject of one’s experience, contrasting this with the more alienated forms of knowing that rely on scientific consensus.
  • An appeal to originality: System 2 tends to converge; my System 1 methods can come up with an exciting new idea!
  • The interpretivist methodological mandate for anthropological sensitivity to the “emic”, or directly “lived”, experience of research subjects. This mandate sometimes blurs several individually valid motivations, such as: when emic experience is the subject matter in its own right, but (crucially) with the caveat that the results are not generalizable; when emic sensitivity is identified via the researcher’s reflexivity as a condition for research access; or when the purpose of the work is to surface or represent otherwise underrepresented views.

There are ways to qualify or limit these kinds of methodologies or commitments that make them entirely above reproach. However, under these limits, their conclusions are always fragile. According to the hegemonic logic of System 2 institutions, a consensus of those thoroughly considering the statistical evidence can always supersede the “lived experience” of some group or individual. This is, at the methodological level, simply the idea that while we may make theory-laden observations, when those theories are disproved, those observations are invalidated as being influenced by erroneous theory. Indeed, mainstream scientific institutions take as their duty this kind of procedural objectivity. There is no such thing as science unless a lot of people are often being proven wrong.

This provokes a great deal of grievance. “Who made scientists, an unrepresentative class of people and machines disconnected from authentic experience, the arbiter of the real? Who are they to tell me I am wrong, or my experiences invalid?” And this is where we start to find trouble.

Perhaps most troubling is how this plays out at the level of psychodynamic politics. To have one’s lived experiences rejected, especially those lived experiences of trauma, and especially when those experiences are rejected wrongly, is deeply disturbing. One of the more mighty political tendencies of recent years has been the idea that whole classes of people are systematically subject to this treatment. This is one reason, among others, for influential calls for recalibrating the weight given to the experiences of otherwise marginalized people. This is what Furedi calls the therapeutic ethos of the Left. This is slightly different from, though often conflated with, the idea that recalibration is necessary to allow in more relevant data that was being otherwise excluded from consideration. This latter consideration comes up in a more managerialist discussion of creating technology that satisfies diverse stakeholders (…customers) through “participatory” design methods. The ambiguity of the term “bias”–does it mean a statistical error, or does it mean any tendency of an inferential system at all?–is sometimes leveraged to accomplish this conflation.

It is in practice very difficult to disentangle the different psychological motivations here. This is partly because they are deeply personal and mixed even at the level of the individual. (Highlighting this is why I have framed this in terms of the cognitive science literature). It is also partly because these issues are highly political as well. Being proven right, or wrong, has material consequences–sometimes. I’d argue: perhaps not as often as it should. But sometimes. And so there’s always a political interest, especially among those disinclined towards System 2 thinking, in maintaining a right to be wrong.

So it is hypothesized (perhaps going back to Lyotard) that at an institutional level there’s a persistent heterodox movement that rejects the ideal of communal intellectual integrity. Rather, it maintains that the field of authoritative knowledge must contain contradictions and disturbances of statistical scientific consensus. In Lyotard’s formulation, this heterodoxy seeks “legitimation by paralogy”, which suggests that its telos is at best a kind of creative intellectual emancipation from restrictive logics, generative of new ideas, but perhaps at worst a heterodoxy for its own sake.

This tendency has an uneasy relationship with the sociopolitical motive of a more integrated and representative society, which is often associated with the goal of social justice. If I understand these arguments correctly, the idea is that, in practice, legitimized paralogy is a way of giving the underrepresented a platform. This has the benefit of visibly increasing representation. Here, paralogy is legitimized as a means of affirmative action, but not as a means of improving system performance objectively.

This is a source of persistent difficulty and unease, as the paralogical tendency is never capable of truly emancipating itself, but rather, in its recuperated form, is always-already embedded in a hierarchy that it must deny to its initiates. Authenticity is subsumed, via agonism, to a procedural objectivity that proves it wrong.

Looking for references: phenomenology of probability

A number of lines of inquiry have all been pointing in the same direction for me. I now have a question and I’m on the lookout for scholarly references on it. I haven’t been able to find anything useful through my ordinary means.

I’m looking for a phenomenology of probability.

Hopefully the following paragraphs will make it clearer what I mean.

By phenomenology, I mean a systematic account (-ology) of lived experience (phenomen-). I’m looking for references especially in the “cone” of influences on Merleau-Ponty, and the “cone” of those influenced by Merleau-Ponty.

By probability, I mean the whole gestalt of uncertainty, expectation, and realization that is normally covered by the mathematical subject. The simplest example is the experience of tossing a coin. But there are countless others; this is a ubiquitous kind of phenomenon.

There is at least some indication that this phenomenon is difficult to provide a systematic account of. Probabilistic reasoning is not a very common skill. Perhaps the best account of this that I can think of is in Philip Tetlock’s Superforecasting, in which he reports that a large proportion of people are able to intuit only two kinds of uncertainty (“probably will happen” or “probably won’t happen”), while another portion can reason in three (“probably will”, “probably won’t”, and “I don’t know”). For some people, asking for graded expectations (“I think there’s a 30% chance it will happen”) is more or less meaningless.

Nevertheless, all the major quantitative institutions–finance, telecom, digital services, insurance, the hard sciences, etc.–thrive on probabilistic calculations. Perhaps that is where the skill is concentrated.

The other consideration leading towards the question of a phenomenology of probability is the question of the interpretation of mathematical probability theory. As is well known, the same mathematics can be interpreted in multiple ways. There is an ‘objective’, frequentist interpretation, according to which probability is the frequency of events in the world. But with the rise of machine learning, ‘subjectivist’ or Bayesian interpretations became much more popular. Bayesian probability is a calculus of rational subjective expectations, and of the transformation of those expectations according to new evidence.
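
To make the subjectivist reading concrete in the coin-tossing case mentioned above, here is a minimal sketch (my own, not drawn from any reference cited here) of Bayesian updating with the standard Beta-Bernoulli conjugate pair; the prior parameters and the coin’s true bias are assumptions made up for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    # Subjective prior over the coin's bias: Beta(1, 1), i.e. no initial expectation either way.
    alpha, beta = 1.0, 1.0

    # Observe twenty tosses of a coin whose true (unknown to the agent) bias is 0.7.
    tosses = rng.random(20) < 0.7

    # Conjugate update: each head increments alpha, each tail increments beta.
    alpha += tosses.sum()
    beta += (~tosses).sum()

    # The posterior mean is the agent's updated subjective expectation of heads.
    print("posterior mean:", alpha / (alpha + beta))

On the frequentist reading, 0.7 would be a property of the coin itself, estimated by the long-run frequency of heads; on the subjectivist reading, the Beta distribution is a state of the agent’s expectation, revised toss by toss.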

So far in my studies and research, I’ve never encountered a synthesis of Merleau-Pontean phenomenology with the subjectivist interpretation of probability. This is somewhat troubling.

Is there a treatment of this anywhere?