Luhmann, Social Systems, Ch. 1 § I

Niklas Luhmann (1927-1998) was a German sociologist who aimed to understand society in terms of systems theory.

I am reading Luhmann’s Social Systems (1995) because I have a hunch that this theory is relevant to my research. This post contains notes about Chapter 1, sections I-II.

Often, scientists need to sacrifice intelligibility for accuracy. Luhmann is a scientist. He is unapologetic about this. He opens his “Instead of a Preface to the English Edition” (actual title) with:

“This is not an easy book. It does not accommodate those who prefer a quick and easy read, yet do not want to die without a taste of systems theory. This holds for the German text, too. If one seriously undertakes to work out a comprehensive theory of the social and strives for sufficient conceptual precision, abstraction and complexity in the conceptual infrastructure are unavoidable.”

Why bother reading such a difficult book? Why be a scientist and study social systems?

One reason to study society scientifically is to design and build better smart digital infrastructure.

Most people designing and building smart digital infrastructure today are not studying Luhmann. They are studying computer science. That makes sense: computer science is a science of smart digital artifacts. What has become increasingly apparent in recent years is that smart digital infrastructure is having an impact on society, and that the infrastructure is often mismatched to its social context. These mismatches are often considered to be a problem. Hence, a science of society might inform better technical designs.

§I

Chapter 1 opens with:

The following considerations assume that there are systems. Thus they do not begin with epistemological doubt. They also do not advocate a “purely analytical relevance” for systems theory. The most narrow interpretation of systems theory as a mere method of analyzing reality is deliberately avoided. Of course, one must never confuse statements with their objects; one must realize that statements are only statements and that scientific statements are only scientific statements. But, at least in systems theory, they refer to the real world. Thus the concept of system refers to something that is in reality a system and thereby incurs the responsibility of testing its statements against reality.

This is a great opening. It is highly uncommon for work in the social sciences to begin this way. Today, social science is almost always taught in a theoretically pluralistic way. The student is taught several different theories of the same phenomenon. As they specialize into a social scientific discipline, they are taught to reproduce that discipline by citing its canonical thinkers and applying its analytical tools to whatever new phenomenon presents itself.

Not so with Luhmann. Luhmann is trying to start from a general scientific theory — systems theory — that in principle applies to physical, biological, and other systems, and to apply it to social systems. He cites Talcott Parsons, but also Herbert Simon, Ludwig von Bertalanffy, and Humberto Maturana. Luhmann is not interested in reproducing a social scientific field; he is interested in reproducing the scientific field of systems theory in the domain of social science.

So the book is going to:

  • Be about systems theory in general
  • Address how social systems are a kind of system
  • Address how social systems relate to other kinds of system

There is a major challenge to studying this book in 2021. That challenge is that “systems theory” is not a mainstream scientific field today, and the people who do talk about “systems” normally do so in the context of “systems engineering”, for example to study and design industrial processes. They have their own quantitative disciplines and methodologies that have little to do with sociology. Computer scientists, meanwhile, will talk about software systems and information systems, but normally in a way that has nothing to do with “systems theory” or systems engineering in a mechanical sense. Hazarding a guess, I would say that this has something to do with the cybernetics/AI split in the second half of the 20th century.

There is now a great deal of convergence in mathematical notation and concepts between different STEM fields, in part because much of the computational tooling has become ubiquitous. Computational social science has made great strides in recent years as a result. But many computational social science studies apply machine learning techniques to data generated by a social process, despite the fact that nobody believes the model spaces used in machine learning contain a veridical model of society.

This has led to many of the ethical and social problems with “AI”. As a brief example, it is well known that estimating fitness for employment or parole via regression on personal information is, even when sensitive categories are excluded, likely to reproduce the societal biases extant in the data through proxy variables in the feature set. A more subtle causal analysis can perhaps do better, but the way causality works at a societal level is not straightforward. See Lily Hu’s discussion of this topic for some deeper analysis. Understanding the possible causal structures of society, including the possibility of “bottom-up” emergent effects and “downward causation” effects from social structures, would potentially improve the process of infrastructure design, whether manual or automated (via machine learning).
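To make the proxy mechanism concrete, here is a minimal synthetic sketch (all variables and coefficients are invented for illustration; this is not a model of any deployed system):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # sensitive attribute (excluded from features)
proxy = group + rng.normal(scale=0.5, size=n)  # e.g., a zip-code-like variable correlated with group
skill = rng.normal(size=n)                     # legitimate signal
# Historical outcomes carry a bias against group 1:
outcome = skill - 1.0 * group + rng.normal(scale=0.5, size=n)

X = np.column_stack([skill, proxy])            # note: `group` itself is NOT a feature
model = LinearRegression().fit(X, outcome)
pred = model.predict(X)
print(f"mean predicted score, group 0: {pred[group == 0].mean():.2f}")
print(f"mean predicted score, group 1: {pred[group == 1].mean():.2f}")

The gap in predicted scores persists because the regression learns to use the proxy as a stand-in for the excluded category.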

With this motive in mind, we will continue to slowly analyze and distill Luhmann in search of relevant insights.

For Luhmann, “systems theory … claims universal validity for everything that is a system.” Implicitly, systems theory has perfect internal validity. Luhmann originally expresses this theory in German. But it really feels like there should be a mathematization of this work. He does not cite one yet, but the spoiler is that he’s eventually going to use George Spencer-Brown’s Laws of Form. For reasons I may get into later if I continue with this project, I believe that’s an unfortunate choice. I may have to find a different way to do the mathematization.
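For the curious, Spencer-Brown’s “primary arithmetic” rests on just two initial equations; here is a sketch in LaTeX notation (my own gloss of the standard formulation, writing the mark of distinction as an overbar):

% Spencer-Brown's two initial equations; \cross{} is the "mark" of distinction.
\newcommand{\cross}[1]{\overline{\,#1\,}}
\begin{align*}
\cross{}\;\cross{} &= \cross{}
  && \text{(law of calling: a distinction made again is the same distinction)} \\
\cross{\cross{}} &= \phantom{\cross{}}
  && \text{(law of crossing: crossing twice returns to the unmarked state)}
\end{align*}

The second equation has a blank right-hand side on purpose: the result of crossing twice is the unmarked state, written as nothing at all.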

In the text at hand, though, rather than offering a formalism, Luhmann follows through on his commitment to the existence of real systems by inferring some necessary consequences of that first principle. He is not content with a mathematical representation; systems theory must have “a real reference to the world”; “it is forced to treat itself as one of its objects in order to compare itself with others among those objects”. The crux is that systems theory, being a system itself, has to be able to take itself into account from the start. Hence, the commitment to real systems entails the realness of self-referential systems. “This means … there are systems that have the ability to establish relations with themselves and to differentiate these relations from relations with their environment.”

We are still in §I, which is itself a sort of preamble situating systems theory as a scientific theory, but already Luhmann is exposing the substance of the theory; in doing so, he demonstrates how truly self-referential — and consistently so — systems theory is. As he’ll say more definitively later, one essential feature of a system is that it is different from its environment. A system has, in effect, an “inside” and an “outside”. Outside the system is the environment. The part of the system that separates the inside of the system from its environment is the boundary. This binary aspect of the system (the system and the not-the-system, i.e. the environment) clarifies the logic of ‘self-reference’. Self-referential systems differentiate between themselves and not-themselves.

So far, you have perhaps noted that Luhmann is a terribly literal writer. It is no surprise that the focus of his book, Social Systems, is that subset of systems that are “social”. What are these systems like? What makes them different from organisms (also systems), or systems of machines? Luhmann eschews metaphor — a bold choice. “[W]e do not choose the shortcut of analogy, but rather the longer path of generalization and respecification.” We don’t want to be misled by analogies.

“Above all, we will have to emphasize the nonpsychic character of social systems.”

That’s something Luhmann says right after saying he doesn’t want to use metaphors when talking about social systems. What can this possibly mean? It means, among other things, that Luhmann is not interested in anybody’s subjective experience of a society as an account of what a social system is. A “psychic system”, like my lived experience, or yours, is not the same thing as the social system — though, as we will later read, psychic systems are “structurally coupled” with the social system in important ways. Rather, the social system is constituted, objectively, by the communications between people. This makes it a more ready object of science.

It is striking to me that Luhmann is not more popular among analysts of social media data, because at least superficially he seems to be arguing, in effect, that the social system of Twitter is not the system of Twitter’s users. Rather, it’s the system of the tweets. That’s one way of looking at things, for sure. Somewhat abashedly, I will say that Luhmann is an interesting lens through which to view Weird Twitter, which you may recall as a joke-telling subculture of Twitter that was popular before Former President Trump made Twitter much, much weirder. I think there are some interesting comparisons to be drawn between Anthony Cohen’s theory of the symbolic construction of community, complete with symbolic boundary, and Luhmann’s notion of the boundary of a social system. But I digress.

Luhmann hasn’t actually used the word “communication” yet. He instead says “social contact”. “Every social contact is understood as a system, up to and including society as the inclusion of all possible contacts.” Possible contacts. Meaning that the system is defined in part by its unrealized but potential states. It can be stochastic; it can change its internal states to adapt to the external environment. “In other words, the general theory of social systems claims to encompass all sociology’s potential topics and, in this sense, to be a universal sociological theory.” Universal sociological theories are terribly unpopular these days. But Luhmann attempted it. Did he succeed?

“Yet, a claim to universality is not a claim to exclusive correctness, to the exclusive validity, and thus necessity (noncontingency), of one’s own account.” Nobody claiming to have a universal theory does this. Indeed, a theory learns about its own contingency through self-reference. So, social systems theory discovers its European origins, for example, as soon as it considers itself. What then? At that point, one “distinguish[es] between claims of universality and claims to exclusivity”, which makes perfect sense, or “by recognizing that structural contingencies must be employed as an operative necessity, with the consequence that there is a constant contingency absorption through the successes, practices, and commitments in the scientific system.”

Contingency absorption is a nice idea. It is perhaps associated with the idea of abstraction: as one accumulates contingent experiences and abstracts from them, one discovers necessary generalities which are true for all contingent experiences. This has been the core German philosophical method for centuries, and it is quite powerful. We seem to have completely forgotten it in the American academic system. That is why the computer scientists have taken over everything. They have a better universalizing science than the sociologists do. Precisely for that reason, we are seeing computational systems in constant and irksome friction with society. American sociologists need to stop insisting on theoretical pluralism and start developing a universal sociology that is competitive, in terms of its universality, with computer science, or else we will never get smart infrastructure and AI ethics right.

References

Luhmann, N. (1995). Social systems. Stanford University Press.

Social justice, rationalism, AI ethics, and libertarianism

There has been a lot of drama on the Internet about AI ethics, and that drama is in the mainstream news. A lot of this drama is about the shakeup in Google’s AI Ethics team. I’ve already written a little about this. I’ve been following the story since, and I would adjust my emphasis slightly. I think the situation is sadder than I thought at first. But my take (and I’m sticking to it) is that what’s missing from the dominant narrative in that story, which holds that it is about race and gender representation in tech (a view quite well articulated in academic argot by Tao and Varshney (2021), by the way), is an analysis of the corporate firm as an organization, of what it means to be an employee of a firm, and of the relationship between firms and artificial intelligence.

These questions about the corporate firm are not topics that get a lot of traction on the Internet. The topics that get a lot of traction on the Internet are race and gender. I gather that this is something anybody with a blog (let alone anybody publishing news) discovers. I write this blog for an imagined audience of about ten people. It is mainly research notes to myself. But for some reason, this post about racism got 125 views this past week. For me, that’s a lot of views. Who are those people?

The other dramatic thing going on in my corner of the Internet this past week is also about race and gender, and also maybe AI ethics. It is the reaction to Cade Metz’s NYT article “Silicon Valley’s Safe Space”, which is about the blog Slate Star Codex (SSC), rationalist subculture, libertarianism and … racism and sexism.

I learned about this kerfuffle in a backwards way. I learned about it through Glen Weyl’s engagement with SSC about technocracy, which I suppose he bumped to ride the shockwave created by the NYT article. In 2019, Weyl posted a critique of “technocracy” which was also a rather pointed attack on the rationalism community, in part because of its connection to “neoreaction”. SSC responded rather adroitly, in my opinion; Weyl replied. It’s an interesting exchange about political theory.

I follow Weyl on Twitter because I disagree with him rather strongly on niche topics in data regulation. As a researcher, I’m all about these niche topics in data regulation; I think they are where the rubber really hits the road on AI ethics. Weyl has published the view that consumer internet data should be treated as a form of labor. In my corner of technology policy research, which is quite small in terms of its Internet footprint, we think this is nonsense; it is what Salome Viljoen calls a propertarian view of data. The propertarian view of data is, for many important reasons, wrong. “Markets are never going to fix the AI/data regulation problem. Go read some Katharina Pistor, for Christ’s sake!” That’s the main thing I’ve been waiting to say to Weyl, who has a coveted Microsoft Research position, a successful marketing strategy for his intellectual brand, perfect institutional pedigree, and so on.

Which I mean to his credit: he’s a successful public intellectual. This is why I was surprised to see him tweeting about rationalist subculture, which is not, to me, a legitimate intellectual topic, sanctioned as part of the AI/tech policy/whatever research track. My experiences with rationalism have all been quite personal and para-academic. It is therefore something of a context collapse — the social-media-induced bridging of social spheres — for me personally (Marwick and Boyd, 2011; Davis and Jurgenson, 2014).


Context collapse is what it’s all about, isn’t it? One of the controversies around the NYT piece about SSC was that the NYT reporter, Metz, was going to reveal the pseudonymous author of SSC, “Scott Alexander”, as the real person Scott Siskind. This was a “doxxing”, though the connection was easy to make for anybody looking for it. Nevertheless, Siskind initially had a strong personal stake in his own anonymity as a writer. Siskind is, professionally, a psychotherapist, which is a profession with very strong norms around confidentiality and its therapeutic importance. Metz’s article is about this. It is also about race and gender — in particular the ways in which there is a therapeutic need for a space in which to discuss race and gender, as well as other topics, seriously, but also freely, without the major social and professional consequences that have come to be associated with it.

Smart people are writing about this, and, despite being a “privacy scholar”, I’m not sure I can say much that is smarter. Will Wilkinson’s piece is especially illuminating. His is a defense of journalism and a condemnation of what, I’ve now read, was an angry Internet mob that went after Metz in response to this SSC doxxing. It is also an unpacking of Siskind’s motivations based on his writing, a diagnosis of sorts. Elizabeth Spiers has a related analysis. A theme of these arguments against SSC and the rationalists is, “You weren’t acting so rationally this time, were you?”

I get it. The rationalists are, by presenting themselves at times as smarter-than-thou, asking for this treatment. Certainly in elite intellectual circles, the idea that participation in a web forum should give you advanced powers of reason that are as close as we will ever get to magic is pretty laughable. What I think I can say from personal experience, though, is that elite intellectuals seriously misunderstand popular rationalism if they think that it’s about them. Rather, popular rationalism is a non-elite social movement that’s often for people with non-elite backgrounds and problems. Their “smarter-than-thou” was originally really directed at other non-elite cultural forms, such as Christianity (!). I think this is widely missed.

I say this with some anecdata. I lived in Berkeley for a while in graduate school and was close with some people who were bona fide rationalists. I had an undergraduate background in cognitive science from an Ivy League university and was familiar with the heuristics-and-biases program and Bayesian reasoning. I had a professional background working in software. I should have fit in, right? So I volunteered twice at the local workshop put on by the Center for Applied Rationality to see what it was all about.

I noticed, as one does, that it’s mostly white men at these workshops, and, when asked for feedback, I pointed this out. Eager to make use of the sociological skills I was learning in grad school, I pointed out that if they did not bring in more diversity early on, then because of homophily effects they might never reach a diverse audience.
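That worry is easy to make concrete with a toy simulation (entirely hypothetical numbers and an assumed recruitment rule, not a model of CFAR): each newcomer arrives through an existing member and, with some probability, resembles their recruiter rather than the wider population.

import random

def simulate(initial_members, homophily, base_rate, n_new, seed=0):
    """Each newcomer is introduced by a random existing member and, with
    probability `homophily`, matches that member's group; otherwise they
    are drawn from the wider population at `base_rate`."""
    rng = random.Random(seed)
    members = list(initial_members)  # 1 = majority group, 0 = everyone else
    for _ in range(n_new):
        recruiter = rng.choice(members)
        if rng.random() < homophily:
            members.append(recruiter)
        else:
            members.append(1 if rng.random() < base_rate else 0)
    return sum(members) / len(members)

# Founding group: 20 members, 90% majority; surrounding population: 50/50.
for h in (0.0, 0.5, 0.9):
    share = simulate([1] * 18 + [0] * 2, homophily=h, base_rate=0.5, n_new=1000)
    print(f"homophily={h}: majority share after 1000 recruits = {share:.2f}")

With no homophily the founding composition washes out quickly; with strong homophily it persists long after the group has grown fifty-fold.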

At the time, the leadership of CFAR told me something quite interesting. It was that they had looked at their organizational goals and capacities and decided that where they could make the most impact was on teaching the skills of rational thought to smart people from, say, the rural midwest U.S.A. who would otherwise not get the exposure to this kind of thinking or what I would call a community of practice around it. Many of these people (much like Elizabeth Spiers, according to her piece) come from conservative and cloistered Christian backgrounds. Yudkowsky’s Harry Potter and the Methods of Rationality is their first exposure to Bayesian reasoning. They are often the best math students in their homogeneous hometown, and finding their way into an engineering job in California is a big deal, as is finding a community that fills an analogous role to organized religion but does not seem so intellectually backwards. I don’t think it’s accidental that Julia Galef, who co-founded CFAR, started out in intellectual atheist circles before becoming a leader in rationalism. Providing an alternative culture to Christianity is largely what popular rationalism is about.

From this perspective, it makes more sense why Siskind has been able to cultivate a following by discussing cultural issues from a centrist and “altruistic” perspective. There’s a population in the U.S. that grew up in conservative Christian settings, now makes a living in a booming technology sector whose intellectual principles are at odds with those of their upbringing, is trying to “do the right thing” and, being detached from political institutions or power, turns to the question of philanthropy, codified into Effective Altruism. This population is largely composed of white guys who may truly be upwardly mobile because they are, relative to where they came from, good at math. The world they live in, which revolves around AI, is nothing like the one they grew up in. These same people are regularly confronted by a different ideology, a form of left wing progressivism, which denies their merit, resents their success, and considers them a problem, responsible for the very AI harms that they themselves are committed to solving. If I were one of them, I, too, would want to be part of a therapeutic community where I could speak freely about what was going on.


This is several degrees removed from libertarian politics, which I now see as the thread connecting Weyl to all this. Wilkinson makes a compelling case that contemporary rationalism originated in Tyler Cowen’s libertarian economist blogging and the intellectual environment at George Mason University. That environment spun out Robin Hanson’s Overcoming Bias blog, which in turn spun out Yudkowsky’s LessWrong forum, which is where popular rationalism incubated. Weyl is an east coast libertarian public intellectual and it makes sense that he would engage other libertarian public intellectuals. I don’t think he’s going to get very far picking fights on the Internet with Yudkowsky, but I could be wrong.

Weyl’s engagement with the rationalist community does highlight for me two other missing elements in the story-as-told-so-far, from my readings on it. I’ve been telling a story partly about geography and migration. I think there’s also an element of shifting centers of cultural dominance. Nothing made me realize that I am a parochial New Yorker like living in California for five years. Rationalism remains weird to me because it is, today, a connection between Oxford utilitarian philosophers, the Silicon Valley nouveau riche, and to some extent Washington, D.C.-based libertarians. That is a wave of culture bypassing the historical intellectual centers of the northeast U.S. Ivy League universities, which for much of America’s history dominated U.S. politics.

To some extent, this speaks to the significance of the NYT story as well. It was not the first popular article about rationalists; Metz mentions the TechCrunch article about neoreactionaries (I’ll get to that) but not the Sam Frank article in Harpers, “Come With Us If You Want To Live” (2015), which is more ethnographic in its approach. I think it’s a better article. But the NYT has a different audience and a different standard for relevance. NYT is not an intellectual literary magazine. It is the voice of New York City, once the Center of the Universe. New York City’s perspective is particularly weighty, relevant, objective, and powerful because of its historic role as a global financial center and marketing center. When the NYT notices something, for a great many people, it becomes real. NYT is at the center of a large public sphere with a specific geographic locus, in a way that some blogs and web forums are not. So whether it was justified or not, Metz’s doxxing of Siskind was a significant shift in what information was public, and to whom. Part of its significance is that it was an assertion of cultural power by an institution tied to old money in New York City over a beloved institution of new money in Silicon Valley. In Bourdieusian terms, the article shifted around social and cultural capital in a big way. Siskind was forced to make a trade by an institution more powerful than him. There is a violence to that.


This force of institutional power is perhaps the other missing element in this story. Wilkinson and Frank’s pieces remind me: this is about libertarianism. Weyl’s piece against technocracy is also about libertarianism, or maybe just liberalism. Weyl is arguing that rationalists, as he understands them, are libertarians but not liberals. A “technocrat” is somebody who wants to replace democratic governance mechanisms, which depend on pluralistic discourse, with an expert-designed mechanism. Isn’t this what Silicon Valley does? Build Facebook and act like it’s a nation? Weyl, in my reading, wants an engaged pluralistic public sphere. He is, he reveals later, really arguing with himself, reforming his own views. He was an economist, coming up with mathematical mechanisms to improve social systems through “radical exchange”; now he is a public intellectual who has taken a cultural turn and called AI an “ideology”.

On the other end of the spectrum, there are people who actually would, if they could, build an artificial island and rule it via computers like little lords. I guess Peter Thiel, who plays a somewhat arch-villain role in this story, is like this. Thiel does not like elite higher education and the way it reproduces the ideological conditions for a pluralistic democracy. This is presumably why he backs Curtis Yarvin, the “neoreactionary” writer and “Dark Enlightenment” thinker. Metz goes into detail about this, and traces a connection between Yarvin and SSC; there are leaked emails about it. To some people, this is the real story. Why? Because neoreaction is racist and sexist. This, not political theory, I promise you, is what is driving the traffic. It’s amazing Metz didn’t use the phrase “red pill” or “alt-right”, because that’s definitely the narrative being extended here. With Trump out of office and Amazon shutting down Parler’s cloud computing, we don’t need to worry about the QAnon nutcases (who were, if I’m following correctly, a creation of the Mercers), but what about the right-wing elements in the globally powerful tech sector, because… AI ethics! There’s no escape.

Metz writes:

Slate Star Codex was a window into the Silicon Valley psyche. There are good reasons to try and understand that psyche, because the decisions made by tech companies and the people who run them eventually affect millions.

And Silicon Valley, a community of iconoclasts, is struggling to decide what’s off limits for all of us.

At Twitter and Facebook, leaders were reluctant to remove words from their platforms — even when those words were untrue or could lead to violence. At some A.I. labs, they release products — including facial recognition systems, digital assistants and chatbots — even while knowing they can be biased against women and people of color, and sometimes spew hateful speech.

Why hold anything back? That was often the answer a Rationalist would arrive at.

Metz’s article has come under a lot of criticism for drawing sweeping thematic links between SSC, neoreaction, and Silicon Valley with very little evidence. Noah Smith’s analysis shows how weak this connection actually is. Silicon Valley is, by the numbers, mostly left-wing, and mostly, by the numbers, not reading rationalist blogs. Thiel, and maybe Musk, are noteworthy exceptions, not the general trend. What does any of this have to do with, say, Zuckerberg? Not much.

The trouble is that if the people in Silicon Valley are left-wing, then there’s nobody to blame for racist and sexist AI. Where could racism and sexism in AI possibly come from, if not some collective “psyche” of the technologists? Better, more progressive leaders in Silicon Valley, the logic goes, would lead to better social outcomes. Pluralistic liberalism and proper demographic representation would, if not for bad apples like Thiel, steer the AI labs and the big tech companies that use their products towards equitability and justice.

I want to be clear: I think that affirmative action for under-represented minorities (URMs) in the tech sector is a wonderful thing, and that improving corporate practices around their mentorship, etc. is a cause worth fighting for. I’m not knocking any of that. But I think the idea that this alone will solve the problems of “AI ethics” is a liberal or libertarian fantasy. This is because assuming that the actions of a corporation will have the politics of its employees is a form of ecological fallacy. Corporations do not work for their employees; they work, legally and out of fiduciary duty, for their shareholders. And the AI systems operated at the grand social scales that we are talking about are not controlled by any one person; they are created and operated corporately.

In my view, what Weyl (who I probably agree with more than I don’t), the earlier libertarian bloggers like Hanson, the AI X-risk folks like Yudkowsky, and the popular rationalist movement all get wrong is the way institutional power necessarily exceeds that of individuals, in part because of, and through, “artificial intelligence”, but also through older institutions that distribute economic and social capital. The “public sphere” is not a flat or radical “marketplace of ideas”; it is an ecology of institutions like the New York Times, playing on ideological receptiveness grounded in religious and economic habitus.


Jeffrey Friedman is a dedicated intellectual and educator, who for years has been a generous intellectual mentor and facilitator of political thought. Friedman, like Weyl, is a critic of technocracy. In an early intellectual encounter that was very formative for me, he invited me to write a book review of Philip Tetlock’s Expert Political Judgment for his journal. Published in 2007, it was my first academic publication. The writing is embarrassing and I’m glad it is behind a paywall. In the article, I argue against Friedman and for technocracy based on the use of artificial intelligence. I have been in friendly disagreement with Friedman on this point ever since.

The line of reasoning from Friedman’s book, Power without Knowledge: A Critique of Technocracy (2019), that I find most interesting is about the predictability of individuals. Is it possible for a technocrat to predict society? This question has been posed by many different social theorists. I believe Friedman is unique in suggesting that (a) individuals cannot be predicted by a technocrat because (b) individual behavior is determined by ideas, which are sourced so variously and combined in such complex ways that they cannot be captured by an external or generalizing observer. The unpredictability of society is an objective obstacle to the social scientific prediction that is required for technocratic policies to function effectively and as intended. Instead of a technocracy, Friedman advocates an “exitocracy”, based on Hirschman’s idea of exit, which prioritizes a robust private sector in which citizens can experiment and find happiness over technocratic, or what others might call paternalist, public policy. Some of the attractiveness of this model is that it depends on minimal assumptions about the rationality of agents, and especially the agency of technocrats, but still achieves satisficing results. Friedman’s exitocracy is, he argues, a ‘judicious’ technocracy, calibrated to realistic levels of expertise and public ignorance. In allowing for redistribution and centralized governance of some public goods, Friedman’s exitocracy stands as an alternative to more radically libertarian “Exit”-oriented proposals such as those of Srinivasan, which have been associated with Silicon Valley and, by dubious extension, the rationalist community.

At this time, I continue to disagree with Friedman. Academics are particularly unpredictable with respect to their ideas, especially if they do not value prestige or fame as much as their colleagues do. But most people are products of their background, or their habits, or their employers, or their social circles, or their socially structured lived experience. Institutions can predict and control people, largely by offering economic incentives. This predictability is what has made AI effective commercially — in uses by advertisers, for example — and it is what makes centralized public policy, or technocracy, possible.

But I could be wrong about this point. It is an empirical question how and under what conditions people’s beliefs and actions are unpredictable given background conditions. This amount of variability, which we might consider a kind of “freedom”, is a condition on whether technocracy, and AI, are viable. Are we free?
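One hedged way to operationalize the question: compare how well background conditions predict an observed choice against a trivial baseline. A toy sketch (all data synthetic; in a real study, X would hold background features and y a recorded belief or action):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                       # stand-in background features
latent = X @ np.array([1.5, 1.0, 0.5, 0.0, 0.0])  # only some features matter
y = (latent + rng.normal(scale=2.0, size=n) > 0).astype(int)  # noisy binary choice

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
base = max(y.mean(), 1 - y.mean())  # accuracy of always guessing the most common choice
print(f"predictive accuracy: {acc:.2f}; baseline: {base:.2f}")
# The gap between the two numbers is a crude measure of how predictable the
# choice is from background conditions; what remains unexplained is the
# variability the post calls "freedom".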

References

Davis, J. L., & Jurgenson, N. (2014). Context collapse: Theorizing context collusions and collisions. Information, Communication & Society, 17(4), 476-485.

Friedman, J. (2019). Power without knowledge: a critique of technocracy. Oxford University Press.

Kollman, K., Miller, J. H., & Page, S. E. (1997). Political institutions and sorting in a Tiebout model. The American Economic Review, 87(5), 977-992.

Marwick, A. E., & Boyd, D. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114-133.

Pistor, K. (2020). Rule by data: The end of markets? Law & Contemporary Problems, 83, 101.

Tao, Y., & Varshney, K. R. (2021). Insiders and Outsiders in Research on Machine Learning and Society. arXiv preprint arXiv:2102.02279.

We need a theory of collective agency to guide data intermediary design

Last week Jake Goldenfein and I presented some work-in-progress to the Centre for Artificial Intelligence and Digital Ethics (CAIDE) at the University of Melbourne. The title of the event was “Data science and the need for collective law and ethics”; perhaps masked by that title is the turn we’re taking toward the problem of data intermediaries. I wanted to write a bit about how we’re thinking about these issues.

This work builds on our paper “Data Science and the Decline of Liberal Law and Ethics“, which was accepted by a conference that was then canceled due to COVID-19. In retrospect, it’s perhaps for the best that the conference was canceled. The “decline of liberalism” theme fit the political moment when we wrote the piece, when Trump and Sanders were contenders for the presidency of the U.S., and authoritarian regimes appeared to be providing a new paradigm for governance. Now, Biden is the victor and it doesn’t look like liberalism is going anywhere. We must suppose that our project will take place in a (neo)liberal context.

Our argument in that work was that many of the ideas animating (especially Anglophone) liberalism in the legal systems of the U.S., the U.K., and Australia have been inadequate to meaningfully regulate artificial intelligence. This is because liberalism imagines a society of rational individuals appropriating private property through exchanges on a public market and acting autonomously, whereas today we have a wide range of agents with varying levels of bounded rationality, many of which are “artificial” in Herbert Simon’s sense of being computer-enabled firms, tied together in networks of control, not least of these being privately owned markets (the platforms). Essentially, loopholes in liberalism have allowed a quite different form of sociotechnical ordering to emerge, because that political theory did not take into account a number of rather recently discovered scientific truths about information, computing, and control. Our project is to tackle this disconnect between theory and actuality, and to try to discover what’s next in terms of a properly cybernetic political theory that advances the goal of human emancipation.

Picking up where our first paper left off, this has gotten us looking at data intermediaries. This is an area where there has been a lot of work! We were particularly inspired by Mozilla’s Data Futures review of different forms of data intermediary institutions, including data coops, data trusts, data marketplaces, and so on. There is a wide range of ongoing experiments with alternative forms of “data stewardship” or “data governance”.

Our approach has been to try to frame and narrow down the options based on normative principles, legal options, and technical expertise. Rather than asking empirically what forms of data governance have been attempted, we are wondering: what ought the goals of a data intermediary be, given the facts about cybernetic agency in the world we live in? How could such an institution accomplish what has been lost by the inadequacies of liberalism?

Our thinking has led us to the position that what has prevented liberalism from regulating the digital economy is its emphasis on individual autonomy. We draw on the new consensus in privacy scholarship that individual “notice and choice” is an ineffective way to guarantee consumer protection in the digital economy. Not only do bounded rationality constraints prevent consumers from understanding what they are agreeing to, but the ability of firms to control consumers’ choice architecture has dwarfed the meaningfulness of whatever rationality individuals do have. Meanwhile, it is now well understood (perhaps most recently by Pistor (2020)) that personal data is valuable only when it is cleaned and aggregated. This makes the locus of economic agency around personal data necessarily a collective one.

This line of inquiry leads us to a deep question to which we do not yet have a ready answer, which is “What is collective emancipation in the paradigm of control?” Meaning, given what we know about the “sciences of the artificial”, control theory, theory of computation and information, etc., with all of its challenges to the historical idea of the autonomous liberal agent, what does it mean for a collective of individuals to be free and autonomous?

We got a lot of good feedback on our talk, especially from discussant Seth Lazar, who pointed out that there are many communitarian strands of liberalism that we could look to for normative guides. He mentioned, for example, Elizabeth Anderson’s relational egalitarianism. We asked Seth whether he thought that the kind of institution that guaranteed the collective autonomy of its members would have to be a state, and he pointed out that that was a question of whether or not such a system would be entitled to use coercion.

There’s a lot to do on this project. While it is quite heady and philosophical, I do not think that it is necessarily only an abstract or speculative project. In a recent presentation, Vincent Southerland proposed that one solution to the problematic use of algorithms in criminal sentencing would be if “the community” of those advocating for equity in the criminal justice system operated their own automated decision systems. This raises an important question: how could and should a community govern its own technical systems, in order to support what in Southerland’s case is an abolitionist agenda? I see this as a very aligned project.

There is also a technical component to the problem. Because of economies of scale and the legal climate, more and more computation is moving onto proprietary cloud systems. Most software now is provided “as a service”. It’s unclear what this means for organizations that would try to engage in self-governance, even when these organizations are autonomous state entities such as municipalities. In some conversations, we have considered what modifications of the technical ideas of the “user agent”, security firewalls and local networks, and hybrid cloud infrastructure would enable collective self-governance. This is the pragmatic “how?” that follows our normative “what?” and “why?” questions, but it is no less important to implementing a prototype solution.

References

Benthall, S., & Goldenfein, J. (2020). Data science and the decline of liberal law and ethics. Available at SSRN: https://ssrn.com/abstract=3632577 or http://dx.doi.org/10.2139/ssrn.3632577

Narayanan, A., Toubiana, V., Barocas, S., Nissenbaum, H., & Boneh, D. (2012). A critical look at decentralized personal data architectures. arXiv preprint arXiv:1202.4503.

Pistor, K. (2020). Rule by data: The end of markets? Law & Contemporary Problems, 83, 101.

Sources of the interdisciplinary hierarchy

Lyotard’s 1979 treatise The Postmodern Condition tells a prescient story about the transformation of the university. He discusses two “metanarratives” used for the organization of universities: the German Humboldt model of philosophy as the central discipline, with all other fields of knowledge radiating out from it; and the French model of the university as the basis of education of the modern democratic citizen. Lyotard argues (perhaps speciously) that because of what the later Wittgenstein had to say about the autonomy of language games (there are no facts; there are only social rules) and because of cybernetics (the amalgamation of exact and applied sciences that had been turned so effectively towards control of human and machine), the metanarratives had lost their legitimacy. There was only “legitimation by performativity”, knowledge proving itself by virtue of its (technical) power, and “legitimation by paralogy”, knowledge legitimizing itself through semantic disruption, creating pools of confusion in which one could still exist though out of alignment with prevailing cybernetic logics.

This duality (between cybernetics and paralogy) excludes a middle term identified in Habermas’s 1968 Knowledge and Human Interests. Habermas identifies three “human interests” that motivate knowledge: the technical interest (corresponding to cybernetic performativity), the emancipatory interest (perhaps corresponding to the paralogic turn away from cybernetic performativity), and, thirdly, the hermeneutic interest. The latter is the interest in the mutual understanding that makes collective life possible. As Habermas’s work matures, this interest emerges as the deliberative, consensual basis of law.

These frameworks for understanding knowledge and the university share an underlying pragmatism. Both Lyotard and Habermas seem to agree about the death of the Humboldt model: knowledge for its own sake is a deceased metanarrative. Knowledge for democratic citizens, the purportedly French model in Lyotard, corresponds for Habermas to knowledge of shared historical narratives and agreement about norms. Lyotard was pessimistic about the resilience of these kinds of norms under the pressure of cybernetics. Indeed, this tension between “smart technology” and “rule of law” remains with us, expressed in the work of Hildebrandt. The question of whether technical knowledge threatens or delegitimizes legal/hermeneutic knowledge is still open today.

These intellectual debates are perhaps ultimately about university politics and academic disciplines. If they are truly _ultimately_ about that, that marks their limitation. For what the pragmatist orientation towards knowledge implies is that knowledge does not exist for its own sake, but rather, in most cases, for its application. Philosophers can therefore only achieve so much by appealing to generalized interests. All real applications are contextualized.

Two questions unanswered by these sources (at least in what is assuredly an impoverished schematic of their arguments) are:

  • Whence the interests and applications that motivate the university as socially and economically situated?
  • What accounts for the tensions between the technical/performative disciplines and the hermeneutic and emancipatory ones?

In 1979, the same publication year as The Postmodern Condition, Pierre Bourdieu published Distinction: A Social Critique of the Judgement of Taste. While not in itself an epistemology, Bourdieu’s method and conclusions provide a foundation for later studies of science, journalism, and the university. Bourdieu’s insight is that aesthetic taste (in art, in design, in hobbies, etc.) is a manifestation of socioeconomic class, understood in terms of a multidimensional matrix of forms of capital: economic wealth, but also social status and prestige, and cultural capital in knowledge and skills. Those with lots of wealth and low cultural capital (the nouveau riche) will value expensive, conspicuous consumption. Those with low wealth and high cultural capital (academics, perhaps) will value intricate works that require time and training to understand, and so on. But these preferences exist to maintain the social structures of (multiply defined) capital accumulation.

A key figure in Bourdieu’s story is the petit bourgeoisie, the transitional middle class that has specialized its labor, created perhaps a small business, but has not accumulated capital in a way that secures it in the situation to which it aspires. In today’s economy, these might include the entrepreneurs: those who would, by their labor, aspirationally transform themselves from laborers into capitalists. They would do this by the creation of technology, the means of production, capital. Unlike labor applied directly to the creation of goods and services as commodities, capital technologies, commodified through the institution of intellectual property, have the potential to scale in use well beyond the effort of their creation and, through Schumpeterian disruption, make their creators wealthy enough to change their class position. On the other hand, there are those who prefer the academic lifestyle, who luxuriate in the study of literature and critique. Through the institutions of critical academia, these are also jobs that can be won through the accumulation of capital, in this case social and cultural capital. By design, these are fields of knowledge that exist for their own sake. There are also, of course, legal and social scientific disciplines that are helpful for the cultural formation of politicians, legislators, and government workers of various kinds.

Viewed in this way, we can start to see “human interests” not merely as transcendental features of the general human condition, but rather as the expression of class and capital interests. This makes sense given the practical reality of universities getting most of their income through tuition. Students attend universities in order to prepare themselves for careers. The promise of a professional career allows universities to charge higher tuition. Where in the upper classes people choose to compete on intangible cultural capital rather than economic capital, universities maintain specialized disciplinary tracks in the humanities.

Notably, the emancipatory role of the humanities, lauded by Habermas, subtly lampooned (perhaps) by Lyotard, is in other works more closely connected to leisure. As early as 1947, Horkheimer, in Eclipse of Reason, points out that the kind of objective reason he sees as essential to the moral grounding of society, otherwise derailed by capitalism, relies on leisure time, which is a difficult class attainment. In perhaps cynical Bourdieusian terms, the ability to reflect on the world and decide, beyond the restrictions of material demands, on an independent or transcendent system of values is itself a form of cultural accumulation of the most rarefied kind. However, as this form of cultural attainment is not connected directly to any means of production, it is perhaps a mystery what grounds it pragmatically.

There’s an answer. It’s philanthropy. The arts and humanities, the idealistic independent policy think tanks, and so on, are funded by those who, having accumulated economic capital and the capacity for leisurely thinking about the potential for a better world, have allocated some portion of their wealth towards “causes”. The competition for legitimacy between and among philanthropic causes is today a major site of politics and ideology. Most obviously, political parties and candidacies run on donations, which is in a sense a form of values-driven philanthropy. The appropriation of state funds, or not, for particular causes becomes, at the end of the day, a battlefield of all forms of capital.

This is all understandable from the perspective that is now truly at the center of the modern university: the perspective of business administration. Ever since Herbert Simon, it has been widely known that the managerialist discipline and computational and cybernetic sciences are closely aligned. The economic sociology of Bourdieu is notable in that it is a successor to the sociology of Marx, but also a successor to the phenomenological approach of Kant, and yet is ultimately consistent with the managerialist view of institutions relying on skilled capital management. Disciplines or sub-disciplines that are peripheral to these core skillsets by virtue of their position in the network of capital flows are marginal by definition.

This accounts for much of interdisciplinary politics and grievance. The social structures described here account for the teleological dependency structure of different forms of knowledge: what it is possible to motivate, and with what. To the extent that a discipline as a matter of methodological commitment is unable to account for this social structure, it will be dependent on its own ability to perpetuate itself autonomously through the stupefaction of its students.

There is another form of disciplinary dependency worth mentioning. It cuts the other way: it is the dependency that arises from the infrastructural needs of the knowledge institutions. This instrumental dependency is where this line of reasoning connects with Ihde’s instrumental realism as a philosophy of science. Here, too, there are disciplines that are blind to themselves. To the extent that a discipline is unable to account for the scientific advances necessary for its own work, it survives through the heroics of performative contradiction. There may be cases where an institution has developed enough teleological autonomy to reject the knowledge behind its own instrumentation, but in these cases we may be tempted to consider the knowledge claims of the former to be specious and pretentious. What purpose does fashionable nonsense have, if it rejects the authority of those that it depends on materially? “Those” here referring to the classes that embody the relevant infrastructural knowledge.

The answer is perhaps best addressed using the Bourdieusian insights already discussed: an autonomous field of discourse that denies its own infrastructure is a cultural market designed to establish a distinct form of capital, an expression of leisure. The rejection of performativity, or a tenuous and ambiguous connection to it, becomes a class marker, synecdochal with leisure itself, which can then be held up as an estimable goal. Through Lyotard’s analysis, we can see how a field so constructed might be successful through the rhetorical power of its own paralogic.

What has been lost, through this process, is the metanarrative of the university, most especially of the university as an anchor of knowledge in itself. The pragmatist cybernetic knowledge orientation entails that the university is subsumed to wider systems of capital flows, and the only true guarantee of its autonomy is a philanthropic endowment, which might perpetuate its ability to develop a form of capital that exists for its own sake.

A philosophical puzzle: morality with complex rationality

There’s a recurring philosophical puzzle that keeps coming up as one drills into the foundational issues at the heart of technology policy. The most complete articulation of it that I know of is in a draft I’ve written with Jake Goldenfein whose publication was delayed by COVID. But here is an abbreviated version of the philosophical problem, distilled perhaps from the tech policy context.

For some reason it all comes back to Kant. The categorical imperative has two versions that are supposed to imply each other:

  • Follow rules that would be agreed on as universal by rational beings.
  • Treat others as ends and not means.

This is elegant and worked quite well while the definitions of ‘rationality’ in play were simple enough that Man could stand at the top of the hierarchy.

Kant is outdated now, of course, but we can see the influence of this theory in Rawls’s account of liberal ethics (the ‘veil of ignorance’ being a proxy for the reasoning being who has transcended their empirical body), in Habermas’s account of democracy (communicative rationality involving the setting aside of individual interests), and so on. Social contract theories are more or less along these lines. This paradigm is still more or less the gold standard.

There are two serious challenges to this moral paradigm. They both relate to how the original model of rationality it is based on is perhaps naive, or so rarefied as to be unrealistic. What happens if you deny that people are rational in any disinterested sense, or allow for different levels of rationality? It all breaks down.

On the one hand, there’s various forms of egoism. Sloterdijk argues that Nietzsche stood out partly because he argued for an ethics of self-advancement, which rejected deontological duty. Scandalous. The contemporary equivalent is the reputation of Ayn Rand and those inspired by her. The general idea here is the rejection of social contract. This is frustrating to those who see the social contract as serious and valuable. A key feature of this view is that reason is not, as it is for Kant, disinterested. Rather, it is self-interested. It’s instrumental reason with attendant Humean passions to steer it. The passions need not be too intellectually refined. Romanticism, blah blah.

On the other hand, the 20th century discovers scientifically the idea of bounded rationality. Herbert Simon is the pivotal figure here. Individuals, being bounded, form organizations to transcend their limits. Simon is the grand theorist of managerialism. As far as I know, Simon’s theories are amoral, strictly about the execution of instrumental reason.

Nevertheless, Simon poses a challenge to the universalist paradigm because he reveals the inadequacy of individual humans to self-determine anything of significance. It’s humbling; it also threatens the anthropocentrism that provided the grounds for humanity’s mutual self-respect.

So where does one go from here?

It’s a tough question. Some spitballing:

  • One option is to relocate the philosophical subject from the armchair (Kant), through the public sphere (Habermas), into a new kind of institution that would be better equipped to support their cogitation about norms. A public sphere equipped with Bloomberg terminals? But then who provides the terminals? And what about actually existing disparities of access?
    • One implication of this option, following Habermas, is that the communications within it, which would have to include data collection and the application of machine learning, would be disciplined in ways that would prevent defections.
    • Another implication, which is the most difficult one, is that the institution that supports this kind of reasoning would have to acknowledge different roles. These roles would constitute each other relationally–there would need to be a division of labor. But those roles would need to each be able to legitimize their participation on the whole and trust the overall process. This seems most difficult to theorize let alone execute.
  • A different option, sort of the unfinished Nietzschean project, is to develop the individual’s choice to defect into something more magnanimous. Simone de Beauvoir’s widely underrated Ethics of Ambiguity is perhaps the best accomplishment along these lines. The individual, once they overcome their own solipsism and consider their true self-interests at an existential level, comes to understand how the success of their projects depends on society, because society will outlive them. In a way, this point echoes Simon’s in that it begins from an acknowledgment of human finitude. It reasons from there to a theory of how finite human projects can become infinite (achieving the goal of immortality for the one who initiates them) by being sufficiently prosocial.

Either of these approaches might be superior to “liberalism”, which arguably is stuck in the first paradigm (though I suppose there are many liberal theorists who would defend their position). As a thought experiment, I wonder what public policies motivated by either of these positions would look like.

some PLSC 2020 notes: one framing of the managerialism puzzle

PLSC 2020 was quite interesting.

There were a number of threads I’d like to follow up on. One of them has to do with managerialism and the ability of the state (U.S. in this context) to regulate industry.

I need to do some reading to fill some gaps in my understanding, but this is how I understand the puzzle so far.

Suppose the state wants to regulate industry. Congress passes a bill creating an agency with regulatory power and a broadly legislated mandate. The agency comes up with regulations. Businesses then implement policies to comply with the regulations. That’s how it’s supposed to go.

But in practice, there is a lot of translational work being done here. The broadly legislated mandate will be in a language that can get passed by Congress. It delegates elaboration of the specifics to the expert regulators in the agency; these regulators might be lawyers. But when the corporate bosses get the regulations (maybe from their policy staff, also lawyers?), they begin to work with them in a “managerialist” way. This means, I gather, that they manage the transition towards compliance in a way that minimizes the costs of compliance. If they can comply without adhering to the purpose of the regulation (which might be ever-so-clear to the lawyers who dreamed it up), so be it.

This all seems quite obvious. Of course it would work this way. If I gather correctly at this point (and maybe I don’t), the managerialist problem is this: because of the translational work going on from legislative intent through administrative regulation into corporate policy and implementation, there is a lot of potential for information to be “lost in translation”, and this information loss works to the advantage of the regulated corporation, which is using all that lost regulatory bandwidth to its advantage.

We should teach economic history (of data) as “data science ethics”.

I’ve recently come across an interesting paper published at SciPy 2019, Van Dusen et al.’s “Accelerating the Advancement of Data Science Education” (2019) (link). It summarizes recent trends in data science education, as modeled by UC Berkeley’s Division of Data Science, which is now the Division of Computing, Data Science, and Society (CDSS). This is a striking piece to me, as I worked on Berkeley’s data science capabilities several years ago and continue to be fascinated by my alma mater, the School of Information, as it navigates being part of CDSS.

Among other interesting points in the article, two are particularly noteworthy to me. The first is that the integration of data science into the social sciences appears to have continued apace. Economics, in particular, is well represented and supported in the extended data science curriculum.

The other interesting point is the emphasis on data science ethics as an essential pillar of the educational program. The writing in this piece is consistent with what I’ve come to expect from Berkeley on this topic, and I believe it’s indicative of broad trends in academia.

The authors of this piece are explicit about their “theory of change”. What is data science ethics education supposed to accomplish?

Including training in ethical considerations at all levels of society and all steps of the data science workflow in undergraduate data science curricula could play an important role in stimulating change in industry as our students enter the workforce, perhaps encouraging companies to add ethical standards to their mission statements or to hire chief ethics officers to oversee not only day-to-day operations but also the larger social consequences of their work.

The theory of change articulated by the paper is that industry will change if ethically educated students enter the workforce. They see a future where companies change their mission statements in accord with what has been taught in data science ethics courses, or hire oversight officials.

This is, it must be noted, broadly speculative, and it implies that the leadership of the firms who hire these Berkeley grads will be responsive to their employees. However, unlike some countries in Europe, the United States does not give employees much say in the governance of firms. Technology firms, such as Amazon and Google, have recently proven to be rather unfriendly to employees who attempt to organize in support of “ethics”. This is for highly conventional reasons: the management of these firms tends to be oriented towards maximizing shareholder profits, and organized employees advocating for ethical issues that interfere with business are an obstacle to that goal.

This would be understood plainly if economics, or economic history, were taught as part of “data science ethics”. But for some reason it is not. Information economics, which is presumably where one would start investigating how incentives drive data science institutions, is perhaps too complex to include in the essential undergraduate curriculum, despite being perhaps critical to understanding the “data intensive” social world we all live in now.

We often forget today that the first economists (Adam Smith, Alfred Marshall, etc.) were moral philosophers. Economics has come to be seen as a field in instrumental support of business practice or ideology, rather than an investigation into the ethical consequences of social and material structure. That’s too bad.

Instead of teaching economic history, which would be a great way of showing students the ethical implications of technology, Berkeley is teaching Science and Technology Studies (STS) and algorithmic fairness! I’ll quote at length:

A recent trend in incorporating such ethical practices includes incorporating anti-bias algorithms in the workplace. Starting from the beginning of their undergraduate education, UC Berkeley students can take History 184D: Introduction to Science, Technology, and Society: Human Contexts and Ethics of Data, which covers the implications of computing, such as algorithmic bias. Additionally, students can take Computer Science 294: Fairness in Machine Learning, which spends a semester in resisting racial, political, and physical discrimination. Faculty have also come together to create the Algorithmic Fairness and Opacity Working Group at Berkeley’s School of Information that brainstorms methods to improve algorithms’ fairness, interpretability, and accountability. Implementing such courses and interdisciplinary groups is key to start the conversation within academic institutions, so students can mitigate such algorithmic bias when they work in industry or academia post-graduation.

Databases and algorithms are socio-technical objects; they emerge and evolve in tandem with the societies in which they operate [Latour90]. Understanding data science in this way and recognizing its social implications requires a different kind of critical thinking that is taught in data science courses. Issues such as computational agency [Tufekci15], the politics of data classification and statistical inference [Bowker08], [Desrosieres11], and the perpetuation of social injustice through algorithmic decision making [Eubanks19], [Noble18], [ONeil18] are well known to scholars in the interdisciplinary field of science and technology studies (STS), who should be invited to participate in the development of data science curricula. STS or other courses in the social sciences and humanities dealing specifically with topics related to data science may be included in data science programs.

This is all very typical. The authors are correct that algorithmic fairness and STS have been trendy ways of teaching data science ethics. It is perhaps too cynical to say that these are trendy approaches to “data science ethics” because they are the data science ethics that Microsoft will pay for. Let that slip as a joke.

However, it is unfortunate if students have no better intellectual equipment for dealing with “data science ethics” than this. Algorithmic fairness is a fascinating field of study with many interesting technical results. However, as has been broadly noted by STS scholars, among others, the successful use of “algorithmic fairness” technology depends on the social context in which it is deployed. Often, “fairness” is achieved through greater scientific and technical integrity: for example, properly deducing cause and effect rather than lazily applying techniques that find correlation. But the ethical challenges in the workplace are often not technical challenges. They are the challenges of managing the economic incentives of the firm, and how these affect the power structures within the firm (Metcalf & Moss, 2019). This is apparently not material that is being taught to Berkeley’s data science students.
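
To make the “correlation, not cause” point concrete, here is a toy Simpson’s-paradox example in Python, with all numbers invented for illustration. The aggregate admission rates make group B look disfavored; conditioning on the department applied to (a confounder) shows the within-department rates are identical:

import pandas as pd

# Synthetic data: group B mostly applies to the more selective dept Y.
df = pd.DataFrame({
    "dept":     ["X", "X", "Y", "Y"],
    "group":    ["A", "B", "A", "B"],
    "applied":  [800, 100, 200, 900],
    "admitted": [480,  60,  20,  90],
})
df["rate"] = df["admitted"] / df["applied"]

# Aggregate rates: A admitted at 0.50, B at 0.15 -- looks like bias.
agg = df.groupby("group")[["applied", "admitted"]].sum()
print(agg["admitted"] / agg["applied"])

# Conditioned on department: both groups face 0.60 in X and 0.10 in Y.
print(df.pivot(index="dept", columns="group", values="rate"))

A “lazy” audit of the aggregate table would flag discrimination where the disaggregated, causally informed view finds none; the reverse error is just as easy to make.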

This more careful look at the social context in which technology is used is supposed to be what STS teaches. But, all too often, that is not what it does. I’ve written elsewhere about why STS is not the solution to “tech ethics”. Part of (e.g. Latourian) STS training is a methodological, if not intellectual, relativistic skepticism about science and technology itself (Carroll, 2006). As a consequence, STS is constrained to be a humanistic or anthropological field, using “interpretivist” methods, with weak claims to generalizability. It is, first and foremost, an academic field, not an applied one. The purpose of STS is to generate fascinating critiques.

There are many other social sciences with different aims, such as building consensus about what social and economic conditions actually are in order to motivate political change. These social sciences have ethical import, but they are built around a different theory of change: they are aimed at the student as a citizen in a democracy, not as an employee at a company. And while I don’t underestimate the challenges of advocating, in this economic climate, for education that empowers students as public citizens, it must nevertheless be acknowledged, as an ethical matter, that a “data science ethics” curriculum that does not address the politics behind those difficulties will be an anemic one, at best.

There is a productive way forward. It requires, however, interdisciplinary thinking that may be uncomfortable or, in the end, impossible for many established institutions. If students were taught a properly historicized and politically substantive “data science ethics”, not in the mode of an STS-based skepticism about technology and science, but rather as economic history informed by data science (computational and inferential thinking) as an intellectual foundation, then ethical considerations need not be relegated to a hopeful afterthought invested in a theory of corporate change that is ultimately a fantasy. Rather, this would put “data science ethics” on a scientific foundation and help civic education justify itself as a matter of social fact.

Addendum: Since the social sciences aren’t doing this work, it looks like some computer scientists are doing it instead. This report by Narayanan provides a recent economic history of “dark patterns” since the 1970s; it is an example of how historical research can put “data science ethics” in context.

References

Carroll, P. (2006). Science of Science and Reflexivity. Social Forces, 85(1), 583-585.

Metcalf, J., & Moss, E. (2019). Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics. Social Research: An International Quarterly, 86(2), 449-476.

Van Dusen, E., Suen, A., Liang, A., & Bhatnagar, A. (2019). Accelerating the Advancement of Data Science Education. Proceedings of the 18th Python in Science Conference (SciPy 2019).

Internet service providers are utilities

On Sunday, New York State is closing all non-essential brick-and-mortar businesses and ordering everyone in the workforce who is able to work from home to do so. Zoom meetings from home are now the norm for people working in both the private sector and government.

One might reasonably want to know whether internet service providers (ISPs) are operating normally during this period. I had occasion to call up Optimum yesterday and ask. I was told, very helpfully, “We’re doing business as usual because we are like a utility.”

It’s quite clear that the present humane and responsible approach to COVID-19 depends on broad and uninterrupted Internet access to homes. Government and businesses would cease to function without it. Zoom meetings are performing the role that simple audio telephony once did. And executive governments are recognizing this as they use their emergency powers.

There has been a strain of “technology policy” thought that some parts of “the tech sector” should be regulated as utilities. In 2015, the FCC reclassified broadband access as a utility as part of their Net Neutrality decision. In 2018, this position was reversed. This was broadly seen as a win for the telecom companies.

One plausible political consequence of COVID-19 is the reconsideration of the question of whether ISPs are utilities or not. They are.

Notes on Krusell & Smith, 1998 and macroeconomic theory

I’m orienting towards a new field through my work on HARK. A key paper in this field is Krusell and Smith (1998), “Income and wealth heterogeneity in the macroeconomy.” The learning curve here is quite steep. These are, as usual, my notes as I work with this new material.

Krusell and Smith are approaching the problem of macroeconomic modeling on a broad foundation. Within this paradigm, the economy is imagined as a large collection of people/households/consumers/laborers. These exist at a high level of abstraction and are imagined to be intergenerationally linked. A household might be an immortal dynasty.

There is only one good: capital. Capital works in an interesting way in the model. It is produced every time period by a combination of labor and other capital. It is distributed to the households, apportioned as both a return on household capital and as a wage for labor. It is also consumed each period, for the utility of the households. So all the capital that exists does so because it was created by labor in a prior period, but then saved from immediate consumption, then reinvested.

In other words, capital in this case is essentially money. All other “goods” are abstracted away into this single form of capital. The key thing about money is that it can be saved and reinvested, or consumed for immediate utility.

Households can also labor, when they have a job. There is an unemployment rate, and in the model households are uniformly likely to be employed or not, no matter how much money they have. The wage return on labor is determined by an aggregate economic productivity function. There are good and bad economic periods, determined exogenously and randomly, and employment rates vary accordingly. One major impetus for saving is insurance against bad times.
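
To fix ideas, here is the household’s problem as I understand it, in the notation standard for this class of models (my paraphrase from memory, not a quotation from the paper):

\[
\max_{\{c_t,\, k_{t+1}\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t\, U(c_t)
\quad \text{s.t.} \quad
c_t + k_{t+1} = w_t\, \bar{l}\, \epsilon_t + (1 + r_t - \delta)\, k_t,
\qquad k_{t+1} \ge 0,
\]

where $\epsilon_t \in \{0, 1\}$ is the household’s idiosyncratic employment shock, and the wage $w_t$ and return $r_t$ are set competitively from an aggregate production function $Y_t = z_t K_t^{\alpha} L_t^{1-\alpha}$, with productivity $z_t$ switching exogenously between a good state and a bad state.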

The problem Krusell and Smith raise with this, what they call their “baseline model”, is that because all households are the same, the equilibrium distribution of wealth is far too even compared with real data: more normally distributed than log-normally distributed. This is implicitly a critique of all prior macroeconomics, which had used the “representative agent” assumption: all agents are represented by one agent, so every agent ends up approximately as wealthy as every other.

Obviously, this is not the case. This work was done in the late 90s, when the topic of wealth inequality was not nearly as front-and-center as it is in, say, today’s election cycle. It’s interesting that one reason it might not have been front and center is that, prior to 1998, mainstream macroeconomic theory didn’t have an account of how there could be such inequality.

The Krusell-Smith model’s explanation for inequality is, it must be said, a politically conservative one. They introduce minute differences in the utility discount factor. The discount factor is how much you weight future utility relative to today’s utility. If you have a small discount factor, you heavily discount the future and want to consume more today. If you have a large discount factor, you are more willing to save for tomorrow.

Krusell and Smith show that teeny tiny differences in the discount factor, even when they follow a random walk around a mean with some persistence within households, lead to huge wealth disparities. Their conclusion is that “Poor households are poor because they’ve chosen to be poor”, by not saving more for the future.
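
Out of curiosity, here is a toy simulation of that mechanism in Python. To be clear, this is not Krusell and Smith’s solution method (they solve a dynamic program, using an approximate-aggregation result to handle the wealth distribution); the saving rule below is a crude stand-in for the optimal policy, and every parameter value is made up. It only illustrates how small differences in patience compound:

import numpy as np

rng = np.random.default_rng(0)

n_agents, n_periods = 5_000, 5_000
r = 0.01        # per-period net return on savings (made up)
wage = 1.0      # labor income when employed (made up)
emp_prob = 0.9  # chance of being employed each period (made up)

# "Teeny tiny" heterogeneity: discount factors within about one percent
# of each other, all below 1/(1+r) so that wealth stays finite.
beta = rng.uniform(0.980, 0.990, size=n_agents)

wealth = np.zeros(n_agents)
for _ in range(n_periods):
    employed = rng.random(n_agents) < emp_prob
    cash = (1 + r) * wealth + wage * employed
    # Crude stand-in for the optimal policy: save the fraction beta
    # of cash on hand; more patient households hold on to more.
    wealth = beta * cash

order = np.argsort(beta)
decile = n_agents // 10
ratio = wealth[order[-decile:]].mean() / wealth[order[:decile]].mean()
print(f"mean wealth, most vs. least patient decile: {ratio:.0f}x")

Even though the discount factors differ by only about one percent, the most patient decile ends up many times wealthier than the least patient one, because the persistence of wealth, roughly β(1+r), sits much closer to one for the patient households.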

I’ve heard, as one does, all kinds of critiques of economics as an ideological discipline. It’s striking to read a landmark paper in the field with this conclusion. It strikes directly against other mainstream political narratives. For example, there is no accounting for “privilege” or the intergenerational transfer of social capital in this model. And while they acknowledge that other papers discuss whether larger holdings of household capital earn larger rates of return, Krusell and Smith sidestep this and make it about household saving.

The tools and methods in the paper are quite fascinating. I’m looking forward to more work in this domain.

References

Krusell, P., & Smith, Jr., A. A. (1998). Income and wealth heterogeneity in the macroeconomy. Journal of Political Economy, 106(5), 867-896.

ethnography is not the only social science tool for algorithmic impact assessment

Quickly responding to Selbst, Elish, and Latonero’s “Accountable Algorithmic Futures”, Data and Society’s response to the Algorithmic Accountability Act of 2019…

The bill would empower the FTC to conduct “automated decision systems impact assessments” (ADSIA) of automated decision-making systems. The article argues that the devil is in the details and that the way the FTC goes about these assessments will determine their effectiveness.

The point of their article, which I found notable, is to assert the appropriate intellectual discipline for these impact assessments.

This is where social science comes in. To effectively implement the regulations, we believe that engagement with empirical inquiry is critical. But unlike the environmental model, we argue that social sciences should be the primary source of information about impact. Ethnographic methods are key to getting the kind of contextual detail, also known as “thick description,” necessary to understand these dimensions of effective regulation.

I want to flag this as weird.

There is an elision here between “the social sciences” and “ethnographic methods”, as if there were no social sciences that were not ethnographic. And then “thick description” is implied to be the only source of contextual detail that might be relevant to impact assessments.

This is a familiar mantra, but it’s also plainly wrong. There are many disciplines and methods within “the social sciences” that aren’t ethnographic, and many ways to get at contextual detail that do not involve “thick description”. There is a worthwhile and interesting intellectual question here: what are the appropriate methods for algorithmic impact assessment? The authors of this piece assume an answer to that question without argument.