A philosophical puzzle: morality with complex rationality

There’s a recurring philosophical puzzle that keeps coming up as one drills into the foundational issues at the heart of technology policy. The most complete articulation of it that I know of is in a draft I’ve written with Jake Goldenfein whose publication was COVID-delayed. But here is an abbreviated version of the philosophical problem, distilled from the tech policy context.

For some reason it all comes back to Kant. The categorical imperative has two versions that are supposed to imply each other:

  • Follow rules that would be agreed on as universal by rational beings.
  • Treat others as ends and not means.

This is elegant and worked quite well while the definitions of ‘rationality’ in play were simple enough that Man could stand at the top of the hierarchy.

Kant is outdated now of course but we can see the influence of this theory in Rawls’s account of liberal ethics (the ‘veil of ignorance’ being a proxy for the reasoning being who has transcended their empirical body), in Habermas’s account of democracy (communicative rationality involving the setting aside of individual interests), and so on. Social contract theories are more or less along these lines. This paradigm is still more or less the gold standard.

There are two serious challenges to this moral paradigm. Both relate to how the model of rationality it is based on is perhaps naive, or so rarefied as to be unrealistic. What happens if you deny that people are rational in any disinterested sense, or allow for different levels of rationality? It all breaks down.

On the one hand, there’s various forms of egoism. Sloterdijk argues that Nietzsche stood out partly because he argued for an ethics of self-advancement, which rejected deontological duty. Scandalous. The contemporary equivalent is the reputation of Ayn Rand and those inspired by her. The general idea here is the rejection of social contract. This is frustrating to those who see the social contract as serious and valuable. A key feature of this view is that reason is not, as it is for Kant, disinterested. Rather, it is self-interested. It’s instrumental reason with attendant Humean passions to steer it. The passions need not be too intellectually refined. Romanticism, blah blah.

On the other hand, the 20th century discovers scientifically the idea of bounded rationality. Herbert Simon is the pivotal figure here. Individuals, being bounded, form organizations to transcend their limits. Simon is the grand theorist of managerialism. As far as I know, Simon’s theories are amoral, strictly about the execution of instrumental reason.

Nevertheless, Simon poses a challenge to the universalist paradigm because he reveals the inadequacy of individual humans to self-determine anything of significance. It’s humbling; it also threatens the anthropocentrism that provided the grounds for humanity’s mutual self-respect.

So where does one go from here?

It’s a tough question. Some spitballing:

  • One option is to relocate the philosophical subject from the armchair (Kant) to the public sphere (Habermas) and on into a new kind of institution better equipped to support their cogitation about norms. A public sphere equipped with Bloomberg terminals? But then who provides the terminals? And what about actually existing disparities of access?
    • One implication of this option, following Habermas, is that the communications within it, which would have to include data collection and the application of machine learning, would be disciplined in ways that would prevent defections.
    • Another implication, which is the most difficult one, is that the institution that supports this kind of reasoning would have to acknowledge different roles. These roles would constitute each other relationally–there would need to be a division of labor. But those roles would need to each be able to legitimize their participation on the whole and trust the overall process. This seems most difficult to theorize let alone execute.
  • A different option, sort of the unfinished Nietzschean project, is to develop the individual’s choice to defect into something more magnanimous. Simone de Beauvoir’s widely underrated Ethics of Ambiguity is perhaps the best accomplishment along these lines. The individual, once they overcome their own solipsism and consider their true self-interests at an existential level, comes to understand how the success of their projects depends on society, because society will outlive them. In a way, this point echoes Simon’s in that it begins from an acknowledgment of human finitude. It reasons from there to a theory of how finite human projects can become infinite (achieving the goal of immortality for the one who initiates them) by being sufficiently prosocial.

Either of these approaches might be superior to “liberalism”, which arguably is stuck in the first paradigm (though I suppose there are many liberal theorists who would defend their position). As a thought experiment, I wonder what public policies motivated by either of these positions would look like.

Considering the Endless Frontier Act

As a scientist/research engineer, I am pretty excited about the Endless Frontier Act. Nothing would make my life easier than a big new pile of government money for basic research and technological prototypes awarded to people with PhDs. I’m absolutely all for it and applaud the bipartisan coalition moving it forward.

I am somewhat concerned, however, that the motivation for it is the U.S.’s fear of technological inferiority with respect to China. I’ll take the statement of Dr. Reif, President of MIT, at face value, which is probably foolish given the political acumen and moral flexibility of academic administrators. But look at this:

The COVID-19 pandemic is intensifying U.S. concerns about China’s technological strength. Unfortunately, much of the resulting policy debate has centered on ways to limit China’s capacities — when what we need most is a systematic approach to strengthening our own.

Very straightforward. This is what it’s about. Ok. I get it. You have to sell it to the Trump administration. It’s a slam dunk. But then why write this:

The aim of the new directorate is to support fundamental scientific research — with specific goals in mind. This is not about solving incremental technical problems. As one example, in artificial intelligence, the focus would not be on further refining current algorithms, but rather on developing profoundly new approaches that would enable machines to “learn” using much smaller data sets — a fundamental advance that would eliminate the need to access immense data sets, an area where China holds an immense advantage. Success in this work would have a double benefit: seeding economic benefits for the U.S. while reducing the pressure to weaken privacy and civil liberties in pursuit of more “training” data.

This sounds totally dubious to me. There are well-known mathematical theorems addressing why learning without data is impossible. The troublesome fact being nodded to is that, because of China’s political economy, it is possible to collect “immense data sets”–specifically about people–without civil liberties issues getting in the way. This presumes that the civil liberties problem with AI is the collection of data from data subjects, not the use of machine learning on those data subjects. But even if you could magically learn about data subjects without collecting data from them, you wouldn’t bypass the civil liberties concerns. Rather, you would have a nightmare world where, even sans data collection, one could act with godly foresight in one’s interventions on the polity. This is a weird fantasy, and I’m pretty sure the only reason it’s written this way is to sell the idea superficially to uncritical readers trying to reconcile the incoherent narratives around the U.S., technology, and foreign policy.
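To gesture at why learning cannot be decoupled from data (this is a textbook learning-theory bound, my addition, not anything in Dr. Reif’s statement): in the realizable PAC setting with a finite hypothesis class H, guaranteeing error at most ε with probability at least 1 − δ requires on the order of

```latex
% Realizable-case PAC sample complexity for a finite hypothesis class H:
% a consistent learner needs roughly
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)
% labeled examples to guarantee error at most \epsilon
% with probability at least 1-\delta.
```

labeled examples. Approaches that learn from smaller data sets change the constants and the assumptions (priors, structure, transfer), but none of them eliminates the dependence on data altogether.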

What it’s really all about, of course, is neoliberalism. Dr. Reif is not shy about this:

The bill would also encourage universities to experiment with new ways to help accelerate the process of bringing innovative ideas to the marketplace, either via established companies or startups. At MIT we started The Engine, an independent entity that provides private-sector funding, work space and technical assistance to start-ups that are developing technologies with enormous potential but that require more extensive technical development than typical VCs will fund, from fusion energy to a fast, inexpensive test for COVID-19. Other models may suit other institutions — but the nation needs to encourage many more such efforts, across the country, to reap the full benefits of our federal investment in science.

The implication here is that unless the results of federal investment in the sciences can be privatized, the country does not “reap the full benefits” of the federal investment. This makes the whole idea of a massively expanded federal government program make a lot more sense, politically, because it’s a massive redistribution of funds to, ultimately, Big Tech, who can buy up any successful ‘startups’ without any downside investment risk. And Big Tech now runs the country and has found a way to equate its global market share with national security such that these things are now indistinguishable in any statement of U.S. policy.

This would all be fine, I guess, if not for the fact that science is different from technology in that science is not, and cannot be, a private endeavor. The only way science works is if you have an open vetting process that is constantly arguing with itself and forcing the scientists to reproduce results. This global competition for scientific prestige through the conference and journal systems is what “keeps it honest”, which is precisely what allows it to be credible (Bourdieu, Science of Science, 2004).

A U.S. strategy since basically the end of World War II has been to lead the scientific field, get first mover advantage on any discoveries, and reap the benefit of being the center of education for global scientific talent through foreign tuition fees and talented immigrants. Then it wields technology transfer as a magic wand for development.

Now this is backfiring a bit because Chinese science students are returning to China to be entrepreneurial there and also work for the government. The U.S. is discovering that science, being an open system, allows other countries to free ride, and this is perhaps bothersome to it. The current administration seems to hate the idea of anybody free-riding off of something the U.S. is doing, though in the past those spillover effects (another name for them!) would have been the basis of U.S. leadership. You can’t really have it both ways.

So the renaming of the NSF to the NSTF–with “technology” next to “science”–is concerning because “technology” investment need not be openly vetted. Rather, given the emphasis on go-to-market strategy, it suggests that the scientific norms of reproducibility will be secondary to privatization through intellectual property laws, including trade secrecy. This could be quite bad, because without a disinterested community of people vetting the results, what you’ll probably get is a lot of industrially pre-captured bullshit.

Let’s acknowledge for a minute that the success of most startups has little to do with the quality of the technology made and much to do with path dependency in network growth, marketing, and regulatory arbitrage. If the government starts a VC fund run by engineers with no upside, then that money goes into a bunch of startups which then compete for creative destruction of each other until one, large enough from its cannibalizing of the others, gets consumed by a FAANG company. It will, in other words, look like Silicon Valley today, which is not terribly efficient at discovery because success is measured by the market–i.e., because (as Dr. Reif suggests) the return on investment is realized as capital accumulation.

This is all pretty backwards if what you’re trying to do is maintain scientific superiority. Scientific progress requires a functional economy of symbolic capital among scientists operating with intellectual integrity that is “for its own sake”, not operating at the behest of market conquest. The spillover effects and free-riding in science are a feature, not a bug, and they are difficult to reconcile with a foreign policy that is paranoid about technology transfer to its competitors. Indeed, this is one reason why scientists are often aligned with humanitarian causes, world peace, etc.

Science is a good social structure with a lot going for it. I hope the new bill pours more money into it without messing it up too much.

Managerialism and Habermas

Managerialism is an “in” topic recently in privacy scholarship (Cohen, 2019; Waldman, 2019). In Waldman’s (2019) formulation, the managerialism problem is, roughly: privacy regulations are written with a certain substantive intent, but the for-profit firms that are the object of these regulations interpret them either as a bothersome constraint on otherwise profitable activity, or else as means to the ends of profitability, efficiency, and so on themselves. In other words, the substance of the regulations is subjugated to the substance of the goals of corporate management. Managerialism.

This is exactly what anybody who has worked in a corporate tech environment would expect. The scholarly accomplishment of presenting these bare facts to a legal academic audience is significant, because employees of these corporations are most often locked up by strict NDAs. So while the point is obvious, I mean that in the positive sense: it should be taken as an unquestioned background assumption from now on, not that it shouldn’t have been “discovered” by this field in its own way.

As a “critical” observation, it stands. It raises a few questions:

  • Is this a problem?
  • If so, for whom?
  • If so, what can be done about it?

Here the “critical” method reaches, perhaps, its limits. Notoriously, critical scholarship plays on its own ambiguity, dancing between the positions of “criticism”, or finding of actionable fault, and “critique”, a merely descriptive account that is at most suggestive of action. This ambiguity preserves the standing of the critical scholar. They need never be wrong.

Responding to the situation revealed by this criticism requires a differently oriented kind of work.

Habermas and human interests

A striking thing about the world of policy and legal scholarship in the United States is that nobody is incentivized to teach or read anything written by past generations, however much it synthesized centuries of knowledge, and so nothing ever changes. For example, Habermas’s Knowledge and Human Interests (KHI), first published in 1968, arguably lays out the epistemological framework we would want for understanding the managerialism issue raised by recent scholars. We should expect Habermas to anticipate the problems raised by capitalism in the 21st century because his points are based on a meticulously constructed, historically informed, universalist, transcendental form of analysis. This sort of analysis is not popular in the U.S.; I have my theories about why. But I digress.

A key point from Habermas (who is summing up and reiterating a lot of other work originating, if it’s possible to say any such thing meaningfully, in Max Weber) is that it’s helpful to differentiate between different kinds of knowledge based on the “human interests” that motivate them. In one formulation (the one in KHI), there are three categories:

  1. The technical interest (from techne) in controlling nature, which leads to the “empirical-analytic”, or positivist, sciences. These correspond to fields like engineering and the positivist social sciences.
  2. The pragmatic interest (from praxis) in mutual understanding, which would guide collective action and the formation of norms, leads to the “hermeneutic” sciences. These correspond to fields like history and anthropology and other homes of “interpretivist” methods.
  3. The emancipatory interest, in exposing what has been falsely reified as objective fact as socially contingent. This leads to the critical sciences, which I suppose corresponds to what is today media studies.

This is a helpful breakdown, though I should say it’s not Habermas’s “mature” position, which is quite a bit more complicated. However, it is useful for the purposes of this post because it tracks the managerialist situation raised by Waldman so nicely.

I’ll need to elaborate on one more thing before applying this to the managerialist framing, which is to skip past several volumes of Habermas’s oeuvre and get to Theory of Communicative Action, volume II, where he gets to the punchline. By now he’s developed the socially pragmatic interest into the basis for “communicative rationality”, a discursive discipline in which individual interests are set aside in favor of a diversely perspectival but nevertheless measured conversation about how the social world should normatively be ordered. But where is this field in actuality? Money and power, the “steering media”, are always mussing up this conversation in the “public sphere”. So “public discourse” becomes a very poor proxy for communicative action. Rather–and this is the punchline–the actually existing field of communicative rationality, which is establishing substantive norms while nevertheless being “disinterested” with respect to the individual participants, is the law. That’s what the legal scholarship is for.

Applying the Habermasian frame to managerialism

So here’s what I think is going on. Waldman is pointing out that whereas regulations are being written with a kind of socially pragmatic interest in their impact on the imagined field of discursively rational participants as represented by legal scholarship, corporate managers are operating in the technical mode in order to, say, maximize shareholder profits as is their legally mandated fiduciary duty. And so the meaning of the regulation changes. Because words don’t contain meaning but rather take their meaning from the field in which they operate. A privacy policy that once spoke to human dignity gets misheard and speaks instead to the inconvenience of compliance costs and a PR department’s assessment of the competitive benefit of users’ trust.

I suppose this is bothersome from the legal perspective because it’s a bummer when something one feels is an important accomplishment of one’s field is misused by another. But I find the professional politics here, as everywhere, a bit dull and petty.

Crucially, the managerialism problem itself is not dull and petty–I wouldn’t be writing all this if I thought so. However, the frustrating aspect of this discourse is that, because of the absence of philosophical grounding in this debate, it misses what’s at stake. This is unfortunately characteristic of much American legal analysis. It’s missing because when American scholars address this problem, they do so primarily in the descriptive critical mode, one that is empirical and in a sense positivist, but without the interest in control. This critical mode leads to cynicism. It rarely leads to collective action. Something is missing.


A missing piece of the puzzle, one which cannot ever be accomplished through empirical descriptive work, is the establishment of the moral consequence of managerialism which is that human beings are being treated as means and not ends, in contradiction with the Kantian categorical imperative, or something like that. Indeed, it is this flavor of moral maxim that threads its way up through Marx into the Frankfurt School literature with all of its well-trod condemnation of instrumental reason and the socially destructive overreach of private capital. This is, of course, what Habermas was going on about in the first place: the steering media, the technical interest, positivist science, etc. as the enemy of politically legitimate praxis based on the substantive recognition of the needs and rights of all by all.

It would be nice, one taking this hard line would say, if all laws were designed with this kind of morality in mind, and if everybody who followed them did so out of a rationally accepted understanding of their import. That would be a society that respected human dignity.

We don’t have that. Instead, we have managerialism. But we’ve known this for some time. All these critiques are effectively mid 20th century.

So now what?

If the “problem” of managerialism is that when regulations reach the firms that they are meant to regulate their meaning changes into an instrumentalist distortion of the original, one might be tempted to try to combat this tendency with an even more forceful use of hermeneutic discourse, or an intense training in the social pragmatic stance, such that employees of these companies put up some kind of resistance to the instrumental, managerial mindset. That strategy neglects the very real possibility that those employees who do not embrace the managerial mindset will be fired. Only in the most rarefied contexts does discourse propel itself with its own force. We must presume that in the corporate context the dominance of managerialist discourse is in part due to a structural selection effect. Good managers lead the company, are promoted, and so on.

So the angle on this can’t be a discursive battle with the employees of regulated firms. Rather, it has to be about corporate governance. This is incidentally absolutely what bourgeois liberal law ought to be doing, in the sense that it’s law as it applies to capital owners. I wonder how long it will be before privacy scholars begin attending to this topic.


Benthall, S. (2015). Designing networked publics for communicative action. Interface, 1(1), 3.

Bohman, J., & Rehg, W. (2007). Jürgen Habermas.

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, USA.

Habermas, J. (2015). Knowledge and human interests. John Wiley & Sons.

Waldman, A. E. (2019). Privacy Law’s False Promise. Washington University Law Review, 97(3).

Land value taxation

Henry George’s Progress and Poverty, first published in 1879, is dedicated


The book is best known as an articulation of the idea of a “Single Tax [on land]”, a circa-1900 populist movement to replace all taxes with a single tax on land value. This view influenced many later land reform and taxation policies around the world; the modern name for this sort of policy is Land Value Taxation (LVT).

The gist of LVT is that the economic value of owning land comes from both the land itself and the improvements built on top of it. The value of the underlying land over time is “unearned”–it does not require labor to maintain, and it comes mainly from the artificial monopoly right over its use. It can be taxed and redistributed without distorting incentives in the economy.
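As a toy illustration of the split (all figures hypothetical, my own example, not from George), a sketch of how LVT differs from a conventional property tax:

```python
# Toy illustration of a land value tax (all figures hypothetical).
# A parcel's assessed value splits into the land itself and the
# improvements built on it; LVT falls only on the land portion.

def lvt_due(land_value: float, improvement_value: float, rate: float) -> float:
    """Land value tax: only the unimproved land value is taxed."""
    return land_value * rate

def property_tax_due(land_value: float, improvement_value: float, rate: float) -> float:
    """Conventional property tax: land plus improvements are taxed."""
    return (land_value + improvement_value) * rate

# Under LVT, adding a large building to the same parcel does not raise
# the tax bill -- this is the claimed non-distorting property.
before = lvt_due(200_000, 50_000, 0.05)
after = lvt_due(200_000, 500_000, 0.05)  # owner builds much more
assert before == after == 10_000.0

# A conventional property tax, by contrast, penalizes the improvement.
assert property_tax_due(200_000, 500_000, 0.05) > property_tax_due(200_000, 50_000, 0.05)
```

The point the sketch makes is the Georgist one: taxing only the land leaves the incentive to improve it untouched, while the conventional base grows with every improvement.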

Phillip Bess’s 2018 article provides an excellent summary of the economic arguments in favor of LVT. Michel Bauwens’s P2P Foundation article summarizes where it has been successfully put in place. Henry George was an American, but Georgism has been largely an export. General MacArthur was, it has been said, a Georgist, and this accounts for some of the land reform in Asian countries after World War II. Singapore, which owns and rents all of its land, is organized under roughly Georgist principles.

This policy is neither “left” nor “right”. Wikipedia has sprouted an article on geolibertarianism, a term that to me seems a bit sui generis. The 75th-anniversary edition of Progress and Poverty, published 1953, points out that one of the promises of communism is land reform, but it argues that this is a false promise. Rather, Georgist land reform is enlightened and compatible with market freedoms, etc.

I’ve recently dug up my copy of Progress and Poverty and begun to read it. I’m interested in mining it for ideas. What is most striking about it, to a contemporary reader, is the earnest piety of the author. Henry George was clearly a quite religious man, and wrote his lengthy and thorough political-economic analysis of land ownership out of a sincere belief that he was promoting a new world order which would preserve civilization from collapse under the social pressures of inequality.

some PLSC 2020 notes: one framing of the managerialism puzzle

PLSC 2020 was quite interesting this year.

There were a number of threads I’d like to follow up on. One of them has to do with managerialism and the ability of the state (U.S. in this context) to regulate industry.

I need to do some reading to fill some gaps in my understanding, but this is how I understand the puzzle so far.

Suppose the state wants to regulate industry. Congress passes a bill creating an agency with regulatory power with some broadly legislated mandate. The agency comes up with regulations. Businesses then implement policies to comply with the regulation. That’s how it’s supposed to go.

But in practice, there is a lot of translational work being done here. The broadly legislated mandate will be in a language that can get passed by Congress. It delegates elaboration on the specifics to the expert regulators in the agency; these regulators might be lawyers. But when the corporate bosses get the regulations (maybe from their policy staff, also lawyers?) they begin to work with it in a “managerialist” way. This means, I gather, that they manage the transition towards compliance, but in a way that minimizes the costs of compliance. If they can comply without adhering to the purpose of the regulation–which might be ever-so-clear to the lawyers who dreamed it up–so be it.

This seems all quite obvious. Of course it would work this way. If I gather correctly at this point (and maybe I don’t), the managerialist problem is: because of the translational work going on from legislative intent through administrative regulation into corporate policy into implementation, there’s a lot of potential to have information “lost in translation”, and this information loss works to the advantage of the regulated corporation, because it is using all that lost regulatory bandwidth to its advantage.

We should teach economic history (of data) as “data science ethics”.

I’ve recently come across an interesting paper published at Scipy 2019, Dusen et al.’s “Accelerating the Advancement of Data Science Education” (2019) (link). It summarizes recent trends in data science education, as modeled by UC Berkeley’s Division of Data Science, which is now the Division of Computing, Data Science, and Society (CDSS). This is a striking piece to me as I worked at Berkeley on its data science capabilities several years ago and continue to be fascinated by my alma mater, the School of Information, as it navigates being part of CDSS.

Among other interesting points in the article, two are particularly noteworthy to me. The first is that the integration of data science into the social sciences appears to have continued apace. Economics, in particular, is well represented and supported in the extended data science curriculum.

The other interesting point is the emphasis on data science ethics as an essential pillar of the educational program. The writing in this piece is consistent with what I’ve come to expect from Berkeley on this topic, and I believe it’s indicative of broad trends in academia.

The authors of this piece are explicit about their “theory of change”. What is data science ethics education supposed to accomplish?

Including training in ethical considerations at all levels of society and all steps of the data science workflow in undergraduate data science curricula could play an important role in stimulating change in industry as our students enter the workforce, perhaps encouraging companies to add ethical standards to their mission statements or to hire chief ethics officers to oversee not only day-to-day operations but also the larger social consequences of their work.

The theory of change articulated by the paper is that industry will change if ethically educated students enter the workforce. They see a future where companies change their mission statements in accord with what has been taught in data science ethics courses, or hire oversight officials.

This is, it must be noted, broadly speculative, and implies that the leadership of the firms who hire these Berkeley grads will be responsive to their employees. However, unlike in some countries in Europe, the United States does not give employees a lot of say in the governance of firms. Technology firms, such as Amazon and Google, have recently proven to be rather unfriendly to employees that attempt to organize in support of “ethics”. This is for highly conventional reasons: the management of these firms tends to be oriented towards the goal of maximizing shareholder profits, and having organized employees advocating for ethical issues that interfere with business is an obstacle to that goal.

This would be understood plainly if economics, or economic history, were taught as part of “data science ethics”. But for some reason it’s not. Information economics, which would presumably be where one would start to investigate the way incentives drive data science institutions, is perhaps too complex to be included in the essential undergraduate curriculum, despite its being critical to understanding the “data-intensive” social world we all live in now.

We forget today, often, that the original economists (Adam Smith, Alfred Marshall, etc.) were all originally moral philosophers. Economics has begun to be seen as a field designed to be in instrumental support of business practice or ideology rather than an investigation into the ethical consequences of social and material structure. That’s too bad.

Instead of teaching economic history, which would be a great way of showing students the ethical implications of technology, Berkeley is teaching Science and Technology Studies (STS) and algorithmic fairness! I’ll quote at length:

A recent trend in incorporating such ethical practices includes incorporating anti-bias algorithms in the workplace. Starting from the beginning of their undergraduate education, UC Berkeley students can take History 184D: Introduction to Science, Technology, and Society: Human Contexts and Ethics of Data, which covers the implications of computing, such as algorithmic bias. Additionally, students can take Computer Science 294: Fairness in Machine Learning, which spends a semester in resisting racial, political, and physical discrimination. Faculty have also come together to create the Algorithmic Fairness and Opacity Working Group at Berkeley’s School of Information that brainstorms methods to improve algorithms’ fairness, interpretability, and accountability. Implementing such courses and interdisciplinary groups is key to start the conversation within academic institutions, so students can mitigate such algorithmic bias when they work in industry or academia post-graduation.

Databases and algorithms are socio-technical objects; they emerge and evolve in tandem with the societies in which they operate [Latour90]. Understanding data science in this way and recognizing its social implications requires a different kind of critical thinking that is taught in data science courses. Issues such as computational agency [Tufekci15], the politics of data classification and statistical inference [Bowker08], [Desrosieres11], and the perpetuation of social injustice through algorithmic decision making [Eubanks19], [Noble18], [ONeil18] are well known to scholars in the interdisciplinary field of science and technology studies (STS), who should be invited to participate in the development of data science curricula. STS or other courses in the social sciences and humanities dealing specifically with topics related to data science may be included in data science programs.

This is all very typical. The authors are correct that algorithmic fairness and STS have been trendy ways of teaching data science ethics. It is perhaps too cynical to say that these are the trendy approaches to “data science ethics” because they are the data science ethics that Microsoft will pay for. Let that slip by as a joke.

However, it is unfortunate if students have no better intellectual equipment for dealing with “data science ethics” than this. Algorithmic fairness is a fascinating field of study with many interesting technical results. However, as has been broadly noted by STS scholars, among others, the successful use of “algorithmic fairness” technology depends on the social context in which it is deployed. Often, “fairness” is achieved through greater scientific and technical integrity: for example, properly deducing cause and effect rather than lazily applying techniques that find correlation. But the ethical challenges in the workplace are often not technical challenges. They are the challenges of managing the economic incentives of the firm, and how these affect the power structures within the firm (Metcalf & Moss, 2019). This is apparently not material that is being taught at Berkeley to data science students.
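The point about cause and effect versus correlation can be made concrete with a small simulation. This is my own illustrative sketch, not from the post: a confounder drives both group membership and the outcome, so a naive correlation suggests a group effect that disappears once the confounder is adjusted for.

```python
import numpy as np

# Hypothetical example: a confounder z (say, prior qualification)
# drives both group membership g and outcome y. The raw g-y
# correlation then looks like a group effect even though y does
# not depend on g at all.
rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                           # confounder
g = (z + rng.normal(size=n) > 0).astype(float)   # group membership, driven by z
y = 2.0 * z + rng.normal(size=n)                 # outcome depends on z, NOT on g

# Naive analysis: correlate group with outcome.
naive_corr = np.corrcoef(g, y)[0, 1]

# More careful analysis: regress y on both g and z. The coefficient
# on g is the group effect after adjusting for the confounder.
X = np.column_stack([np.ones(n), g, z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted_group_effect = beta[1]

print(f"naive correlation(g, y): {naive_corr:.2f}")
print(f"group effect adjusting for z: {adjusted_group_effect:.2f}")
```

The naive correlation is substantial while the adjusted group effect is near zero; a fairness audit built on the first number would be misleading.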

This more careful look at the social context in which technology is being used is supposed to be what STS is teaching. But, all too often, this is not what it’s doing. I’ve written elsewhere why STS is not the solution to “tech ethics”. Part of (e.g. Latourian) STS training is a methodological, if not intellectual, relativistic skepticism about science and technology itself (Carroll, 2006). As a consequence, it is, of necessity, a humanistic or anthropological field, using “interpretivist” methods, with weak claims to generalizability. It is, first and foremost, an academic field, not an applied one. The purpose of STS is to generate fascinating critiques.

There are many other social sciences that have different aims, such as the aim of building consensus around what social and economic conditions are in order to motivate political change. These social sciences have ethical import. But they are built around a different theory of change. They are aimed at the student as a citizen in a democracy, not as an employee at a company. And while I don’t underestimate the challenges of advocating for designing education to empower students as public citizens in this economic climate, it must nevertheless be acknowledged, as an ethical matter, that a “data science ethics” curriculum that does not address the politics behind those difficulties will be an anemic one, at best.

There is a productive way forward. It requires, however, interdisciplinary thinking that may be uncomfortable or, in the end, impossible for many established institutions. If students are taught a properly historicized and politically substantive “data science ethics”, not in the mode of an STS-based skepticism about technology and science, but rather as economic history that is informed by data science (computational and inferential thinking) as an intellectual foundation, then ethical considerations would not need to be relegated to a hopeful afterthought invested in a theory of corporate change that is ultimately a fantasy. Rather, it would put “data science ethics” on a scientific foundation and help civic education justify itself as a matter of social fact.

Addendum: Since the social sciences aren’t doing this work, it looks like some computer scientists are doing it instead. This report by Narayanan provides a recent economic history of “dark patterns” since the 1970s–an example of how historical research can put “data science ethics” in context.


Carroll, P. (2006). Science of Science and Reflexivity. Social Forces, 85(1), 583-585.

Metcalf, J., & Moss, E. (2019). Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics. Social Research: An International Quarterly, 86(2), 449-476.

Van Dusen, E., Suen, A., Liang, A., & Bhatnagar, A. (2019). Accelerating the Advancement of Data Science Education. Proceedings of the 18th Python in Science Conference (SciPy 2019).

Considering “Neither Hayek nor Habermas”

I recently came upon an article from 2007, Cass Sunstein’s “Neither Hayek nor Habermas”, arguing that “the blogosphere” would serve neither as an effective means of gathering knowledge nor as a field for consensus-building. There is no price mechanism, so Hayekian principles do not apply. And there is polarization and what would later be called “echo chambers” to prevent real deliberation.

In an era where online “misinformation” is a household concern, this political analysis seems quite prescient. There never was much reason to expect free digital speech to amount to much besides a warped mirror of the public’s preexisting biases.

A problem with both Hayekian and Habermasian theory, when used this way, is the lack of institutional specificity. The free Web is a plurality of interconnected institutions, with content and traffic flowing constantly between differently designed sociotechnical properties. It is a naivete of all forms of liberal thought that useful social structure will arise spontaneously from the interaction between individuals as though through some magnetic force. Rather, social structures precede and condition the very possibility of personhood and discourse in the first place. “Anyone who says differently is selling something.”

Indeed, despite all the noise on the Internet, there are Hayekian accumulations of information wherever there is the institution of the market. One reason why Amazon has become such a compelling force is because of its effective harnessing of reviews on products. Free speech on the Internet has been just fine for the market.

What about for democracy?

If free digital speech has failed to result in valuable political deliberation, it is wrong to fault the social media platforms. Habermas expected that money and power would distort public discourse; a privately-owned social media platform is a manifestation of this distortion. The locus of valuable political deliberation, therefore, must be in specialized public institutions: most notably, those institutions dedicated to legislation and regulation. In other words, it is the legal system that is, at its best, the site of Habermasian discourse. Not Twitter.

If misinformation on the Internet is “a threat to our democracy”, the problem cannot be solved by changing the content moderation policies on commercial social media platforms. The problem can only be solved by fixing those institutions of public relevance where people’s speech acts matter for public policy.

The closest thing to such a Habermasian institution in the Internet today is perhaps the Request for Comments process on administrative regulations in the U.S. There, citizens can freely express their policy ideas and those ideas are, when the system is working, moderated and channeled into nuanced changes to policy.

This somewhat obscure and technocratic government function is overshadowed and sometimes overturned by electoral politics in the U.S., which are at this point anything but deliberative. For various reasons concerning the design of electoral and legislative institutions in the U.S., politics is only superficially discursive. It is in fact a power play, a competition over rents. Under such conditions, we would expect “misinformation” to thrive, because public opinion is mostly inconsequential. There is nothing, pragmatically, to incentivize and ground the hard work of deliberation.

It is perhaps interesting to imagine what kind of self-governing institution would deserve this kind of investment of deliberation.


Benthall, Sebastian. “Designing networked publics for communicative action.” Interface 1.1 (2015): 3.

Bruns, Axel. “It’s not the technology, stupid: How the ‘Echo Chamber’and ‘Filter Bubble’metaphors have failed us.” (2019).

Sunstein, Cass R. “Neither Hayek nor Habermas.” Public Choice 134.1-2 (2008): 87-95.

Contradictions in Freedom: the U.S. / China information ideology divide

Reflecting on H.R. McMaster’s How China Sees the World essay about the worldview of China’s government and how it is at odds with U.S. culture and interests, I am struck by how much of these tensions are about information ideology. By information ideology, I mean “information ethics”, but applied to legitimize state power.

I certainly don’t claim any expertise on the subject of China–I’ve never been there! But McMaster’s argument, as written, is revealing. McMaster points to the ambiguity of China’s position: it is both ambitious and insecure. But his essay is just as revealing of the contradictions in U.S. information ideology as it is of the CCP’s political ambitions.

The distinctions McMaster draws between China and the U.S. are familiar. Rather than become “more like the West” as it modernizes, China is developing and building a different model. McMaster identifies several features of Chinese internal and foreign policy, which he claims is inspired by a historical period in which China was a major world power able to exact tribute from less powerful states.

  • Suppression of internal dissent–including Tibet and religious groups.
  • Creation of a surveillance apparatus.
  • Aligning the ideology taught in the universities with the state’s ideological interest.
  • An economic policy geared towards extracting “tribute”–which is another way of saying that they are trying to capture surplus. The economic policies include:
    • “Made in China 2025” — becoming a science and technology leader. McMaster criticizes the part of this policy which involves forced technology transfer for foreign firms trying to access the Chinese market.
    • The “Belt and Road Initiative”: lending money to other countries for infrastructure improvements, which then means client nations are debtors.
    • “Military-Civil Fusion” — All citizens and organizations are part of the state intelligence system. This means that Chinese companies and researchers, even when acquiring and researching at foreign companies or universities, are encouraged to feed technology back up to the state.

McMaster’s critique of China, then, starts with human rights abuses but settles on the problem of “cybertheft”–the transfer of technology to the Chinese state from U.S. funded research labs and companies.

This transfer is both militarily and economically significant. From the perspective of a self-interested U.S. policy, these criticisms are alarming. But the blending of the human rights moralizing with the economic complaint is revelatory of McMaster’s own information ideology. The writing blends the human rights interests of individuals and the economic interests of large corporations as if this were a seamless logical transition. In reality, this is not a coherent line of reasoning.

Chinese espionage is successful in part because the party is able to induce cooperation, wittingly or unwittingly, from individuals, companies, and political leaders. Companies in the United States and other free-market economies often do not report theft of their technology, because they are afraid of losing access to the Chinese market, harming relationships with customers, or prompting federal investigations.

Here, for example, the idea that Chinese espionage is subversively undermining the will of individuals is blended together with what we must presume is an explicit technology transfer requirement for foreign companies trying to sell to the Chinese market. The first is an Orwellian dystopia. The second is a form of overt trade policy. It is strange that McMaster doesn’t see a bright line of difference between these two ways of doing “espionage”.

The collapsing of American information ideology is even clearer in McMaster’s articulation of “Western liberal” strengths. Putting aside whether, as Goldsmith and Woods have recently argued, U.S. content moderation strategies are looking more like Chinese ones all the time, there is something dubious about McMaster’s appeal to the perhaps greatest of U.S. freedoms, the freedom of speech, given his preceding argument:

For one thing, those “Western liberal” qualities that the Chinese see as weaknesses are actually strengths. The free exchange of information and ideas is an extraordinary competitive advantage, a great engine of innovation and prosperity. (One reason Taiwan is seen as such a threat to the People’s Republic is because it provides a small-scale yet powerful example of a successful political and economic system that is free and open rather than autocratic and closed.) Freedom of the press and freedom of expression, combined with robust application of the rule of law, have exposed China’s predatory business tactics in country after country—and shown China to be an untrustworthy partner. Diversity and tolerance in free and open societies can be unruly, but they reflect our most basic human aspirations—and they make practical sense too. Many Chinese Americans who remained in the United States after the Tiananmen Square massacre were at the forefront of innovation in Silicon Valley.

It is ironic that, given that McMaster’s core criticism of China is its effectiveness at causing information and ideas to flow into its security apparatus for the sake of its prosperity, he chooses to highlight freedom of expression as the key to U.S. and liberal innovation. While I personally agree that “freedom of expression” is good for science and innovation, McMaster apparently doesn’t see how limiting technology transfer is itself a limitation on the freedom of exchange of information.

McMaster uses the term “rule of law” here to mean primarily, it would seem, the enforcement of intellectual property rights. However, some of the cases he raises as problematic are those where a corporation trades access to IP in return for market access. This could be seen as a violation of IP. But it might be more productive to view it more objectively as a trade–perhaps a trade that in the long run is not in the interest of the U.S. security state, but one that many private companies willingly engaged in. Elsewhere, McMaster points to the technology transfer via Chinese researchers from U.S. funded university research labs. While this upsets the geopolitical balance of power, there are many who think that this is actually how university research labs are supposed to work. Science is at its best with “freedom”, with public results, in part because it is the exposure to public criticism by the international community of scientists that gives its results legitimacy.

Viewed from the perspective of open scientific cooperation, McMaster’s main complaint against China boils down to the idea that it is free-riding, in the economic sense, on U.S. investments in science and technology. This is irksome but also in a real sense how scientific progress is supposed to go. McMaster’s recommendations amount to economic and intellectual sanctioning of China: excluding its companies from the stock market, and punishing U.S. companies that knowingly aid in China’s human rights abuses. However well-motivated these ideas, they don’t resolve the core problem at the heart of these relations.

That problem is this: the U.S.’s international leadership has involved, in part, its enforcement of intellectual property rights. These intellectual property rights have allowed U.S. companies to extract rents and have prevented other countries from developing competitive militaries. U.S. technological supremacy has, among other things, made the U.S. an effective exporter of military technology. But this export trade only works if other countries cannot reverse engineer the technology. In some cases, they have been prevented from doing this by “rule of law”–U.S.-led international law–but now that soft power is fading.

So McMaster’s policy recommendations are an attempt to carve out a separate sphere of influence in which U.S. intellectual property titles are maintained. This boils down to the idea that in some places, U.S. telecom companies should continue to extract IP rents, instead of Chinese state-owned telecom.

McMaster argues for “strategic empathy”–seeing the world the way the “other” sees it. But a simpler approach might be viewing the world “strategically”–i.e., in terms of incentives and the balance of power in the world. A question facing the U.S. going forward is whether it can make being a tributary of the U.S. intellectual property regime (not to mention debt regime–discussing the history of the IMF is out of scope of this post) more compelling than being a tributary of the Chinese state. For that to work, it may need to get better clarity about its own ideological interests, and stop conflating its economic incentives with moralistic flappery.

Tech Law and Political Economy

It has been an intellectually exciting semester at NYU’s Information Law Institute and its regular, more open research meeting, the Privacy Research Group. More than ever in my experience, we’ve been developing a clarity about the political economy of technology together. I am especially grateful to my colleagues Aaron Shapiro, Salome Viljoen, and Jake Goldenfein for introducing me to a lot of very enlightening new literature. This blog post summarizes what I’ve recently been exposed to via these discussions.

  • Perhaps kicking off the recent shift in thinking about law and political economy is the long-time-coming publication of Julie Cohen’s book, Between Truth and Power. While many of the arguments have been available in article form for some time, the book gives these arguments more gravitas, and enabled Cohen to do a bit of a speaking tour in the NYC area some months ago. Having a heavy-hitter in the field deliver such authoritative and incisive analysis has been, in my opinion, empowering to my generation of scholars whose critical views have not enjoyed the same legitimacy. Exposure to this has sent my own work in a new direction recently.
  • In a complementary move inspired perhaps by the political climate around the Democratic primary, the ILI group has been getting acquainted with the Law and Political Economy (LPE) field/attitude/blog. Perhaps best described as a left wing, institutionalist legal realist school of thought, the position is articulated in the referenced article by Britton-Purdy et al. (2020), in this manifesto, and more broadly on this blog. The mastermind of the movement is apparently Amy Kapczynski, but there are many fellow travelers–some internet luminaries, some very approachable colleagues. The tent seems inclusive.
  • LPE is, of course, a response to and play on “Law and Economics”, the once-dominant field of legal scholarship that legitimized so much neoliberal policy-making. What is nice about LPE is that, beyond being a rehash of “critical” legal attitudes, it grounds itself in economic analysis, albeit a more expansive form of economic understanding that includes social structures that affect, for example, social group inequalities. This creates room for heterodox economic views by providing them with a policy-oriented audience. Jake Goldenfein and I have a paper that we are excited to publish soon, “Data Science and the Decline of Liberal Law and Ethics”, which takes aim at the individualist assumptions of liberal regulatory regimes and their insufficiency in regulating platform companies. I don’t think we had LPE in mind as we wrote that article, but I believe it will be a fresh complementary view. Unfortunately, the conference where we planned to present it has been delayed by COVID.
  • Once the question of the real political economy of technology is raised, it opens up a deep theoretical can of worms that is as far as I can tell fractured across a variety of fields. One major source of confusion here is that Economics itself, as a field, doesn’t seem to have a stable conclusion about the role of technology in the economy. An insightful look into the history of Economics and its inability to correctly categorize technology–especially technology as a facet of capital–can be found in Nitzan (1998). Nitzan elucidates a distinction from Veblen (!) between industry and business: industry aims to produce; business aims to make money. And capitalism, argues Nitzan, winds up ultimately being about the capacity of absentee owners to claim sources of revenue. The distinction between these fields explains why business so often restricts production. As we noted in our ILI discussion, this is immediately relevant to anything digital, because intellectual property is always a way of restricting production in order to make a source of revenue.
  • I take a somewhat more balanced view myself, seeing an economy with more than one kind of capital in it. I’m fairly Bourdieusian in this way. On this point, I’ve had recommended to me Sadowski’s (2019) article that explicitly draws the line from Marx to Bourdieu and connects it with the contemporary digital economy. This is on a new short list for me.


Benthall, S, and Goldenfein, J., forthcoming. Data Science and the Decline of Liberal Law and Ethics. Ethics of Data Science Conference 2020.

Britton-Purdy, J.S., Grewal, D.S., Kapczynski, A. and Rahman, K.S., 2020. BUILDING A LAW-AND-POLITICAL-ECONOMY FRAMEWORK: BEYOND THE TWENTIETH-CENTURY SYNTHESIS. Yale Law Journal, Forthcoming.

Nitzan, J., 1998. Differential accumulation: towards a new political economy of capital. Review of International Political Economy, 5(2), pp.169-216.

Sadowski, J., 2019. When data is capital: Datafication, accumulation, and extraction. Big Data & Society, 6(1), p.2053951718820549.

Internet service providers are utilities

On Sunday, New York State is closing all non-essential brick-and-mortar businesses and ordering all workers who are able to do so to work from home. Zoom meetings from home are now the norm for people working for both the private sector and government.

One might reasonably want to know whether the internet service providers (ISPs) are operating normally during this period. I had occasion to call up Optimum yesterday and ask. I was told, very helpfully, “We’re doing business as usual because we are like a utility.”

It’s quite clear that the present humane and responsible approach to COVID-19 depends on broad and uninterrupted access to the Internet to homes. The government and businesses would cease to function without it. Zoom meetings are performing the role that simple audio telephony once did. And executive governments are recognizing this as they use their emergency powers.

There has been a strain of “technology policy” thought that some parts of “the tech sector” should be regulated as utilities. In 2015, the FCC reclassified broadband access as a utility as part of their Net Neutrality decision. In 2018, this position was reversed. This was broadly seen as a win for the telecom companies.

One plausible political consequence of COVID-19 is the reconsideration of the question of whether ISPs are utilities or not. They are.