Digifesto

Managerialism and Habermas

Managerialism is an “in” topic recently in privacy scholarship (Cohen, 2019; Waldman, 2019). In Waldman’s (2019) formulation, the managerialism problem is, roughly: privacy regulations are written with a certain substantive intent, but the for-profit firms that are the object of these regulations interpret them either as a bothersome constraint on otherwise profitable activity, or else as means to the ends of profitability, efficiency, and so on themselves. In other words, the substance of the regulations is subjugated to the substance of the goals of corporate management. Managerialism.

This is exactly what anybody who has worked in a corporate tech environment would expect. The scholarly accomplishment of presenting these bare facts to a legal academic audience is significant because employees of these corporations are most often locked up by strict NDAs. So while the point is obvious, I mean that in the positive sense that it should be taken as an unquestioned background assumption from now on, not that it shouldn’t have been “discovered” by this field in a different way.

As a “critical” observation, it stands. It raises a few questions:

  • Is this a problem?
  • If so, for whom?
  • If so, what can be done about it?

Here the “critical” method reaches, perhaps, its limits. Notoriously, critical scholarship plays on its own ambiguity, dancing between the positions of “criticism”, or finding of actionable fault, and “critique”, a merely descriptive account that is at most suggestive of action. This ambiguity preserves the standing of the critical scholar. They need never be wrong.

Responding to the situation revealed by this criticism requires a differently oriented kind of work.

Habermas and human interests

A striking thing about the world of policy and legal scholarship in the United States is that nobody is incentivized to teach or read anything written by past generations, however much it synthesized centuries of knowledge, and so nothing ever changes. For example, Habermas’s Knowledge and Human Interests (KHI), originally published 1972, arguably lays out the epistemological framework we would want for understanding the managerialism issue raised by recent scholars. We should expect Habermas to anticipate the problems raised by capitalism in the 21st century because his points are based on a meticulously constructed, historically informed, universalist, transcendental form of analysis. This sort of analysis is not popular in the U.S.; I have my theories about why. But I digress.

A key point from Habermas (who is summing up and reiterating a lot of other work originating, if it’s possible to say any such thing meaningfully, in Max Weber) is that it’s helpful to differentiate between different kinds of knowledge based on the “human interests” that motivate them. In one formulation (the one in KHI), there are three categories:

  1. The technical interest (from techne) in controlling nature, which leads to the “empirical-analytic”, or positivist, sciences. These correspond to fields like engineering and the positivist social sciences.
  2. The pragmatic interest (from praxis) in mutual understanding, which would guide collective action and the formation of norms, leads to the “hermeneutic” sciences. These correspond to fields like history and anthropology and other homes of “interpretivist” methods.
  3. The emancipatory interest, in exposing what has been falsely reified as objective fact as socially contingent. This leads to the critical sciences, which I suppose corresponds to what is today media studies.

This is a helpful breakdown, though I should say it’s not Habermas’s “mature” position, which is quite a bit more complicated. However, it is useful for the purposes of this post because it tracks the managerialist situation raised by Waldman so nicely.

I’ll need to elaborate on one more thing before applying this to the managerialist framing, which is to skip past several volumes of Habermas’s oeuvre and get to The Theory of Communicative Action, volume II, where he gets to the punchline. By now he’s developed the socially pragmatic interest into the basis for “communicative rationality”, a discursive discipline in which individual interests are excluded and instead a diversely perspectival but nevertheless measured conversation takes place about how the social world should normatively be ordered. But where is this field in actuality? Money and power, the “steering media”, are always mussing up this conversation in the “public sphere”. So “public discourse” becomes a very poor proxy for communicative action. Rather–and this is the punchline–the actually existing field of communicative rationality, which establishes substantive norms while nevertheless being “disinterested” with respect to the individual participants, is the law. That’s what legal scholarship is for.

Applying the Habermasian frame to managerialism

So here’s what I think is going on. Waldman is pointing out that whereas regulations are being written with a kind of socially pragmatic interest in their impact on the imagined field of discursively rational participants as represented by legal scholarship, corporate managers are operating in the technical mode in order to, say, maximize shareholder profits as is their legally mandated fiduciary duty. And so the meaning of the regulation changes. Because words don’t contain meaning but rather take their meaning from the field in which they operate. A privacy policy that once spoke to human dignity gets misheard and speaks instead to the inconvenience of compliance costs and a PR department’s assessment of the competitive benefit of users’ trust.

I suppose this is bothersome from the legal perspective because it’s a bummer when something one feels is an important accomplishment of one’s field is misused by another. But I find the professional politics here, as everywhere, a bit dull and petty.

Crucially, the managerialism problem is not dull and petty–I wouldn’t be writing all this if I thought so. However, the frustrating aspect of this discourse is that because of the absence of philosophical grounding in this debate, it misses what’s at stake. This is unfortunately characteristic of much American legal analysis. It’s missing because when American scholars address this problem, they do so primarily in the descriptive critical mode, one that is empirical and in a sense positivist, but without the interest in control. This critical mode leads to cynicism. It rarely leads to collective action. Something is missing.

Morality

A missing piece of the puzzle, one which cannot ever be accomplished through empirical descriptive work, is the establishment of the moral consequence of managerialism which is that human beings are being treated as means and not ends, in contradiction with the Kantian categorical imperative, or something like that. Indeed, it is this flavor of moral maxim that threads its way up through Marx into the Frankfurt School literature with all of its well-trod condemnation of instrumental reason and the socially destructive overreach of private capital. This is, of course, what Habermas was going on about in the first place: the steering media, the technical interest, positivist science, etc. as the enemy of politically legitimate praxis based on the substantive recognition of the needs and rights of all by all.

It would be nice, one taking this hard line would say, if all laws were designed with this kind of morality in mind, and if everybody who followed them did so out of a rationally accepted understanding of their import. That would be a society that respected human dignity.

We don’t have that. Instead, we have managerialism. But we’ve known this for some time. All these critiques are effectively mid 20th century.

So now what?

If the “problem” of managerialism is that when regulations reach the firms that they are meant to regulate their meaning changes into an instrumentalist distortion of the original, one might be tempted to try to combat this tendency with an even more forceful use of hermeneutic discourse or an intense training in the social pragmatic stance, such that employees of these companies put up some kind of resistance to the instrumental, managerial mindset. That strategy neglects the very real possibility that those employees who do not embrace the managerial mindset will be fired. Only in the most rarified contexts does discourse propel itself with its own force. We must presume that in the corporate context the dominance of managerialist discourse is in part due to a structural selection effect. Good managers lead the company, are promoted, and so on.

So the angle on this can’t be a discursive battle with the employees of regulated firms. Rather, it has to be about corporate governance. This is incidentally absolutely what bourgeois liberal law ought to be doing, in the sense that it’s law as it applies to capital owners. I wonder how long it will be before privacy scholars begin attending to this topic.

References

Benthall, S. (2015). Designing networked publics for communicative action. Interface, 1(1), 3.

Bohman, J., & Rehg, W. (2007). Jürgen Habermas.

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, USA.

Habermas, J. (2015). Knowledge and human interests. John Wiley & Sons.

Waldman, A. E. (2019). Privacy Law’s False Promise. Washington University Law Review, 97(3).

Land value taxation

Henry George’s Progress and Poverty, first published in 1879, is dedicated

TO THOSE WHO, SEEING THE VICE AND MISERY THAT SPRING FROM THE UNEQUAL DISTRIBUTION OF WEALTH AND PRIVILEGE, FEEL THE POSSIBILITY OF A HIGHER SOCIAL STATE AND WOULD STRIVE FOR ITS ATTAINMENT

The book is best known as an articulation of the idea of a “Single Tax [on land]”, a circa 1900 populist movement to replace all taxes with a single tax on land value. This view influenced many later land reform and taxation policies around the world; the modern name for this sort of policy is Land Value Taxation (LVT).

The gist of LVT is that the economic value of owning land comes both from the land itself and from improvements built on top of it. The value of the underlying land over time is “unearned”–it does not require labor to maintain, and it comes mainly from the artificial monopoly right over its use. This can be taxed and redistributed without distorting incentives in the economy.
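As a toy illustration of the incentive argument (all figures hypothetical, chosen only for arithmetic clarity), compare a conventional property tax, which falls on land plus improvements, with an LVT that falls on land value alone:

```python
# Toy comparison of a conventional property tax vs. a land value tax (LVT).
# All numbers are hypothetical and purely illustrative.

def property_tax(land_value, improvement_value, rate):
    """Conventional tax: falls on land *and* improvements."""
    return rate * (land_value + improvement_value)

def land_value_tax(land_value, improvement_value, rate):
    """LVT: falls on the unimproved land value only."""
    return rate * land_value

LAND = 200_000       # value of the underlying land
BUILDING = 300_000   # value of a building the owner might construct
RATE = 0.02          # a 2% annual tax rate

# Under a property tax, constructing the building raises the tax bill...
assert property_tax(LAND, BUILDING, RATE) > property_tax(LAND, 0, RATE)

# ...while under an LVT the bill is unchanged: improvements go untaxed,
# so the tax does not penalize the decision to build.
assert land_value_tax(LAND, BUILDING, RATE) == land_value_tax(LAND, 0, RATE)
```

The sketch captures only the incentive claim, of course; the redistribution side of George's argument is a separate matter.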

Phillip Bess’s 2018 article provides an excellent summary of the economic arguments in favor of LVT. Michel Bauwens’s P2P Foundation article summarizes where it has been successfully put in place. Henry George was an American, but Georgism has largely been an export. General MacArthur was, it has been said, a Georgist, and this accounts for some of the land reform in Asian countries after World War II. Singapore, which owns and rents all of its land, is organized under roughly Georgist principles.

This policy is neither “left” nor “right”. Wikipedia has sprouted an article on geolibertarianism, a term that to me seems a bit sui generis. The 75th-anniversary edition of Progress and Poverty, published 1953, points out that one of the promises of communism is land reform, but it argues that this is a false promise. Rather, Georgist land reform is enlightened and compatible with market freedoms, etc.

I’ve recently dug up my copy of Progress and Poverty and begun to read it. I’m interested in mining it for ideas. What is most striking about it, to a contemporary reader, is the earnest piety of the author. Henry George was clearly a quite religious man, and wrote his lengthy and thorough political-economic analysis of land ownership out of a sincere belief that he was promoting a new world order which would preserve civilization from collapse under the social pressures of inequality.

some PLSC 2020 notes: one framing of the managerialism puzzle

PLSC 2020 was quite interesting this year.

There were a number of threads I’d like to follow up on. One of them has to do with managerialism and the ability of the state (U.S. in this context) to regulate industry.

I need to do some reading to fill some gaps in my understanding, but this is how I understand the puzzle so far.

Suppose the state wants to regulate industry. Congress passes a bill creating an agency with regulatory power with some broadly legislated mandate. The agency comes up with regulations. Businesses then implement policies to comply with the regulation. That’s how it’s supposed to go.

But in practice, there is a lot of translational work being done here. The broadly legislated mandate will be in a language that can get passed by Congress. It delegates elaboration on the specifics to the expert regulators in the agency; these regulators might be lawyers. But when the corporate bosses get the regulations (maybe from their policy staff, also lawyers?) they begin to work with it in a “managerialist” way. This means, I gather, that they manage the transition towards compliance, but in a way that minimizes the costs of compliance. If they can comply without adhering to the purpose of the regulation–which might be ever-so-clear to the lawyers who dreamed it up–so be it.

This seems all quite obvious. Of course it would work this way. If I gather correctly at this point (and maybe I don’t), the managerialist problem is: because of the translational work going on from legislative intent through to administrative regulation into corporate policy into implementation, there’s a lot of potential to have information “lost in translation”, and this information loss works to the advantage of the regulated corporation, because it is using all that lost regulatory bandwidth to its advantage.

We should teach economic history (of data) as “data science ethics”.

I’ve recently come across an interesting paper published at SciPy 2019, Van Dusen et al.’s “Accelerating the Advancement of Data Science Education” (2019) (link). It summarizes recent trends in data science education, as modeled by UC Berkeley’s Division of Data Science, which is now the Division of Computing, Data Science, and Society (CDSS). This is a striking piece to me as I worked at Berkeley on its data science capabilities several years ago and continue to be fascinated by my alma mater, the School of Information, as it navigates being part of CDSS.

Among other interesting points in the article, two are particularly noteworthy to me. The first is that the integration of data science into the social sciences appears to have continued apace. Economics, in particular, is well represented and supported in the extended data science curriculum.

The other interesting point is the emphasis on data science ethics as an essential pillar of the educational program. The writing in this piece is consistent with what I’ve come to expect from Berkeley on this topic, and I believe it’s indicative of broad trends in academia.

The authors of this piece are explicit about their “theory of change”. What is data science ethics education supposed to accomplish?

Including training in ethical considerations at all levels of society and all steps of the data science workflow in undergraduate data science curricula could play an important role in stimulating change in industry as our students enter the workforce, perhaps encouraging companies to add ethical standards to their mission statements or to hire chief ethics officers to oversee not only day-to-day operations but also the larger social consequences of their work.

The theory of change articulated by the paper is that industry will change if ethically educated students enter the workforce. They see a future where companies change their mission statements in accord with what has been taught in data science ethics courses, or hire oversight officials.

This is, it must be noted, broadly speculative, and implies that the leadership of the firms who hire these Berkeley grads will be responsive to their employees. However, unlike in some countries in Europe, the United States does not give employees a lot of say in the governance of firms. Technology firms, such as Amazon and Google, have recently proven to be rather unfriendly to employees that attempt to organize in support of “ethics”. This is for highly conventional reasons: the management of these firms tends to be oriented towards the goal of maximizing shareholder profits, and having organized employees advocating for ethical issues that interfere with business is an obstacle to that goal.

This would be understood plainly if economics, or economic history, was taught as part of “data science ethics”. But it’s not for some reason. Information economics, which would presumably be where one would start to investigate the way incentives drive data science institutions, is perhaps too complex to be included in the essential undergraduate curriculum, despite its being perhaps critical to understanding the “data intensive” social world we all live in now.

We forget today, often, that the original economists (Adam Smith, Alfred Marshall, etc.) were all originally moral philosophers. Economics has begun to be seen as a field designed to be in instrumental support of business practice or ideology rather than an investigation into the ethical consequences of social and material structure. That’s too bad.

Instead of teaching economic history, which would be a great way of showing students the ethical implications of technology, Berkeley is teaching Science and Technology Studies (STS) and algorithmic fairness! I’ll quote at length:

A recent trend in incorporating such ethical practices includes incorporating anti-bias algorithms in the workplace. Starting from the beginning of their undergraduate education, UC Berkeley students can take History 184D: Introduction to Science, Technology, and Society: Human Contexts and Ethics of Data, which covers the implications of computing, such as algorithmic bias. Additionally, students can take Computer Science 294: Fairness in Machine Learning, which spends a semester in resisting racial, political, and physical discrimination. Faculty have also come together to create the Algorithmic Fairness and Opacity Working Group at Berkeley’s School of Information that brainstorms methods to improve algorithms’ fairness, interpretability, and accountability. Implementing such courses and interdisciplinary groups is key to start the conversation within academic institutions, so students can mitigate such algorithmic bias when they work in industry or academia post-graduation.


Databases and algorithms are socio-technical objects; they emerge and evolve in tandem with the societies in which they operate [Latour90]. Understanding data science in this way and recognizing its social implications requires a different kind of critical thinking that is taught in data science courses. Issues such as computational agency [Tufekci15], the politics of data classification and statistical inference [Bowker08], [Desrosieres11], and the perpetuation of social injustice through algorithmic decision making [Eubanks19], [Noble18], [ONeil18] are well known to scholars in the interdisciplinary field of science and technology studies (STS), who should be invited to participate in the development of data science curricula. STS or other courses in the social sciences and humanities dealing specifically with topics related to data science may be included in data science programs.

This is all very typical. The authors are correct that algorithmic fairness and STS have been trendy ways of teaching data science ethics. It is perhaps too cynical to say that these are trendy approaches to “data science ethics” because they are the data science ethics that Microsoft will pay for. Let that slip as a joke.

However, it is unfortunate if students have no better intellectual equipment for dealing with “data science ethics” than this. Algorithmic fairness is a fascinating field of study with many interesting technical results. However, as has been broadly noted by STS scholars, among others, the successful use of “algorithmic fairness” technology depends on the social context in which it is deployed. Often, “fairness” is achieved through greater scientific and technical integrity: for example, properly deducing cause and effect rather than lazily applying techniques that find correlation. But the ethical challenges in the workplace are often not technical challenges. They are the challenges of managing the economic incentives of the firm, and how these affect the power structures within the firm (Metcalf et al., 2019). This is apparently not material that is being taught at Berkeley to data science students.

This more careful look at the social context in which technology is being used is supposed to be what STS is teaching. But, all too often, this is not what it’s doing. I’ve written elsewhere why STS is not the solution to “tech ethics”. Part of (e.g. Latourian) STS training is a methodological, if not intellectual, relativistic skepticism about science and technology itself (Carroll, 2006). As a consequence, it is, of necessity, a humanistic or anthropological field, using “interpretivist” methods, with weak claims to generalizability. It is, first and foremost, an academic field, not an applied one. The purpose of STS is to generate fascinating critiques.

There are many other social sciences that have different aims, such as the aim of building consensus around what social and economic conditions are in order to motivate political change. These social sciences have ethical import. But they are built around a different theory of change. They are aimed at the student as a citizen in a democracy, not as an employee at a company. And while I don’t underestimate the challenges of advocating for designing education to empower students as public citizens in this economic climate, it must nevertheless be acknowledged, as an ethical matter, that a “data science ethics” curriculum that does not address the politics behind those difficulties will be an anemic one, at best.

There is a productive way forward. It requires, however, interdisciplinary thinking that may be uncomfortable or, in the end, impossible for many established institutions. If students are taught a properly historicized and politically substantive “data science ethics”, not in the mode of an STS-based skepticism about technology and science, but rather as economic history that is informed by data science (computational and inferential thinking) as an intellectual foundation, then ethical considerations would not need to be relegated to a hopeful afterthought invested in a theory of corporate change that is ultimately a fantasy. Rather, it would put “data science ethics” on a scientific foundation and help civic education justify itself as a matter of social fact.

Addendum: Since the social sciences aren’t doing this work, it looks like some computer scientists are doing it instead. This report by Narayanan provides a recent economic history of “dark patterns” since the 1970s–an example of how historical research can put “data science ethics” in context.

References

Carroll, P. (2006). Science of Science and Reflexivity. Social Forces, 85(1), 583-585.

Metcalf, J., & Moss, E. (2019). Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics. Social Research: An International Quarterly, 86(2), 449-476.

Van Dusen, E., Suen, A., Liang, A., & Bhatnagar, A. (2019). Accelerating the Advancement of Data Science Education. Proceedings of the 18th Python in Science Conference (SciPy 2019).

Considering “Neither Hayek nor Habermas”

I recently came upon an article from 2007, Cass Sunstein’s “Neither Hayek nor Habermas”, arguing that “the blogosphere” would serve neither as an effective way of gathering knowledge nor as a field for consensus-building. There is no price mechanism, so Hayekian principles do not apply. And there is polarization and what would later be called “echo chambers” to prevent real deliberation.

In an era where online “misinformation” is a household concern, this political analysis seems quite prescient. There never was much reason to expect free digital speech to amount to much besides a warped mirror of the public’s preexisting biases.

A problem with both Hayekian and Habermasian theory, when used this way, is the lack of institutional specificity. The free Web is a plurality of interconnected institutions, with content and traffic flowing constantly between differently designed sociotechnical properties. It is a naivete of all forms of liberal thought that useful social structure will arise spontaneously from the interaction between individuals as though through some magnetic force. Rather, social structures precede and condition the very possibility of personhood and discourse in the first place. “Anyone who says differently is selling something.”

Indeed, despite all the noise on the Internet, there are Hayekian accumulations of information wherever there is the institution of the market. One reason why Amazon has become such a compelling force is because of its effective harnessing of reviews on products. Free speech on the Internet has been just fine for the market.

What about for democracy?

If free digital speech has failed to result in valuable political deliberation, it is wrong to fault the social media platforms. Habermas expected that money and power would distort public discourse; a privately-owned social media platform is a manifestation of this distortion. The locus of valuable political deliberation, therefore, must be in specialized public institutions: most notably, those institutions dedicated to legislation and regulation. In other words, it is the legal system that is, at its best, the site of Habermasian discourse. Not Twitter.

If misinformation on the Internet is “a threat to our democracy”, the problem cannot be solved by changing the content moderation policies on commercial social media platforms. The problem can only be solved by fixing those institutions of public relevance where people’s speech acts matter for public policy.

The closest thing to such a Habermasian institution on the Internet today is perhaps the Request for Comments process on administrative regulations in the U.S. There, citizens can freely express their policy ideas and those ideas are, when the system is working, moderated and channeled into nuanced changes to policy.

This somewhat obscure and technocratic government function is overshadowed and sometimes overturned by electoral politics in the U.S., which are at this point anything but deliberative. For various reasons concerning the design of electoral and legislative institutions in the U.S., politics is only superficially discursive. It is in fact a power play, a competition over rents. Under such conditions, we would expect “misinformation” to thrive, because public opinion is mostly inconsequential. There is nothing, pragmatically, to incentivize and ground the hard work of deliberation.

It is perhaps interesting to imagine what kind of self-governing institution would deserve this kind of investment of deliberation.

References

Benthall, Sebastian. “Designing networked publics for communicative action.” Interface 1.1 (2015): 3.

Bruns, Axel. “It’s not the technology, stupid: How the ‘Echo Chamber’ and ‘Filter Bubble’ metaphors have failed us.” (2019).

Sunstein, Cass R. “Neither Hayek nor Habermas.” Public Choice 134.1-2 (2008): 87-95.

Contradictions in Freedom: the U.S. / China information ideology divide

Reflecting on H.R. McMaster’s How China Sees the World essay about the worldview of China’s government and how it is at odds with U.S. culture and interests, I am struck by how much of these tensions are about information ideology. By information ideology, I mean “information ethics”, but applied to legitimize state power.

I certainly don’t claim any expertise on the subject of China–I’ve never been there! But McMaster’s argument, as written, is revealing. McMaster is pointing out the ambiguity of China’s position: it is both ambitious and insecure. But his essay is just as revealing of the contradictions in U.S. information ideology as it is of the CCP’s political ambitions.

The distinctions McMaster draws between China and the U.S. are familiar. Rather than become “more like the West” as it modernizes, China is developing and building a different model. McMaster identifies several features of Chinese internal and foreign policy, which he claims is inspired by a historical period in which China was a major world power able to exact tribute from less powerful states.

  • Suppression of internal dissent–including Tibet and religious groups.
  • Creation of a surveillance apparatus.
  • Aligning the ideology taught in the universities with the state’s ideological interest.
  • An economic policy geared towards extracting “tribute”–which is another way of saying that they are trying to capture surplus. The economic policies include:
    • “Made in China 2025” — becoming a science and technology leader. McMaster criticizes the part of this policy which involves forced technology transfer for foreign firms trying to access the Chinese market.
    • The “Belt and Road Initiative”: lending money to other countries for infrastructure improvements, which then means client nations become debtors.
    • “Military-Civil Fusion” — All citizens and organizations are part of the state intelligence system. This means that Chinese companies and researchers, even when acquiring and researching at foreign companies or universities, are encouraged to feed technology back up to the state.

McMaster’s critique of China, then, starts with human rights abuses but settles on the problem of “cybertheft”–the transfer of technology to the Chinese state from U.S. funded research labs and companies.

This transfer is both militarily and economically significant. From the perspective of a self-interested U.S. policy, these criticisms are alarming. But the blending of the human rights moralizing with the economic complaint is revelatory of McMaster’s own information ideology. The writing blends the human rights interests of individuals and the economic interests of large corporations as if this were a seamless logical transition. In reality, this is not a coherent line of reasoning.

Chinese espionage is successful in part because the party is able to induce cooperation, wittingly or unwittingly, from individuals, companies, and political leaders. Companies in the United States and other free-market economies often do not report theft of their technology, because they are afraid of losing access to the Chinese market, harming relationships with customers, or prompting federal investigations.

Here, for example, the idea that Chinese espionage is subversively undermining the will of individuals is blended together with what we must presume is an explicit technology transfer requirement for foreign companies trying to sell to the Chinese market. The first is an Orwellian dystopia. The second is a form of overt trade policy. It is strange that McMaster doesn’t see a bright line of difference between these two ways of doing “espionage”.

The collapsing of American information ideology is even clearer in McMaster’s articulation of “Western liberal” strengths. Putting aside whether, as Goldsmith and Woods have recently argued, U.S. content moderation strategies are looking more like Chinese ones all the time, there is something dubious about McMaster’s appeal to the perhaps greatest of U.S. freedoms, the freedom of speech, given his preceding argument:

For one thing, those “Western liberal” qualities that the Chinese see as weaknesses are actually strengths. The free exchange of information and ideas is an extraordinary competitive advantage, a great engine of innovation and prosperity. (One reason Taiwan is seen as such a threat to the People’s Republic is because it provides a small-scale yet powerful example of a successful political and economic system that is free and open rather than autocratic and closed.) Freedom of the press and freedom of expression, combined with robust application of the rule of law, have exposed China’s predatory business tactics in country after country—and shown China to be an untrustworthy partner. Diversity and tolerance in free and open societies can be unruly, but they reflect our most basic human aspirations—and they make practical sense too. Many Chinese Americans who remained in the United States after the Tiananmen Square massacre were at the forefront of innovation in Silicon Valley.

It is ironic that, given that McMaster’s core criticism of China is its effectiveness at causing information and ideas to flow into its security apparatus for the sake of its prosperity, he chooses to highlight freedom of expression as the key to U.S. and liberal innovation. While I personally agree that “freedom of expression” is good for science and innovation, McMaster apparently doesn’t see how limiting technology transfer is itself a limitation on the free exchange of information.

McMaster uses the term “rule of law” here to mean primarily, it would seem, the enforcement of intellectual property rights. However, some of the cases he raises as problematic are those where a corporation trades access to IP in return for market access. This could be seen as a violation of IP. But it might be more productive to view it more objectively as a trade–perhaps a trade that in the long run is not in the interest of the U.S. security state, but one that many private companies willingly engaged in. Elsewhere, McMaster points to technology transfer via Chinese researchers from U.S.-funded university research labs. While this upsets the geopolitical balance of power, many think that it is actually how university research labs are supposed to work. Science is at its best with “freedom”, with public results, in part because it is exposure to public criticism by the international community of scientists that gives its results legitimacy.

Viewed from the perspective of open scientific cooperation, McMaster’s main complaint against China boils down to the idea that it is free-riding, in the economic sense, on U.S. investments in science and technology. This is irksome but also in a real sense how scientific progress is supposed to go. McMaster’s recommendations amount to economic and intellectual sanctioning of China: excluding its companies from the stock market, and punishing U.S. companies that knowingly aid in China’s human rights abuses. However well-motivated these ideas, they don’t resolve the core problem at the heart of these relations.

That problem is this: the U.S.’s international leadership has involved, in part, its enforcement of intellectual property rights. These intellectual property rights have allowed U.S. companies to extract rents and have prevented other countries from developing competitive militaries. U.S. technological supremacy has, among other things, made the U.S. an effective exporter of military technology. But this export trade only works if other countries cannot reverse engineer the technology. In some cases, they have been prevented from doing this by “rule of law”–U.S.-led international law–but that soft power is now fading.

So McMaster’s policy recommendations are an attempt to carve out a separate sphere of influence in which U.S. intellectual property titles are maintained. This boils down to the idea that in some places, U.S. telecom companies should continue to extract IP rents, instead of Chinese state-owned telecom.

McMaster argues for “strategic empathy”–seeing the world the way the “other” sees it. But a simpler approach might be viewing the world “strategically”–i.e., in terms of incentives and the balance of power in the world. A question facing the U.S. going forward is whether it can make being a tributary of the U.S. intellectual property regime (not to mention debt regime–discussing the history of the IMF is out of scope of this post) more compelling than being a tributary of the Chinese state. For that to work, it may need to get better clarity about its own ideological interests, and stop conflating its economic incentives with moralistic flappery.

Tech Law and Political Economy

It has been an intellectually exciting semester at NYU’s Information Law Institute and its regular, more open research meeting, the Privacy Research Group. More than ever in my experience, we’ve been developing a clarity about the political economy of technology together. I am especially grateful to my colleagues Aaron Shapiro, Salome Viljoen, and Jake Goldenfein for introducing me to a lot of very enlightening new literature. This blog post summarizes what I’ve recently been exposed to via these discussions.

  • Perhaps kicking off the recent shift in thinking about law and political economy is the long-time-coming publication of Julie Cohen’s book, Between Truth and Power. While many of the arguments have been available in article form for some time, the book gives these arguments more gravitas, and enabled Cohen to do a bit of a speaking tour in the NYC area some months ago. Having a heavy-hitter in the field deliver such authoritative and incisive analysis has been, in my opinion, empowering to my generation of scholars whose critical views have not enjoyed the same legitimacy. Exposure to this has sent my own work in a new direction recently.
  • In a complementary move inspired perhaps by the political climate around the Democratic primary, the ILI group has been getting acquainted with the Law and Political Economy (LPE) field/attitude/blog. Perhaps best described as a left wing, institutionalist legal realist school of thought, the position is articulated in the referenced article by Britton-Purdy et al. (2020), in this manifesto, and more broadly on this blog. The mastermind of the movement is apparently Amy Kapczynski, but there are many fellow travelers–some internet luminaries, some very approachable colleagues. The tent seems inclusive.
  • LPE is, of course, a response to and play on “Law and Economics”, the once-dominant field of legal scholarship that legitimized so much neoliberal policy-making. What is nice about LPE is that, rather than being a rehash of “critical” legal attitudes, it grounds itself in economic analysis, albeit in a more expansive form of economic understanding that includes the social structures that produce, for example, social group inequalities. This creates room for heterodox economic views by providing them a policy-oriented audience. Jake Goldenfein and I have a paper that we are excited to publish soon, “Data Science and the Decline of Liberal Law and Ethics”, which takes aim at the individualist assumptions of liberal regulatory regimes and their insufficiency in regulating platform companies. I don’t think we had LPE in mind as we wrote that article, but I believe it will be a fresh complementary view. Unfortunately, the conference where we planned to present it has been delayed by COVID.
  • Once the question of the real political economy of technology is raised, it opens up a deep theoretical can of worms that is, as far as I can tell, fractured across a variety of fields. One major source of confusion here is that Economics itself, as a field, doesn’t seem to have a stable conclusion about the role of technology in the economy. An insightful look into the history of Economics and its inability to correctly categorize technology–especially technology as a facet of capital–can be found in Nitzan (1998). Nitzan elucidates a distinction from Veblen (!) between industry and business: industry aims to produce; business aims to make money. And capitalism, argues Nitzan, winds up ultimately being about the capacity of absentee owners to claim sources of revenue. The distinction between these two aims explains why business so often restricts production. As we noted in our ILI discussion, this is immediately relevant to anything digital, because intellectual property is always a way of restricting production in order to make a source of revenue.
  • I take a somewhat more balanced view myself, seeing an economy with more than one kind of capital in it. I’m fairly Bourdieusian in this way. On this point, I’ve had recommended to me Sadowski’s (2019) article that explicitly draws the line from Marx to Bourdieu and connects it with the contemporary digital economy. This is on a new short list for me.

References

Benthall, S, and Goldenfein, J., forthcoming. Data Science and the Decline of Liberal Law and Ethics. Ethics of Data Science Conference 2020.

Britton-Purdy, J.S., Grewal, D.S., Kapczynski, A. and Rahman, K.S., 2020. Building a law-and-political-economy framework: Beyond the twentieth-century synthesis. Yale Law Journal, forthcoming.

Nitzan, J., 1998. Differential accumulation: towards a new political economy of capital. Review of International Political Economy, 5(2), pp.169-216.

Sadowski, J., 2019. When data is capital: Datafication, accumulation, and extraction. Big Data & Society, 6(1), p.2053951718820549.

Internet service providers are utilities

On Sunday, New York State is closing all non-essential brick-and-mortar businesses and ordering all workers who are able to work from home to do so. Zoom meetings from home are now the norm for people working for both the private sector and government.

One might reasonably want to know whether the internet service providers (ISPs) are operating normally during this period. I had occasion to call up Optimum yesterday and ask. I was told, very helpfully, “We’re doing business as usual because we are like a utility.”

It’s quite clear that the present humane and responsible approach to COVID-19 depends on broad and uninterrupted Internet access in homes. The government and businesses would cease to function without it. Zoom meetings are performing the role that simple audio telephony once did. And executive governments are recognizing this as they use their emergency powers.

There has been a strain of “technology policy” thought that some parts of “the tech sector” should be regulated as utilities. In 2015, the FCC reclassified broadband access as a utility as part of their Net Neutrality decision. In 2018, this position was reversed. This was broadly seen as a win for the telecom companies.

One plausible political consequence of COVID-19 is the reconsideration of the question of whether ISPs are utilities or not. They are.

A brief revisit of the Habermas/Luhmann debate

I’ve gotten into some arguments with friends recently about the philosophy of science. I’m also finding myself working these days, yet again, at a disciplinary problem. By which I mean, the primary difficulty of the research questions and methodologies I’m pursuing at the moment is that there is no discipline that, in its primary self-understanding, asks those questions or uses those methodologies.

This and the coronavirus emergency have got me thinking, “What ever happened to the Habermas/Luhmann debate?” It is a good time to consider this problem because it’s one that’s likely to minimize my interactions with other people, at a time when this is one’s civic duty.

I refer to Rasch (1991) for an account of it. Here is a good paragraph summarizing some of the substance of the debate.

It is perhaps in this way that Luhmann can best be distinguished from Habermas. The whole movement of Habermas’s thought tends to some final resting place, prescriptively in the form of consensus as the legitimate basis for social order, and methodologically in the form of a normative underlying simple structure which is said to dictate the proper shape of surface complexity. But for Luhmann, complexity does not register the limits of human knowledge as if those limits could be overcome or compensated for by the reconstruction of some universal rule-making process. Rather, complexity, defined as the paradoxical task of solving a solved problem that cannot be solved, or only provisionally solved, or only solved by creating new problems, is the necessary ingredient for human intellectual endeavors. Complexity always remains complex and serves as a self-replenishing reservoir of possibilities (1981, 203-4). Simply put, complexity is limited understanding. It is the missing information which makes it impossible to comprehend a system fully (1985, 50-51; 1990, 81), but the absence of that information is absolutely unavoidable and paradoxically essential for the further evolution of complexity.

Rasch, 1991

In other words, Habermas believes that it’s possible, in principle, to reach a consensus around social order that is self-legitimizing and has at its core a simple, even empty, observer’s stance. This is accomplished through rational communicative action. Luhmann, on the other hand, sees the fun-house of perspectivalist warped mirrors and no such fixed point or epistemological attractor state.

But there’s another side to this debate which is not discussed so much in the same context. Habermas, by positing a communicative rationality capable of legitimization, is able to identify the obstacles to it: the “steering media”, money and power (Habermas, 1987). Whereas Luhmann understands a “social system” to be constituted by the communication within it. A social system is defined as the sum total of its speech, writing, and so on.

This has political implications. Rasch concludes:

With that in mind, one final paradox needs to be mentioned. Although Habermas is the self-identified leftist and social critic, and although Habermas sees in Luhmann and in systems theory a form of functionalist conservatism, it may very well be to Luhmann that future radical theorists will have to turn. Social and political theorists who are socially and politically committed need not continue to take theoretical concern with complexity as a sign of apathy, resignation, or conformism. As Harlan Wilson notes, the “invocation of ‘complexity’ for the purpose of devaluing general political and social theory and of creating suspicion of all varieties of general political theory in contemporary political studies is to be resisted.” It is true that the increased consciousness of complexity brings along with it the realization that “total comprehension” and “absence of distortion” are unattainable, but, Wilson continues, “when that has been admitted, it remains that only general theoretical reflection, together with a sense of history, enables us to think through the meaning of our complex social world in a systematic way” (1975, 331). The only caveat is that such “thinking through” will have to be done on the level of complexity itself and will have to recognize that theories of social complexity are part of the social complexity they investigate. It is in this way that the ability to respond to social complexity in a complex manner will continue to evolve along with the social complexity that theory tries to understand.

Rasch, 1991

One reason that Habermas is able to make a left-wing critique, whereas Luhmann can correctly be accused of being a functionalist conservative, is that Habermas’s normative stance has an irrational materialist order (perhaps what is “right wing” today) as its counterpoint. Whereas Luhmann, in asserting that social systems exist only as functional stability, does not seem to have money, power, or ultimately the violence they depend on in his ontology. It is a conservative view not because his theory lacks normativity, but because his descriptive stance is, at the end of the day, incomplete. Luhmann has no way of reckoning with the ways infrastructural power (Mann, 2008) exerts a passive external force on social systems. In other words, social systems evolve, but in an environment created by the material consequences of prior social systems, which reveal themselves as distributions of capital. This is what it means to be in the Anthropocene.

During an infrastructural crisis, such as a global pandemic in which the violence of nature threatens objectified human labor and the material supply chains that depend on it, society–often, in times of “peace”, happy to defer to “cultural” experts whose responsibility is the maintenance of ideology–defers to a different kind of expert: the epidemiologists, the operations researchers, the financial analysts. These are the occupational “social scientists” who have no need of the defensiveness of the historian, the sociologist, the anthropologist, or the political scientist. They are deployed, sometimes in the public interest, to act on their operationally valid scientific consensus. And precisely because the systems that concern them are invisible to the naked eye (microbes, social structure, probabilities), the uncompromising, atheoretical empiricism that has come to be the proud last stand of the social sciences cannot suffice. Here theory–an accomplishment of rationality, its response to materialist power–must shine.

The question, as always, is not whether there can be progress based on a rational simplification, but to what extent an economy supports the institutions that create and sustain such a perspective, expertise, and enterprise.

References

Habermas, Jürgen. “The theory of communicative action, Volume 2: Lifeworld and system.” Polity, Cambridge (1987).

Mann, Michael. “Infrastructural power revisited.” Studies in comparative international development 43.3-4 (2008): 355.

Rasch, William. “Theories of complexity, complexities of theory: Habermas, Luhmann, and the study of social systems.” German Studies Review 14.1 (1991): 65-83.

A note towards formal modeling of informational capitalism

Cohen’s Between Truth and Power (2019) is enormously clarifying on all issues of the politics of AI, etc.

“The data refinery is only secondarily an apparatus for producing knowledge; it is principally an apparatus for producing wealth.”

– Julie Cohen, Between Truth and Power, 2019

Cohen lays out the logic of informational capitalism in comprehensive detail. Among her authoritatively argued points is that scholarly consideration of platforms, privacy, data science, etc. has focused on the scientific and technical accomplishments undergirding the new information economy, but that really its key institutions, the platform and the data refinery, are first and foremost legal and economic institutions. They exist as businesses; they are designed to “extract surplus”.

I am deeply sympathetic to this view. I’ve argued before that the ethical and political questions around AI are best looked at by considering computational institutions (1, 2). I think getting to the heart of the economic logic is the best way to understand the political and moral concerns raised by information capitalism. Many have argued that there is something institutionally amiss about informational capitalism (e.g. Strandburg, 2013); a recent CfP went so far as to say that the current market for data and AI is not “functional or sustainable.”

As far as I’m concerned, Cohen (2019) is the new gold standard for qualitative analysis of these issues. It is thorough. It is, as far as I can tell, correct. It is a dense and formidable work; I’m not through it yet. So while it may contain all the answers, I haven’t read them yet. This leaves me free to continue to think about how I would go about solving them.

My perspective is this: it will require social scientific progress to crack the right institutional design to settle informational capitalism in a satisfying way. Because computation is really at the heart of the activity of economic institutions, computation will need to be included within the social scientific models in question. But this is not something particularly new; rather, it’s implicitly already how things are done in many “hard” social science disciplines. Epstein (2006) draws the connections between classical game theoretic modeling and agent-based simulation, arguing that “The Computer is not the point”: rather, the point is that the models are defined in terms of mathematical equations, which are by foundational laws of computing amenable to being simulated or solved through computation. Hence, we have already seen a convergence of methods from “AI” into computational economics (Carroll, 2006) and sociology (Castelfranchi, 2001).
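To make the Epstein point concrete, here is a minimal, purely illustrative sketch of an agent-based adoption model in the spirit of threshold models; the function name and parameters are my own invention, not drawn from Epstein or Cohen. The model is just a set of equations over agent states, and the computer merely iterates them:

```python
import random

def simulate_adoption(n_agents=100, n_steps=50, network_effect=1.0,
                      base_utility=0.05, seed=0):
    """Agents adopt a platform when its utility (a base value plus a
    network effect proportional to the current adoption share) exceeds
    their private threshold. Returns the final adoption share."""
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n_agents)]
    adopted = [False] * n_agents
    for _ in range(n_steps):
        share = sum(adopted) / n_agents
        utility = base_utility + network_effect * share
        for i in range(n_agents):
            if not adopted[i] and utility > thresholds[i]:
                adopted[i] = True
    return sum(adopted) / n_agents
```

Comparing `network_effect=0.0` against `network_effect=1.0` with the same seed shows the familiar cascade: without the network effect only the few lowest-threshold agents adopt; with it, early adopters raise the utility for everyone else, and adoption can tip toward the whole population.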

This position is entirely consistent with Abebe et al.’s analysis of “roles for computing in social change” (2020). In that paper, the authors are concerned with “social problems of justice and equity”, loosely defined, which can potentially be addressed through “social change”. They defend the use of technical analysis and modeling as playing a positive role even according to the particular politics of the Fairness, Accountability, and Transparency research community. Abebe et al. address backlashes against uses of formalism, such as that of Selbst et al. (2019); this rebuttal was necessary given the disciplinary fraughtness of the tech policy discourse.

What I am proposing in this note is something ever so slightly different. First, I am aiming at a different political problematic than the “social problems of justice and equity”. I’m trying to address the economic problems raised by Cohen’s analysis, such as the dysfunctionality of the data market. Second, I’d like to distinguish between “computing” as the method of solving mathematical model equations and “computing” as an element of the object of study, the computational institution (or platform, or data refinery, etc.). Indeed, it is the wonder and power of computation that it is possible to model one computational process within another. This point may be confusing for lawyers and anthropologists, but it should be clear to computational social scientists when we are talking about one or the other, though our scientific language has not settled on a lexicon for this yet.

The next step for my own research here is to draw up a mathematical description of informational capitalism, or the stylized facts about it implied by Cohen’s arguments. This is made paradoxically both easier and more difficult by the fact that much of this work has already been done. A simple search of the literature on “search costs”, “network effects”, “switching costs”, and so on brings up a lot of fine work. The economists have not been asleep all this time. But then why has it taken so long for the policy critiques of informational capitalism, including those around algorithmic opacity, to emerge?
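As a trivially small example of the kind of stylized fact such a mathematical description would encode, consider the textbook switching-cost logic (the function names here are mine, for illustration only): a locked-in consumer switches only when the entrant's price advantage exceeds the switching cost, so the incumbent can sustain a markup exactly equal to that cost.

```python
def consumer_switches(incumbent_price, entrant_price, switching_cost):
    """A locked-in consumer switches platforms only if the entrant's
    price advantage exceeds the cost of switching."""
    return incumbent_price - entrant_price > switching_cost

def max_lock_in_price(entrant_price, switching_cost):
    """The highest price the incumbent can charge without losing the
    consumer: the entrant's price plus the switching cost. The markup
    over the entrant's price is the rent that lock-in sustains."""
    return entrant_price + switching_cost
```

Even this toy relation makes the rent-extraction point quantitative: raising switching costs raises the sustainable markup one-for-one, which is one reason platforms invest in making exit costly.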

I have two conflicting hypotheses, one quite gloomy and the other exciting. The gloomy view is that I’m simply in the wrong conversation. The correct conversation, the one that has adequately captured the nuances of the data economy already, is elsewhere–maybe in an economics conference in Zurich or something, and this discursive field of lawyers and computer scientists and ethicists is just effectively twiddling its thumbs and working on poorly framed problems because it hasn’t and can’t catch up with the other discourse.

The exciting view is that the problem of synthesizing the fragments of a solution from the various economics literatures with the most insightful legal analyses is an unsolved problem ripe for attention.

Edit: It took me a few days, but I’ve found the correct conversation. It is Ross Anderson’s Workshop on Economics and Information Security. That makes perfect sense: Ross Anderson is a brilliant thinker in that arena. Naturally, as one finds, all the major results in this space are 10-20 years old. Quite probably, if I had found this one web page a couple years ago, my dissertation would have been written much differently–not so amateurishly.

It is supremely ironic to me how, in an economy characterized by a reduction in search costs, the search for the answers I’ve been looking for in information economics has been so costly for me.

References

Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., & Robinson, D. G. (2020, January). Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 252-260).

Castelfranchi, C. (2001). The theory of social functions: challenges for computational social science and multi-agent learning. Cognitive Systems Research, 2(1), 5-38.

Carroll, C. D. (2006). The method of endogenous gridpoints for solving dynamic stochastic optimization problems. Economics Letters, 91(3), 312-320.

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, USA.

Epstein, Joshua M. Generative social science: Studies in agent-based computational modeling. Princeton University Press, 2006.

Fraser, N. (2017). The end of progressive neoliberalism. Dissent2(1), 2017.

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59-68).

Strandburg, K. J. (2013). Free fall: The online market’s consumer preference disconnect. U. Chi. Legal F., 95.