Digifesto

Category: politics

Contradictions in Freedom: the U.S. / China information ideology divide

Reflecting on H.R. McMaster’s How China Sees the World essay about the worldview of China’s government and how it is at odds with U.S. culture and interests, I am struck by how many of these tensions are about information ideology. By information ideology, I mean “information ethics”, but applied to legitimize state power.

I certainly don’t claim any expertise on the subject of China–I’ve never been there! But McMaster’s argument, as written, is revealing. McMaster points out the ambiguity of China’s position: it is both ambitious and insecure. But his essay is just as revealing of the contradictions in U.S. information ideology as it is of the CCP’s political ambitions.

The distinctions McMaster draws between China and the U.S. are familiar. Rather than become “more like the West” as it modernizes, China is developing and building a different model. McMaster identifies several features of Chinese internal and foreign policy, which he claims are inspired by a historical period in which China was a major world power able to exact tribute from less powerful states.

  • Suppression of internal dissent–including Tibet and religious groups.
  • Creation of a surveillance apparatus.
  • Aligning the ideology taught in the universities with the state’s ideological interest.
  • An economic policy geared towards extracting “tribute”–which is another way of saying that they are trying to capture surplus. The economic policies include:
    • “Made in China 2025” — becoming a science and technology leader. McMaster criticizes the part of this policy which involves forced technology transfer for foreign firms trying to access the Chinese market.
    • The “Belt and Road Initiative”: lending money to other countries for infrastructure improvements, which then makes client nations debtors.
    • “Military-Civil Fusion” — All citizens and organizations are part of the state intelligence system. This means that Chinese companies and researchers, even when acquiring and researching at foreign companies or universities, are encouraged to feed technology back up to the state.

McMaster’s critique of China, then, starts with human rights abuses but settles on the problem of “cybertheft”–the transfer of technology to the Chinese state from U.S. funded research labs and companies.

This transfer is both militarily and economically significant. From the perspective of a self-interested U.S. policy, these criticisms are alarming. But the blending of the human rights moralizing with the economic complaint is revelatory of McMaster’s own information ideology. The writing blends the human rights interests of individuals and the economic interests of large corporations as if this were a seamless logical transition. In reality, this is not a coherent line of reasoning.

Chinese espionage is successful in part because the party is able to induce cooperation, wittingly or unwittingly, from individuals, companies, and political leaders. Companies in the United States and other free-market economies often do not report theft of their technology, because they are afraid of losing access to the Chinese market, harming relationships with customers, or prompting federal investigations.

Here, for example, the idea that Chinese espionage is subversively undermining the will of individuals is blended together with what we must presume is an explicit technology transfer requirement for foreign companies trying to sell to the Chinese market. The first is an Orwellian dystopia. The second is a form of overt trade policy. It is strange that McMaster doesn’t see a bright line of difference between these two ways of doing “espionage”.

The conflation at the heart of American information ideology is even clearer in McMaster’s articulation of “Western liberal” strengths. Putting aside whether, as Goldsmith and Woods have recently argued, U.S. content moderation strategies are looking more like Chinese ones all the time, there is something dubious about McMaster’s appeal to the perhaps greatest of U.S. freedoms, the freedom of speech, given his preceding argument:

For one thing, those “Western liberal” qualities that the Chinese see as weaknesses are actually strengths. The free exchange of information and ideas is an extraordinary competitive advantage, a great engine of innovation and prosperity. (One reason Taiwan is seen as such a threat to the People’s Republic is because it provides a small-scale yet powerful example of a successful political and economic system that is free and open rather than autocratic and closed.) Freedom of the press and freedom of expression, combined with robust application of the rule of law, have exposed China’s predatory business tactics in country after country—and shown China to be an untrustworthy partner. Diversity and tolerance in free and open societies can be unruly, but they reflect our most basic human aspirations—and they make practical sense too. Many Chinese Americans who remained in the United States after the Tiananmen Square massacre were at the forefront of innovation in Silicon Valley.

It is ironic that, given McMaster’s core criticism of China is its effectiveness at causing information and ideas to flow into its security apparatus for the sake of its prosperity, he chooses to highlight freedom of expression as the key to U.S. and liberal innovation. While I personally agree that “freedom of expression” is good for science and innovation, McMaster apparently doesn’t see how limiting technology transfer is itself a limitation on the free exchange of information.

McMaster uses the term “rule of law” here to mean primarily, it would seem, the enforcement of intellectual property rights. However, some of the cases he raises as problematic are those where a corporation trades access to IP in return for market access. This could be seen as a violation of IP. But it might be more productive to view it more objectively as a trade–perhaps a trade that in the long run is not in the interest of the U.S. security state, but one that many private companies have willingly engaged in. Elsewhere, McMaster points to the technology transfer via Chinese researchers from U.S. funded university research labs. While this upsets the geopolitical balance of power, there are many who think it is actually how university research labs are supposed to work. Science is at its best with “freedom”, with public results, in part because it is the exposure to public criticism by the international community of scientists that gives its results legitimacy.

Viewed from the perspective of open scientific cooperation, McMaster’s main complaint against China boils down to the idea that it is free-riding, in the economic sense, on U.S. investments in science and technology. This is irksome but also in a real sense how scientific progress is supposed to go. McMaster’s recommendations amount to economic and intellectual sanctioning of China: excluding its companies from the stock market, and punishing U.S. companies that knowingly aid in China’s human rights abuses. However well-motivated these ideas, they don’t resolve the core problem at the heart of these relations.

That problem is this: the U.S.’s international leadership has involved, in part, its enforcement of intellectual property rights. These intellectual property rights have allowed U.S. companies to extract rents and have prevented other countries from developing competitive militaries. U.S. technological supremacy has, among other things, made the U.S. an effective exporter of military technology. But this export trade only works if other countries cannot reverse engineer the technology. In some cases, they have been prevented from doing this by “rule of law”–U.S.-led international law–but now that soft power is fading.

So McMaster’s policy recommendations are an attempt to carve out a separate sphere of influence in which U.S. intellectual property titles are maintained. This boils down to the idea that in some places, U.S. telecom companies should continue to extract IP rents, instead of Chinese state-owned telecom.

McMaster argues for “strategic empathy”–seeing the world the way the “other” sees it. But a simpler approach might be viewing the world “strategically”–i.e., in terms of incentives and the balance of power in the world. A question facing the U.S. going forward is whether it can make being a tributary of the U.S. intellectual property regime (not to mention debt regime–discussing the history of the IMF is out of scope of this post) more compelling than being a tributary of the Chinese state. For that to work, it may need to gain greater clarity about its own ideological interests, and stop conflating its economic incentives with moralistic flappery.

Tech Law and Political Economy

It has been an intellectually exciting semester at NYU’s Information Law Institute and its regular, more open research meeting, the Privacy Research Group. More than ever in my experience, we’ve been developing a clarity about the political economy of technology together. I am especially grateful to my colleagues Aaron Shapiro, Salome Viljoen, and Jake Goldenfein for introducing me to a lot of very enlightening new literature. This blog post summarizes what I’ve recently been exposed to via these discussions.

  • Perhaps kicking off the recent shift in thinking about law and political economy is the long-time-coming publication of Julie Cohen’s book, Between Truth and Power. While many of the arguments have been available in article form for some time, the book gives these arguments more gravitas, and enabled Cohen to do a bit of a speaking tour in the NYC area some months ago. Having a heavy-hitter in the field deliver such authoritative and incisive analysis has been, in my opinion, empowering to my generation of scholars whose critical views have not enjoyed the same legitimacy. Exposure to this has sent my own work in a new direction recently.
  • In a complementary move inspired perhaps by the political climate around the Democratic primary, the ILI group has been getting acquainted with the Law and Political Economy (LPE) field/attitude/blog. Perhaps best described as a left wing, institutionalist legal realist school of thought, the position is articulated in the referenced article by Britton-Purdy et al. (2020), in this manifesto, and more broadly on this blog. The mastermind of the movement is apparently Amy Kapczynski, but there are many fellow travelers–some internet luminaries, some very approachable colleagues. The tent seems inclusive.
  • LPE is, of course, a response to and play on “Law and Economics”, the once-dominant field of legal scholarship that legitimized so much neoliberal policy-making. What is nice about LPE is that, rather than being a rehash of “critical” legal attitudes, it grounds itself in economic analysis, albeit in a more expansive form of economic understanding that includes social structures that affect, for example, social group inequalities. This creates room for heterodox economic views by providing them a policy-oriented audience. Jake Goldenfein and I have a paper that we are excited to publish soon, “Data Science and the Decline of Liberal Law and Ethics”, which takes aim at the individualist assumptions of liberal regulatory regimes and their insufficiency in regulating platform companies. I don’t think we had LPE in mind as we wrote that article, but I believe it will be a fresh complementary view. Unfortunately, the conference where we planned to present it has been delayed by COVID.
  • Once the question of the real political economy of technology is raised, it opens up a deep theoretical can of worms that is, as far as I can tell, fractured across a variety of fields. One major source of confusion here is that Economics itself, as a field, doesn’t seem to have a stable conclusion about the role of technology in the economy. An insightful look into the history of Economics and its inability to correctly categorize technology–especially technology as a facet of capital–can be found in Nitzan (1998). Nitzan elucidates a distinction from Veblen (!) between industry and business: industry aims to produce; business aims to make money. And capitalism, argues Nitzan, winds up ultimately being about the capacity of absentee owners to claim sources of revenue. The distinction between these two aims explains why business so often restricts production. As we noted in our ILI discussion, this is immediately relevant to anything digital, because intellectual property is always a way of restricting production in order to make a source of revenue.
  • I take a somewhat more balanced view myself, seeing an economy with more than one kind of capital in it. I’m fairly Bourdieusian in this way. On this point, I’ve had recommended to me Sadowski’s (2019) article that explicitly draws the line from Marx to Bourdieu and connects it with the contemporary digital economy. This is on a new short list for me.

References

Benthall, S., and Goldenfein, J., forthcoming. Data Science and the Decline of Liberal Law and Ethics. Ethics of Data Science Conference 2020.

Britton-Purdy, J.S., Grewal, D.S., Kapczynski, A. and Rahman, K.S., 2020. Building a law-and-political-economy framework: Beyond the twentieth-century synthesis. Yale Law Journal, forthcoming.

Nitzan, J., 1998. Differential accumulation: towards a new political economy of capital. Review of International Political Economy, 5(2), pp. 169-216.

Sadowski, J., 2019. When data is capital: Datafication, accumulation, and extraction. Big Data & Society, 6(1), p. 2053951718820549.

Big tech surveillance and human rights

I’ve been alarmed by two articles to cross my radar today.

  • Bloomberg Law has published a roundup of the contributions Google and Facebook have made to tech policy advocacy groups. Long story short: they give a lot of money, and while these groups say they are not influenced by the donations, they tend to favor privacy policies that do not interfere with the business models of these Big Tech companies.
  • Amnesty International has put out a report arguing that the business models of Google and Facebook are “an unprecedented danger to human rights”.

Surveillance Giants lays out how the surveillance-based business model of Facebook and Google is inherently incompatible with the right to privacy and poses a systemic threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination.

Amnesty International

Until today, I never had a reason to question the judgment of Amnesty International. I have taken seriously their perspective as an independent watchdog group looking out for human rights. Could it be that Google and Facebook have, all this time, been violating human rights left and right? Have I been a victim of human rights abuses from the social media sites I’ve used since college?

This is a troubling thought, especially as an academic researcher who has invested a great deal of time studying technology policy. When I was in graduate school, the most lauded technology policy think tanks, those considered most prestigious and genuine, such as the Center for Democracy and Technology (CDT), were precisely those listed by the Bloomberg Law article as having in essence supported the business models of Google and Facebook all along. Now I’m in moral doubt. Amnesty International has accused Google of human rights violations for the sake of profit, with CDT (for example) as an ideological mouthpiece.

Elsewhere in my academic work it’s come to light that an increasingly popular, arguably increasingly consensus, view of technology policy directly contradicts the business model and incentives of companies like Google and Facebook. The other day colleagues and I did a close read of the New York Privacy Act (NYPA), which is now under consideration. New York State’s answer to the CCPA is notable in that it foregrounds Jack Balkin’s notion of an information fiduciary. According to the current draft, data controllers (it uses this EU-inspired language) would have a fiduciary duty to consumers, who are natural persons (but not independent contractors, such as Uber drivers) whose data is being collected. This bill, in its current form, requires that the data controller put its care of and responsibility to the consumer over and above its fiduciary duty to its shareholders. Since Google and Facebook are (at least) two-sided markets, with consumers making up only one side, this (if taken seriously) has major implications for how these Big Tech companies operate with respect to New York residents. Arguably, it would require these companies to put the interests of the consumers that are their users ahead of the interests of their real customers, the advertisers–who pay the revenue that goes to shareholders.

If all data controllers were information fiduciaries, that would almost certainly settle the human rights issues raised by Amnesty International. But how likely is this strong language to survive the legislative process in New York?

There are two questions on my mind after considering all this. The first is what the limits of Silicon Valley self-regulation are. I’m reminded of an article by Mulligan and Griffin about Google’s search engine results. For a time, when a user queried “Did the holocaust happen?”, the first search results would deny the holocaust. This prompted the Mulligan and Griffin article about what principles could be used to guide search engine behavior besides the ones used to design the search engine initially. Their conclusion is that human rights, as recognized by international experts, could provide those principles.

The essay concludes by offering a way forward grounded in developments in business and human rights. The emerging soft law requirement that businesses respect and remedy human rights violations entangled in their business operations provides a normative basis for rescripting search. The final section of this essay argues that the “right to truth,” increasingly recognized in human rights law as both an individual and collective right in the wake of human rights atrocities, is directly affected by Google and other search engine providers’ search script. Returning accurate information about human rights atrocities— specifically, historical facts established by a court of law or a truth commission established to document and remedy serious and systematic human rights violations—in response to queries about those human rights atrocities would make good on search engine providers’ obligations to respect human rights but keep adjudications of truth with politically legitimate expert decision makers. At the same time, the right to freedom of expression and access to information provides a basis for rejecting many other demands to deviate from the script of search. Thus, the business and human rights framework provides a moral and legal basis for rescripting search and for cabining that rescription.

Mulligan and Griffin, 2018

Google now returns different results when asked “Did the holocaust happen?”. The first hit is the Wikipedia page for “Holocaust denial”, which states clearly that the views of Holocaust deniers are false. The moral case on this issue has been won.

Is it realistic to think that the moral case will be won when the moral case directly contradicts the core business model of these companies? That is perhaps akin to believing that medical insurance companies in the U.S. will cave to moral pressure and change the health care system in recognition of the human right to health.

These are the extreme views available at the moment:

  • Privacy is a human right, and our rights are being trod on by Google and Facebook. The ideology that has enabled this has been propagated by non-profit advocacy groups and educational institutions funded by those companies. The human rights of consumers suffer under unchecked corporate control.
  • Privacy, as imagined by Amnesty International, is not a human right. They have overstated their moral case. Google and Facebook are intelligent consumer services that operate unproblematically in a broad commercial marketplace for web services. There’s nothing to see here, or worry about.

I’m inclined towards the latter view, if only because the “business model as a human rights violation” angle seems to ignore how services like Google and Facebook add value for users. They do this by lowering search costs, which requires personalized search and data collection. There seem to be some necessary trade-offs between lowering search costs broadly–especially search costs when what’s being searched for is people–and autonomy. But unless these complex trade-offs are untangled, the normative case will be unclear and business will simply proceed as usual.

References

Mulligan, D. K., & Griffin, D. (2018). Rescripting Search to Respect the Right to Truth.

Open Source Software (OSS) and regulation by algorithms

It has long been argued that technology, especially built infrastructure, has political implications (Winner, 1980). With the rise of the Internet as the dominating form of technological infrastructure, Lessig (1999), among others, argued that software code is a regulating force parallel to the law. By extension of this argument, we would expect open source software to be a regulating force in society.

This is not the case. There is a lot of open source software, and much of it is very important. But there’s no evidence to say that open source software, in and of itself, regulates society, except in the narrow sense that the communities that build and maintain it are necessarily constrained by its technical properties.

On the other hand, there are countless platforms and institutions that deploy open source software as part of their activity, which does have a regulating force on society. The Big Tech companies that are so powerful that they seem to rival some states are largely built on an “open core” of software. Likewise for smaller organizations. OSS is simply part of the contemporary software production process, and it is ubiquitous.

Most widely used programming languages are open source. Perhaps a good analogy for OSS is that it is a collection of languages, and literatures in those languages. These languages and much of their literature are effectively in the public domain. We might say the same thing about English or Chinese or German or Hindi.

Law, as we know it in the modern state, is a particular expression of language with purposeful meaning. It represents, at its best, a kind of institutional agreement that constrains behavior based on its repetition and appeals to its internal logic. The Rule of Law, as we know it, depends on the social reproduction of this linguistic community. Law schools are the main means of socializing new lawyers, who are then credentialed to participate in and maintain the system which regulates society. Lawyers are typically good with words, and their practice is in a sense constrained by their language, but only in the widest of Sapir-Whorf senses. Law is constrained more by the question of which language is institutionally recognized; indeed, those words and phrases that have been institutionally ratified are the law.

Let’s consider again the generative question of whether law could be written in software code. I will leave aside for a moment whether or not this would be desirable. I will entertain the idea in part because I believe that it is inevitable, because of how the algorithm is the form of modern rationality (Totaro and Ninno, 2014) and the evolutionary power of rationality.

A law written in software would need to be written in a programming language and this would all but entail that it is written on an “open core” of software. Concretely: one might write laws in Python.
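To make this concrete with a purely hypothetical sketch: here is what a fragment of a “software statute” might look like in Python. The brackets, deduction, and credit below are invented for illustration, not drawn from any real tax code.

```python
from dataclasses import dataclass

@dataclass
class Taxpayer:
    income: float      # annual income in dollars
    dependents: int

# Hypothetical "software statute": all figures are invented for illustration.
STANDARD_DEDUCTION = 12_000
DEPENDENT_CREDIT = 2_000
BRACKETS = [(0, 0.10), (40_000, 0.22), (160_000, 0.35)]  # (lower bound, marginal rate)

def tax_owed(t: Taxpayer) -> float:
    """Apply the encoded rule: deduct, tax each bracket's slice, then credit."""
    taxable = max(t.income - STANDARD_DEDUCTION, 0)
    upper_bounds = [b[0] for b in BRACKETS[1:]] + [float("inf")]
    owed = 0.0
    for (lower, rate), upper in zip(BRACKETS, upper_bounds):
        owed += rate * max(min(taxable, upper) - lower, 0)
    return max(owed - DEPENDENT_CREDIT * t.dependents, 0)

print(tax_owed(Taxpayer(income=50_000, dependents=1)))  # 1800.0
```

The interesting property of such a law is that its application is mechanical: the same inputs always produce the same ruling, and disputes shift from the interpretation of words to disputes about inputs and about the code itself.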

The specific code in the software law might or might not be open. There might one day be a secretive authoritarian state with software laws that are not transparent or widely known. Nothing rules that out.

We could imagine a more democratic outcome as well. It would be natural, in a more liberal kind of state, for the software laws to be open on principle. The definitions here become a bit tricky: the designation of “open source software” is one within the schema of copyright and licensing. Could copyright laws and licenses themselves be written in software? In other words, could the ‘openness’ of the software laws be guaranteed by their own form? This is an interesting puzzle for computer scientists and logicians.

For the sake of argument, suppose that something like this is accomplished. Perhaps it is accomplished merely by tradition: the institution that ratifies software laws publishes these on purpose, in order to facilitate healthy democratic debate about the law.

Even with all this in place, we still don’t have regulation. We have now discussed software legislation, but not software execution and enforcement. Software is only as powerful as the expression of a language. A deployed system, running that software, is an active force in the world. Such a system implicates a great many things beyond the software itself. It requires computers and networking infrastructure. It requires databases full of data specific to the applications for which it is run.

The software dictates the internal logic by which a system operates. But that logic is only meaningful when coupled with an external societal situation. The membrane between the technical system and the society in which it participates is of fundamental importance to understanding the possibility of technical regulation, just as the membrane between the Rule of Law and society–which we might say includes elections and the courts in the U.S.–is of fundamental importance to understanding the possibility of linguistic regulation.

References

Lessig, L. (1999). Code is law. The Industry Standard, 18.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Totaro, P., & Ninno, D. (2014). The concept of algorithm as an interpretative key of modern rationality. Theory, Culture & Society, 31(4), 29-49.

Winner, L. (1980). Do artifacts have politics?. Daedalus, 121-136.

The diverging philosophical roots of U.S. and E.U. privacy regimes

For those in the privacy scholarship community, there is an awkward truth: European data protection law is going in a different direction from U.S. Federal privacy law. A thorough realpolitical analysis of how the current U.S. regime regarding personal data has been constructed over time to advantage large technology companies can be found in Cohen’s Between Truth and Power (2019). There is, to be sure, a corresponding story to be told about EU data protection law.

Adjacent, somehow, to the operations of political power are the normative arguments leveraged both in the U.S. and in Europe for their respective regimes. Legal scholarship, however remote from actual policy change, remains a form of moral inquiry. It is possible, still, that through the professional training of lawyers and policy-makers, some form of ethical imperative can take root. Democratic interventions into the operations of power, while unlikely, are still in principle possible: but only if education stays true to principle and does not succumb to mere ideology.

This is not easy for educational institutions to accomplish. Higher education certainly is vulnerable to politics. A stark example of this was the purging of Marxist intellectuals from American academic institutions under McCarthyism. Intellectual diversity in the United States has suffered ever since. However, this was only possible because Marxism as a philosophical movement is extraneous to the legal structure of the United States. It was never embedded at a legal level in U.S. institutions.

There is a simple historical reason for this. The U.S. legal system was founded under a different set of philosophical principles; that philosophical lineage still impacts us today. The Founding Fathers were primarily influenced by John Locke. Locke rose to prominence in Britain when the Whigs, a new bourgeois class of Parliamentarian merchant leaders, rose to power, contesting the earlier monarchy. Locke’s political contributions were a treatise pointing out the absurdity of the Divine Right of Kings, the prevailing political ideology of the time, and a second treatise arguing for a natural right to property based on the appropriation of nature. This latter political philosophy was very well aligned with Britain’s new national project of colonialist expansion. With the founding of the United States, it was enshrined into the Constitution. The liberal system of rights that we enjoy in the U.S. is founded in the Lockean tradition.

Intellectual progress in Europe did not halt with Locke. Locke’s ideas were taken up by David Hume, who introduced arguments so agitating that they famously woke Immanuel Kant, in Germany, from his “dogmatic slumber”, leading him to develop a new, highly systematic account of morality and epistemology. Among the innovations in this work was the idea that human freedom is grounded in the dignity of being an autonomous person. The source of dignity is not based in a natural process such as the tilling of land. It is rather based on transcendental facts about what it means to be human. The key to morality is treating people as ends, not means; in other words, not using people as tools to other aims, but as aims in themselves.

If this sounds overly lofty to an American audience, it’s because this philosophical tradition has never taken hold in American education. In both the United Kingdom and the United States, Kantian philosophy has always been outside the mainstream. The tradition of Locke, through Hume, has continued on in what philosophers call “analytic philosophy”. This philosophy has taken on the empiricist view that the only source of knowledge is individual experience. It has transformed over centuries but continues to orbit around the individual and their rights, grounded in pragmatic considerations, and learning normative rules using the case-by-case approach of Common Law.

From Kant, a different “continental philosophy” tradition produced Hegel, who produced Marx. We can trace from Kant’s original arguments about how morality is based on the transcendental dignity of the individual to the moralistic critique that Marx made against capitalism. Capitalism, Marx argued, impugns the dignity of labor because it treats it like a means, not an end. No such argument could take root in a Lockean system, because Lockean ethics has no such prescription against treating others instrumentally.

Germany lost its way at the start of the 20th century. But the post-war regime, funded by the Marshall plan, directed by U.S. constitutional scholars as well as repatriating German intellectuals, had the opportunity to rewrite their system of governance. They did so along Kantian lines: with statutory law, reflecting a priori rational inquiry, instead of empiricist Common Law. They were able to enshrine into their system the Kantian basis of ethics, with its focus on autonomy.

Many of the intellectuals influencing the creation of the new German state were “Marxist” in the loose sense that they were educated in the German continental intellectual tradition which, at that time, included Marx as one of its key figures. By the mid-20th century they had naturally surpassed this ideological view. However, as a consequence, the McCarthyist attack on Marxism had the effect of also purging some of the philosophical connection between German and U.S. legal education. Kantian notions of autonomy are still quite foreign to American jurisprudence. Legal arguments in the United States draw instead on a vast collection of other tools based on a much older and more piecemeal way of establishing rights. But are any of these tools up to the task of protecting human dignity?

The EU is very much influenced by Germany and the German legal system. The EU has the Kantian autonomy ethic at the heart of its conception of human rights. This philosophical commitment has recently expressed itself in the EU’s assertion of data protection law through the GDPR, whose transnational enforcement clauses have brought this centuries-old philosophical fight into contemporary debate in jurisdictions that predate the neo-Kantian legal innovations of Continental states.

The puzzle facing American legal scholars is this: while industry advocates and representatives tend to disagree with the strength of the GDPR, arguing that it is unworkable and/or based on poorly defined principles, the data protections that it offers seem so far to be compelling to users, and the shifting expectations around privacy it has in part induced are having effects on democratic outcomes (such as the CCPA). American legal scholars now have to try to make sense of the GDPR’s rules and find a normative basis for them. How can these expansive ideas of data protection, which some have had the audacity to argue constitute a new right (Hildebrandt, 2015), be grafted onto the Common Law, empiricist legal system in a way that gives them the legitimacy of being an authentically American project? Is there a way to explain data protection law that does not require the transcendental philosophical apparatus which, if adopted, would force the American mind to reconsider in a fundamental way the relationship between individuals and the collective, labor and capital, and other cornerstones of American ideology?

There may or may not be. Time will tell. My own view is that the corporate powers, which flourished under the Lockean judicial system because of the weaknesses in that philosophical model of the individual and her rights, will instinctively fight what is in fact a threatening conception of the person as autonomous by virtue of their transcendental similarity with other people. American corporate power will not bother to make a philosophical case at all; it will operate in the domain of realpolitik so well documented by Cohen. Even if this is so, it is notable that so much intellectual and economic energy is now being exerted in the friction around so powerful an idea.

References

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, USA.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Reading O’Neil’s Weapons of Math Destruction

I probably should have already read Cathy O’Neil’s Weapons of Math Destruction. It was a blockbuster of the tech/algorithmic ethics discussion. It’s written by an accomplished mathematician, which I admire. I’ve also now seen O’Neil perform bluegrass music twice in New York City and think her band is great. At last I’ve found a copy and have started to dig in.

On the other hand, as is probably clear from other blog posts, I have a hard time swallowing a lot of the gloomy political work that puts the role of algorithms in society in such a negative light. I encounter it very frequently, and every time I feel that some misunderstanding must have happened; something seems off.

It’s very clear that O’Neil can’t be accused of mathophobia or of not understanding the complexity of the algorithms at play, which is an easy way to throw doubt on the arguments of some technology critics. Yet perhaps because it’s a popular book and not an academic work of Science and Technology Studies, I haven’t seen its arguments parsed through and analyzed in much depth.

This is a start. These are my notes on the introduction.

O’Neil describes the turning point in her career where she soured on math. After being an academic mathematician for some time, O’Neil went to work as a quantitative analyst for D.E. Shaw. She saw it as an opportunity to work in a global laboratory. But then the 2008 financial crisis made her see things differently.

The crash made it all too clear that mathematics, once my refuge, was not only deeply entangled in the world’s problems but also fueling many of them. The housing crisis, the collapse of major financial institutions, the rise of unemployment–all had been aided and abetted by mathematicians wielding magic formulas. What’s more, thanks to the extraordinary powers that I loved so much, math was able to combine with technology to multiply the chaos and misfortune, adding efficiency and scale to systems I now recognized as flawed.

O’Neil, Weapons of Math Destruction, p.2

As an independent reference on the causes of the 2008 financial crisis, which of course has been a hotly debated and disputed topic, I point to Sassen’s 2017 “Predatory Formations” article. Indeed, the systems that developed the sub-prime mortgage market were complex, opaque, and hard to regulate. Something went seriously wrong there.

But was it mathematics that was the problem? This is where I get hung up. I don’t understand the mindset that would attribute a crisis in the financial system to the use of abstract, logical, rigorous thinking. Consider the fact that there would not have been a financial crisis if there had not been a functional financial services system in the first place. Getting a mortgage and paying it off, and the systems that allow this to happen, all require mathematics to function. When these systems operate normally, they are taken for granted. When they suffer a crisis, when the system fails, the mathematics takes the blame. But a system can’t suffer a crisis if it didn’t start working rather well in the first place–otherwise, nobody would depend on it. Meanwhile, the regulatory reaction to the 2008 financial crisis required, of course, more mathematicians working to prevent the same thing from happening again.

So in this case (and I believe others) the question can’t be whether mathematics, but rather which mathematics. It is so sad to me that these two questions get conflated.

O’Neil goes on to describe a case where an algorithm results in a teacher losing her job for not adding enough value to her students one year. An analysis makes a good case that the cause of her students’ scores not going up is that in the previous year, the students’ scores were inflated by teachers cheating the system. This argument was not considered conclusive enough to change the administrative decision.

Do you see the paradox? An algorithm processes a slew of statistics and comes up with a probability that a certain person might be a bad hire, a risky borrower, a terrorist, or a miserable teacher. That probability is distilled into a score, which can turn someone’s life upside down. And yet when the person fights back, “suggestive” countervailing evidence simply won’t cut it. The case must be ironclad. The human victims of WMDs, we’ll see time and again, are held to a far higher standard of evidence than the algorithms themselves.

O’Neil, WMD, p.10

Now this is a fascinating point, and one that I don’t think has been taken up enough in the critical algorithms literature. It resonates with a point that came up earlier: traditional collective human decision-making is often driven by agreement on narratives, whereas automated decisions can be a qualitatively different kind of collective action because they can act on probabilistic judgments.

I have to wonder what O’Neil would argue the solution to this problem is. From her rhetoric, it seems like her recommendation must be to prevent automated decisions from making probabilistic judgments. In other words, one could raise the evidentiary standard for algorithms so that it was equal to the standards that people use with each other.

That’s an interesting proposal. I’m not sure what the effects of it would be. I expect that the result would be lower expected values of whatever target was being optimized for, since the system would not be able to “take bets” below a certain level of confidence. One wonders if this would be a more or less arbitrary system.
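As a toy illustration of the trade-off (entirely my own construction, not O’Neil’s): simulate a decision system that only “takes bets” when its noisy estimate clears a confidence threshold, and watch the total realized value fall as the threshold rises.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each candidate decision has a true latent payoff;
# the system only observes a noisy estimate of it.
n = 100_000
payoff = rng.normal(0, 1, n)                 # true payoff of acting
estimate = payoff + rng.normal(0, 1, n)      # noisy evidence the system sees

def realized_value(threshold):
    """Average payoff per candidate when the system acts only on
    estimates above the evidentiary threshold."""
    acted = estimate > threshold
    return payoff[acted].sum() / n

for t in [0.0, 0.5, 1.0, 2.0]:
    print(f"threshold={t:.1f}  value per candidate={realized_value(t):+.3f}")
```

In this toy model, raising the threshold discards marginal bets with positive expected payoff, so the optimized quantity shrinks; whether that loss is worth the gain in fairness to the people being judged is exactly the normative question.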

Sadly, in order to evaluate this proposal seriously, one would have to employ mathematics. Which is, in O’Neil’s rhetoric, a form of evil magic. So, perhaps it’s best not to try.

O’Neil attributes the problems of WMDs to the incentives of the data scientists building the systems. Maybe they know that their work affects people, especially the poor, in negative ways. But they don’t care.

But as a rule, the people running the WMD’s don’t dwell on these errors. Their feedback is money, which is also their incentive. Their systems are engineered to gobble up more data and fine-tune their analytics so that more money will pour in. Investors, of course, feast on these returns and shower WMD companies with more money.

O’Neil, WMD, p.13

Calling out greed as the problem is effective and true in a lot of cases. I’ve argued myself that the real root of the technology ethics problem is capitalism: the way investors drive what products get made and deployed. This is a worthwhile point to make and one that doesn’t get made enough.

But the logical implications of this argument are off. Suppose it is true that, “as a rule”, algorithms that do harm are made by people responding to the incentives of private capital. (IF harmful algorithm, THEN private capital created it.) That does not mean that there can’t be good algorithms as well, such as those created in the public sector. In other words, there are algorithms that are not WMDs.

So the insight here has to be that private capital investment corrupts the process of designing algorithms, making them harmful. One could easily make the case that private capital investment corrupts and makes harmful many things that are not algorithmic as well. For example, the historic trans-Atlantic slave trade was a terribly evil manifestation of capitalism. It did not, as far as I know, depend on modern day computer science.

Capitalism here looks to be the root of all evil. The fact that companies are using mathematics is merely incidental. And O’Neil should know that!

Here’s what I find so frustrating about this line of argument. Mathematical literacy is critical for understanding what’s going on with these systems and how to improve society. O’Neil certainly has this literacy. But there are many people who don’t have it. There is a power disparity there which is uncomfortable for everybody. But while O’Neil is admirably raising awareness about how these kinds of technical systems can and do go wrong, the single-minded focus and framing risks giving people the wrong idea that these intellectual tools are always bad or dangerous. That is not a solution to anything, in my view. Ignorance is never more ethical than education. But there is an enormous appetite among ignorant people for being told that it is so.

References

O’Neil, Cathy. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books, 2017.

Sassen, Saskia. “Predatory Formations Dressed in Wall Street Suits and Algorithmic Math.” Science, Technology and Society 22.1 (2017): 6-20.

State regulation and/or corporate self-regulation

The dust from the recent debates about regulation versus industrial self-regulation in the data/tech/AI industry appears to be settling. The smart money is on regulation and self-regulation being complementary for attaining the goal of an industry dominated by responsible actors. This trajectory leads to centralized corporate power that is led from the top; it is a Hamiltonian, not Jeffersonian, solution, in Pasquale’s terms.

I am personally not inclined towards this solution. But I have been convinced to see it differently after a conversation today about environmentally sustainable supply chains in food manufacturing. Nestle, for example, has been internally changing its sourcing practices toward more sustainable chocolate. It’s able to finance this change from its profits, and when it does change its internal policy, it operates on a scale that’s meaningful. It is able to make this transition in part because non-profits, NGOs, and farmers’ cooperatives laid the groundwork for sustainable sourcing external to the company. This lowers the barriers to having Nestle switch over to new sources–they have already been subsidized through philanthropy and international aid investments.

Supply chain decisions, ‘make-or-buy’ decisions, are the heart of transaction cost economics (TCE) and critical to the constitution of institutions in general. What this story about sustainable sourcing tells us is that the configuration of private, public, and civil society institutions is complex, and that there are prospects for agency and change in the reconfiguration of those relationships. This is no different in the ‘tech sector’.

However, this theory of economic and political change is not popular; it does not have broad intellectual or media appeal. Why?

One reason may be because while it is a critical part of social structure, much of the supply chain is in the private sector, and hence is opaque. This is not a matter of transparency or interpretability of algorithms. This is about the fact that private institutions, by virtue of being ‘private’, do not have to report everything that they do and, probably, shouldn’t. But since so much of what is done by the massive private sector is of public import, there’s a danger of the privatization of public functions.

Another reason why this view of political change through the internal policy-making of enormous private corporations is unpopular is that it leaves decision-making up to a very small number of people–the elite managers of those corporations. The real disparity of power involved in private corporate governance means that the popular attitude towards that governance is, more often than not, irrelevant. Even less so than political elites, corporate elites are not accountable to a constituency. They are accountable, I suppose, to their shareholders, who have material interests disconnected from political will.

This disconnected shareholder will is one of the main reasons why I’m skeptical about the idea that large corporations and their internal policies are where we should place our hopes for moral leadership. But perhaps what I’m missing is the appropriate intellectual framework for how this will is shaped and what drives these kinds of corporate decisions. I still think TCE might provide insights that I’ve been missing. But I am on the lookout for other sources.

“the privatization of public functions”

An emerging theme from the conference on Trade Secrets and Algorithmic Systems was that legal scholars have become concerned about the privatization of public functions. For example, the use of proprietary risk assessment tools instead of the discretion of judges who are supposed to be publicly accountable is a problem. More generally, use of “trade secrecy” in court settings to prevent inquiry into software systems is bogus and moves more societal control into the realm of private ordering.

Many remedies were proposed. Most involved some kind of disclosure and audit to experts. The most extreme form of disclosure is making the software and, where it’s a matter of public record, training data publicly available.

It is striking to me to be encountering the call for government use of open source systems because…this is not a new issue. The conversation about federal use of open source software was alive and well over five years ago. Then, the arguments were about vendor lock-in; now, they are about accountability of AI. But the essential problem of whether core governing logic should be available to public scrutiny, and the effects of its privatization, have been the same.

If we are concerned with the reliability of a closed and large-scale decision-making process of any kind, we are dealing with problems of credibility, opacity, and complexity. The prospects of an efficient market for these kinds of systems are dim. These same market conditions are the conditions of sustainability of open source infrastructure. Failures in sustainability are manifest as software vulnerabilities, which are one of the key reasons why governments are warned against OSS now, though the measurement and evaluation of OSS vulnerabilities versus proprietary ones is methodologically fraught.

bodies and liberal publics in the 20th century and today

I finally figured something out, philosophically, that has escaped me for a long time. I feel a little ashamed that it’s taken me so long to get there, since it’s something I’ve been told in one way or another many times before.

Here is the set up: liberalism is justified by universal equivalence between people. This is based in the Enlightenment idea that all people have something in common that makes them part of the same moral order. Recognizing this commonality is an accomplishment of reason and education. Whether this shows up in Habermasian discourse ethics, according to which people may not reason about politics from their personal individual situation, or in the Rawlsian ‘veil of ignorance’, in which moral precepts are intuitively defended under the presumption that one does not know who or where one will be, liberal ideals always require that people leave something out, something that is particular to them. What gets left out is people’s bodies–meaning both their physical characteristics and, more broadly, their place in lived history. Liberalism was in many ways a challenge to a moral order explicitly based on the body, one that took ancestry and heredity very seriously. So much of the aristocratic regime was about birthright and, literally, “good breeding”. The bourgeois class, relatively self-made, used liberalism to level the moral playing field with the aristocrats.

The Enlightenment was followed by a period of severe theological and scientific racism that was obsessed with establishing differences between people based on their bodies. Institutions that were internally based on liberalism could then subjugate others, by creating an Other that was outside the moral order. Equivalently, sexism too.

Social Darwinism was a threat to liberalism because it threatened to bring back a much older notion of aristocracy. In WWII, the Nazis rallied behind such an ideology and were defeated in the West by a liberal alliance, which then established the liberal international order.

I’ve got to leave out the Cold War and Communism here for a minute, sorry.

Late modern challenges to the liberal ethos gained prominence in activist circles and the American academy during and following the Civil Rights Movement. These were and continue to be challenges because they were trying to bring bodies back into the conversation. The problem is that a rules-based order that is premised on the erasure of differences in bodies is going to be unable to deal with the political tensions that precisely do come from those bodily differences. Because the moral order of the rules was blind to those differences, the rules did not govern them. For many people, that’s an inadequate circumstance.

So here’s where things get murky for me. In recent years, you have had a tension between the liberal center and the progressive left. The progressive left reasserts the political importance of the body (“Black Lives Matter”), and assertions of liberal commonality (“All Lives Matter”) are first “pushed” to the right, but then bump into white supremacy, which is also a reassertion of the political importance of the body, on the far right. It’s worth mentioning Piketty here, I think, because to some extent he also exposed how under liberal regimes the body has secretly been the organizing principle of wealth through the inheritance of private property.

So what has been undone is the sense, necessary for liberalism, that there is something that everybody has in common which is the basis for moral order. Now everybody is talking about their bodily differences.

That is on the one hand good because people do have bodily differences and those differences are definitely important. But it is bad because if everybody is questioning the moral order it’s hard to say that there really is one. We have today, I submit, a political nihilism crisis due to our inability to philosophically imagine a moral order that accounts for bodily difference.

This is about the Internet too!

Under liberalism, you had an idea that a public was a place people could come to agree on the rules. Some people thought that the Internet would become a gigantic public where everybody could get together and discuss the rules. Instead what happened was that the Internet became a place where everybody could discuss each other’s bodies. People with similar bodies could form counterpublics and realize their shared interests as body-classes. (This piece by David Weinberger critiquing the idea of an ‘echo chamber’ is inspiring.) Each of these body-based counterpublics forms its own internal moral order whose purpose is to mobilize its body-interests against other kinds of bodies. I’m talking about both black lives matter and white supremacists here, radical feminists and MRA’s. They are all buffeting liberalism with their body interests.

I can’t say whether this is “good” or “bad” because the moral order is in flux. There is apparently no such thing as neutrality in a world of pervasive body agonism. That may be its finest criticism: body agonism is politically unstable. Body agonism leads to body anarchy.

I’ll conclude with two points. The first is that the Enlightenment view of people having something in common (their personhood, their rationality, etc.) which put them in the same moral order was an intellectual and institutional accomplishment. People do not naturally get outside themselves and put themselves in other people’s shoes; they have to be educated to do it. Perhaps there is a kernel of truth here about what moral education is that transcends liberal education. We have to ask whether today’s body agonism is an enlightened state relative to moral liberalism because it acknowledges a previously hidden descriptive reality of body difference and is no longer so naive, or if body agonism is a kind of ethical regress because it undoes moral education, reducing us to a more selfish state of nature, of body conflict, albeit in a world full of institutions based on something else entirely.

The second point is that there is an alternative to liberal order which appears to be alive and well in many places. This is an order that is not based on individual attitudes for legitimacy, but rather is more about the endurance of institutions for their own sake. I’m referring of course to authoritarianism. Without the pretense of individual equality, authoritarian regimes can focus on maintaining power on their own terms. Authoritarian regimes do not need to govern through moral order. U.S. foreign policy used to be based on the idea that such amoral governance would be shunned. But if body agonism has replaced the U.S. international moral order, we no longer have an ideology to export or enforce abroad.

General intelligence, social privilege, and causal inference from factor analysis

I came upon this excellent essay by Cosma Shalizi about how factor analysis has been spuriously used to support the scientific theory of General Intelligence (i.e., IQ). Shalizi, if you don’t know, is one of the best statisticians around. He writes really well and isn’t afraid to point out major blunders in things. He’s one of my favorite academics, and I don’t think I’m alone in this assessment.

First, a motive: Shalizi writes this essay because he thinks the scientific theory of General Intelligence, or a g factor that is some real property of the mind, is wrong. This theory is famous because (a) a lot of people DO believe in IQ as a real feature of the mind, and (b) a significant percentage of these people believe that IQ is hereditary and correlated with race, and (c) the ideas in (b) are used to justify pernicious and unjust social policy. Shalizi, being a principled statistician, appears to take scientific objection to (a) independently of his objection to (c), and argues persuasively that we can reject (a). How?

Shalizi’s point is that the general intelligence factor g is a latent variable that was supposedly discovered using a factor analysis of several different intelligence tests that were supposed to be independent of each other. You can take the data from these data sets and do a dimensionality reduction (that’s what factor analysis is) and get something that looks like a single factor, just as you can take a set of cars and do a dimensionality reduction and get something that looks like a single factor, “size”. The problem is that “intelligence”, just like “size”, can also be a combination of many other factors that are only indirectly associated with each other (height, length, mass, mass of specific components independent of each other, etc.). Once you have many different independent factors combining into one single reduced “dimension” of analysis, you no longer have a coherent causal story of how your general latent variable caused the phenomenon. You have, effectively, correlation without demonstrated causation and, moreover, the correlation is a construct of your data analysis method, and so isn’t really even telling you what correlations normally tell you.

To put it another way: the fact that some people seem to be generally smarter than other people can be due to thousands of independent factors that happen to combine when people apply themselves to different kinds of tasks. If some people were NOT seeming generally smarter than others, that would allow you to reject the hypothesis that there was general intelligence. But the mere presence of the aggregate phenomenon does not prove the existence of a real latent variable. In fact, Shalizi goes on to say, when you do the right kinds of tests to see if there really is a latent factor of ‘general intelligence’, you find that there isn’t any. And so it’s just the persistent and possibly motivated interpretation of the observational data that allows the stubborn myth of general intelligence to continue.
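To make Shalizi’s point concrete, here is a minimal simulation (my own sketch, not Shalizi’s code): generate test scores from many independent abilities, with no general factor anywhere in the model, and a dominant first factor appears anyway.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sketch: 1000 people, 20 *independent* latent abilities.
n_people, n_abilities, n_tests = 1000, 20, 8
abilities = rng.normal(size=(n_people, n_abilities))

# Each test taps a different positive mixture of the independent abilities,
# plus noise -- there is no single "g" in this generative model.
loadings = rng.uniform(0.1, 1.0, size=(n_abilities, n_tests))
scores = abilities @ loadings + rng.normal(scale=0.5, size=(n_people, n_tests))

# Because every loading is positive, all tests correlate positively, and the
# first factor of the correlation matrix soaks up most of the variance --
# looking just like a "general factor" even though none exists.
corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]
print("share of variance on first factor:",
      round(eigenvalues[0] / eigenvalues.sum(), 2))
```

This is essentially Thomson’s old sampling model: a positive manifold and a big first factor are exactly what many small, independent causes produce, so observing them is weak evidence for a real latent g.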

Are you following so far? If you are, it’s likely because you were already skeptical of IQ and its racial correlates to begin with. Now I’m going to switch it up though…

It is fairly common for educated people in the United States (for example) to talk about “privilege” of social groups. White privilege, male privilege–don’t tell me you haven’t at least heard of this stuff before; it is literally everywhere on the center-left news. Privilege here is considered to be a general factor that adheres in certain social groups. It is reinforced by all manner of social conditioning, especially through implicit bias in individual decision-making. This bias is so powerful it extends not just to cases of direct discrimination but also to cases where discrimination happens in a mediated way, for example through technical design. The evidence for these kinds of social privileging effects is obvious: we see inequality everywhere, and we can see who is more powerful and benefited by the status quo and who isn’t.

You see where this is going now. I have the momentum. I can’t stop. Here it goes: Maybe this whole story about social privilege is as spuriously supported as the story about general intelligence? What if both narratives were over-interpretations of data that serve a political purpose, but which are not in fact based on sound causal inference techniques?

How could this be? Well, we might gather a lot of data about people: wealth, status, neighborhood, lifespan, etc. And then we could run a dimensionality reduction/factor analysis and get a significant factor that we could name “privilege” or “power”. Potentially that’s a single, real, latent variable. But also potentially it’s hundreds of independent factors spuriously combined into one. It would probably, if I had to bet on it, wind up looking a lot like the factor for “general intelligence”, which plays into the whole controversy about whether and how privilege and intelligence get confused. You must have heard the debates about, say, representation in the technical (or other high-status, high-paying) work force? One side says the smart people get hired; the other side say it’s the privileged (white male) people that get hired. Some jerk suggests that maybe the white males are smarter, and he gets fired. It’s a mess.

I’m offering you a pill right now. It’s not the red pill. It’s not the blue pill. It’s some other colored pill. Green?

There is no such thing as either general intelligence or group-based social privilege. Each of these is the result of sloppy data compression over thousands of factors with a loose and subtle correlational structure. The reason why patterns of social behavior that we see are so robust against interventions is that each intervention can work against only one or two of these thousands of factors at a time. Discovering the real causal structure here is hard partly because the effect sizes are very small. Anybody with a simple explanation, especially a politically convenient explanation, is lying to you but also probably lying to themselves. We live in a complex world that resists our understanding and our actions to change it, though it can be better understood and changed through sound statistics. Most people aren’t bothering to do this, and that’s why the world is so dumb right now.