Digifesto


A few brief notes towards “Procuring Cybersecurity”

I’m shifting research focus a bit and wanted to jot down a few notes. The context for the shift is that I have the pleasure of organizing a roundtable discussion for NYU’s Center for Cybersecurity and Information Law Institute, working closely with Thomas Streinz of NYU’s Guarini Global Law and Tech.

The context for the workshop is the steady feed of news about global technology supply chains and how they are not just relevant to “cybersecurity”, but in some respects are constitutive of cyberinfrastructure and hence the field of its security.

I’m using “global technology supply chains” rather loosely here, but this includes:

  • Transborder personal data flows as used in e-commerce
  • Software- (and Infrastructure-)as-a-Service being marketed internationally (Google services used abroad, for example)
  • Enterprise software import/export
  • Electronics manufacturing and distribution.

Many concerns about cybersecurity as a global phenomenon circulate around the imagined or actual supply chain. These are sometimes national security concerns that result in real policy, as when Australia recently banned Huawei and ZTE from supplying 5G network equipment for fear that it would provide a vector of interference from the Chinese government.

But the nationalist framing is certainly not the whole story. I’ve heard anecdotally that after the Snowden revelations, Microsoft internally began to treat the U.S. government as a cybersecurity “adversary”. Corporate tech vendors naturally don’t want to be known as vectors for national surveillance, as this cuts into their global market share.

Governments and corporations have different cybersecurity incentives and threat models. These models intersect and themselves create the dynamic cybersecurity field. For example, the Chinese government has viewed foreign software vendors as cybersecurity threats and has responded by mandating source code disclosure. But since disclosure is a vector of potential IP theft, foreign vendors have balked, seeing the mandate itself as a threat (Ahmed and Weber, 2018). Complicating things further, a defensive “cybersecurity” measure can also serve the goal of protecting domestic technology innovation, which can be framed as providing a nationalist “cybersecurity” edge in the long run.

What, if anything, prevents a total cyberwar of all against all? One answer is trade agreements that level the playing field, or at least establish rules for the game. Another is open technology and standards, which provide an alternative field driven by the benefits of interoperability rather than proprietary interest and secrecy. Is it possible to capture any of this in an accurate model or theory?

I love having the opportunity to explore these questions, as they sit at the intersection of my empirical work on software supply chains (Benthall et al., 2016; Benthall, 2017) and my theoretical work on data economics in my dissertation. My hunch for some time has been that there’s a dearth of solid economic theory for the contemporary digital economy, and this is one way of getting at that.

References

Ahmed, S., & Weber, S. (2018). China’s long game in techno-nationalism. First Monday, 23(5). 

Benthall, S., Pinney, T., Herz, J. C., Plummer, K., Benthall, S., & Rostrup, S. (2016). An ecological approach to software supply chain risk management. In 15th Python in Science Conference.

Benthall, S. (2017, September). Assessing software supply chain risk using public data. In 2017 IEEE 28th Annual Software Technology Conference (STC) (pp. 1-5). IEEE.


Notes on O’Neil, Chapter 2, “Bomb Parts”

Continuing with O’Neil’s Weapons of Math Destruction on to Chapter 2, “Bomb Parts”. This is a popular book and these are quick chapters. But that’s no reason to underestimate them! This is some of the most lucid work I’ve read on algorithmic fairness.

This chapter talks about three kinds of “models” used in prediction and decision making, with three examples. O’Neil speaks highly of the kinds of models used in baseball to predict the trajectory of hits and determine the optimal placement of fielders. (OK, I’m not so good at baseball terms.) These are good models, O’Neil says, because they are transparent, they are consistently adjusted with new data, and their goals are well defined.

O’Neil then very charmingly writes about the model she uses mentally to determine how to feed her family. She juggles a lot of variables: the preferences of her kids, the nutrition and cost of ingredients, and time. This is all hugely relatable–everybody does something like this. Her point, it seems, is that this form of “model” encodes a lot of opinions or “ideology” because it reflects her values.

O’Neil then discusses recidivism prediction, specifically the LSI-R (Level of Service Inventory–Revised) tool. It asks questions like “How many previous convictions have you had?” and uses the answers to predict the likelihood of future recidivism. The problem is that (a) this is sensitive to overpolicing in certain neighborhoods, which has little to do with actual recidivism rates (as opposed to rearrest rates), and (b) black neighborhoods, for example, are more likely to be overpoliced, meaning that the tool, which is not very good at predicting recidivism, has disparate impact. This is an example of what O’Neil calls a weapon of math destruction (WMD), the book’s eponymous concept.

She argues that the three qualities of a WMD are Scale, Opacity, and Damage. Which makes sense.

As I’ve said, I think this is a better take on algorithmic ethics than almost anything I’ve read on the subject before. Why?

First, it doesn’t use the word “algorithm” at all. That is huge, because 95% of the time the use of the word “algorithmic” in the technology-and-society literature is stupid. People use “algorithm” when they really mean “software”. Now, they use “AI System” to mean “a company”. It’s ridiculous.

O’Neil makes it clear in this chapter that what she’s talking about are different kinds of models. Models can be in one’s head (as in her plan for feeding her family) or in a computer, and both kinds of models can be racist. That’s a helpful, sane view. It’s been the consensus of computer scientists, cognitive scientists, and AI types for decades.

The problem with WMDs, as opposed to other, better models, is that WMD models are unhinged from reality. O’Neil’s complaint is not with the use of models, but rather that models are being used without being properly trained on well-sampled data and sound statistics. WMDs are not artificial intelligences; they are artificial stupidities.

In more technical terms, it seems like the problem with WMDs is not that they don’t properly trade off predictive accuracy with fairness, as some computer science literature would suggest is necessary. It’s that the systems have high error rates in the first place because the training and calibration systems are poorly designed. What’s worse, this avoidable error is disparately distributed, causing more harm to some groups than others.
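To make this concrete, here is a toy simulation (my own sketch with made-up parameters, not O’Neil’s example and not anything resembling the actual LSI-R): two groups have identical underlying propensities, but one group’s training labels are inflated by an assumed overpolicing effect, and a model trained on rearrest ends up with a higher false positive rate, judged against true recidivism, for that group.

```python
# Toy simulation (hypothetical parameters): disparate error from biased labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)              # 0 = lightly policed, 1 = heavily policed
risk = rng.normal(0, 1, n)                 # latent propensity, identical across groups
true_recid = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

# Observed label is rearrest: in the heavily policed group, some people are
# rearrested regardless of true recidivism (the assumed overpolicing effect).
overpoliced = (group == 1) & (rng.random(n) < 0.2)
rearrest = np.maximum(true_recid, overpoliced.astype(int))

# Train on the distorted label, using a noisy risk proxy plus group membership.
X = np.column_stack([risk + rng.normal(0, 0.5, n), group])
pred = LogisticRegression().fit(X, rearrest).predict(X)

for g in (0, 1):
    neg = (true_recid == 0) & (group == g)
    fpr = ((pred == 1) & neg).sum() / neg.sum()
    print(f"group {g}: false positive rate against true recidivism = {fpr:.2f}")
```

If the same model is retrained on the true recidivism labels (which, of course, the real system cannot observe), the gap closes; that is the sense in which the harm here comes from avoidable, disparately distributed error rather than from modeling as such.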

This is a wonderful and eye-opening account of unfairness in the models used by automated decision-making systems (note the language). Why? Because it shows that there is a connection between statistical bias, the kind of bias that creates distortions in a quantitative predictive process, and social bias, the kind of bias people worry about politically; ordinary usage slides between the two senses of the term. If there is statistical bias that weighs against some social group, then that is definitely, 100%, a form of bias.

Importantly, this kind of bias–statistical bias–is not something that every model must have. Only badly made models have it. It’s something that can be mitigated using scientific rigor and sound design. If we see the problem the way O’Neil sees it, then we can see clearly how better science, applied more rigorously, is also good for social justice.

As a scientist and technologist, it’s been terribly discouraging in the past years to be so consistently confronted with a false dichotomy between sound engineering and justice. At last, here’s a book that clearly outlines how the opposite is the case!

“the politicization of the social” and “politics of identity” in Omi and Winant, Ch. 6

A confusing debate in my corner of the intellectual Internet is about (a) whether the progressive left has a coherent intellectual stance that can be articulated, (b) what to call this stance, (c) whether the right-wing critics of this stance have the intellectual credentials to refer to it and thereby land any kind of rhetorical punch. What may be true is that both “sides” reflect social movements more than they reflect coherent philosophies as such, and so trying to bridge between them intellectually is fruitless.

Happily, I have been reading through Omi and Winant, which among other things outlines a history of what I think of as the progressive left, or the “social justice” and “identity politics” movement, in the United States. They address this in their Chapter 6: “The Great Transformation”. They use “the Great Transformation” to refer to the “racial upsurges” of the 1950s and 1960s.

They are, as far as I can tell, the only people who ever use “The Great Transformation” to refer to this period. I don’t think it is going to stick. They name it this because they see this period as a great victorious period for democracy in the United States. Omi and Winant refer to previous periods in the United States as “racial despotism”, meaning that the state was actively treating nonwhites as second class citizens and preventing them from engaging in democracy in a real way. “Racial democracy”, which would involve true integration across race lines, is an ideal future or political trajectory that was approached during the Great Transformation but not realized fully.

The story of the civil rights movements of the mid-20th century is textbook material, and I won’t repeat Omi and Winant’s account, which is interesting for a lot of reasons. One reason it is interesting is how explicitly their analysis is influenced by Gramsci. As the “despotic” elements of United States power structures fade, the racial order is maintained less by coercion and more by consent. A power disparity in a social order maintained by consent is, in Gramscian theory, a hegemony.

They explain the Great Transformation as being due to two factors. One was the decline of the ethnicity paradigm of race, which had perhaps naively assumed that racial conflicts could be resolved through assimilation and recognition of ethnic differences without addressing the politically entrenched mechanisms of racial stratification.

The other factor was the rise of new social movements characterized, in alliance with second-wave feminism, by the politicization of the social, whereby social identity and demographic categories were made part of public political discourse rather than kept private. This is the birth of the “politics of identity”, or “identity politics” for short. These were the original social justice warriors. And they attained some real political victories.

The reason why these social movements are not exactly normalized today is that there was a conservative reaction to resist these changes in the 1970s. The way Omi and Winant tell it, the “colorblind ideology” of the early 2000s was the culmination of a kind of political truce between “racial despotism” and “racial democracy”, a “racial hegemony”. Gilman has called this “racial liberalism”.

So what does this mean for identity politics today? It means it has its roots in political activism that was once very radical. It really is influenced by Marxism, as these movements were. It means that its co-option by the right is not actually new, as “reverse racism” was one of the inventions of the groups that originally resisted the Civil Rights movement in the 1970s. What’s new is the crisis of hegemony, not the constituent political elements that were its polar extremes, which have been around for decades.

What it also means is that identity politics has been, from its start, a tool for political mobilization. It is not a philosophy of knowledge, or of how to live the good life, or a world view in a richer sense. It serves a particular instrumental purpose. Omi and Winant talk about how the politics of identity is “attractive”, how it is a contagion. These are positive terms for them; they are impressed at how anti-racism spreads. These days I am often referred to Phillips’ report, “The Oxygen of Amplification”, which is about preventing the spread of extremist views by reducing the amount of reporting on them. It is only fair to point out that identity politics as a left-wing innovation was at one point an “extremist” view, and that proponents of that view do use media effectively to spread it. This is just how media-based organizing tactics work, now.

From social movements to business standards

Matt Levine has a recent piece discussing how discovering the history of sexual harassment complaints about a company’s leadership is becoming part of standard due diligence before an acquisition. Implicitly, the threat of liability, and presumably the costs of a public relations scandal, are material to the value of the company being acquired.

Perhaps relatedly, the National Venture Capital Association has added to its Model Legal Documents a slew of policies related to harassment and discrimination, codes of conduct, attracting and retaining diverse talent, and family friendly policies. Rumor has it that venture capitalists will now encourage companies they invest in to adopt these tested versions of the policies, much as an organization would adopt a tested and well-understood technical standard.

I have in various researcher roles studied social movements and political change, but these studies have left me with the conclusion that changes to culture are rarely self-propelled, but rather are often due to more fundamental changes in demographics or institutions. State legislation is very slow to move and limited in its range, and so often trails behind other amassing of power and will.

Corporate self-regulation, on the other hand, through standards, contracts, due diligence, and the like, seems to be quite adaptive. This is leading me to the conclusion that a best-kept secret of cultural change is that some of its main drivers are actually deeply embedded in corporate law. Corporate law has the reputation of being a dry subject that sucks recent law grads into soulless careers. But what if that isn’t what corporate law is? What if corporate law is really where the action is?

In broader terms, the adaptivity of corporate policy to changing demographics and social needs perhaps explains the paradox of “progressive neoliberalism”, or the observation that the emerging professional business class seems to be socially liberal whether or not it is fiscally conservative. Professional culture requires, due to antidiscrimination law and other policies, the compliance of its employees with a standard of “political correctness”. People can’t be hostile to each other in the workplace or else they will get fired, and they especially can’t be hostile to anybody on the basis of their membership in a protected category. This was enshrined in law long ago. Part of the role of educational institutions is to teach students a coherent story about why these rules are what they are and how they are not just legally mandated but morally compelling. So the professional class has an ideology of inclusivity because it must.

On “Racialization” (Omi and Winant, 2014)

Notes on Omi and Winant, 2014, Chapter 4, Section: “Racialization”.

Summary

Race is often seen as either an objective category, or an illusory one.

Viewed objectively, it is seen as a biological property, tied to phenotypic markers and possibly other genetic traits. It is viewed as an ‘essence’. Omi and Winant argue that the concept of ‘mixed-race’ depends on this kind of essentialism, as it implies a kind of blending of essences. This is the view associated with “scientific” racism, most prevalent in the prewar era.

Viewed as an illusion, race is seen as an ideological construct: an epiphenomenon of culture, class, or peoplehood, formed as a kind of “false consciousness”, in the Marxist terminology. This view is associated with certain critics of affirmative action who argue that any racial classification is inherently racist.

Omi and Winant are critical of both perspectives, and argue for an understanding of race as socially real and grounded non-reducibly in phenomic markers but ultimately significant because of the social conflicts and interests constructed around those markers.

They define race as: “a concept that signifies and symbolizes social conflicts and interests by referring to different types of human bodies.”

The visual aspect of race is irreducible, and becomes significant when, for example, it becomes “understood as a manifestation of more profound differences that are situated within racially identified persons: intelligence, athletic ability, temperament, and sexuality, among other traits.” These “understandings”, which it must be said may be fallacious, “become the basis to justify or reinforce social differentiation.”

This process of adding social significance to phenomic markers is, in O&W’s language, racialization, which they define as “the extension of racial meanings to a previously racially unclassified relationship, social practice, or group.” They argue that racialization happens at both macro and micro scales, ranging from the consolidation of the world-system through colonization to incidents of racial profiling.

Race, then, is a concept that refers to different kinds of bodies by phenotype and to the meanings and social practices ascribed to them. When racial concepts are circulated and accepted as ‘social reality’, racial differences are no longer dependent on visual difference alone, but take on a life of their own.

Omi and Winant therefore take a nuanced view of what it means for a category to be socially constructed, and it is a view that has concrete political implications. They consider the question, raised frequently, as to whether “we” can “get past” race, or go beyond it somehow. (Recall that this edition of the book was written during the Obama administration and is largely a critique of the idea, which seems silly now, that his election made the United States “post-racial”).

Omi and Winant see this framing as unrealistically utopian and based on the extreme view that race is “illusory”. It poses race as a problem, a misconception of the past. A more effective position, they claim, would note that race is an element of social structure, not an irregularity in it. “We” cannot naively “get past it”, but neither do “we” need to accept the erroneous conclusion that race is a fixed biological given.

Comments

Omi and Winant’s argument here is mainly one about the ontology of social forms. In my view, this question of social form ontology is one of the “hard problems” remaining in philosophy, perhaps equivalent to if not more difficult than the hard problem of consciousness. So no wonder it is such a fraught issue.

The two poles of thinking about race that they present initially, the essentialist view and the epiphenomenal view, had their heyday in particular historical intellectual movements. Proponents of these positions are still popularly active today, though perhaps it’s fair to say that both extremes are now marginalized out of the intellectual mainstream. Despite nobody really understanding how social construction works, most educated people are probably willing to accept that race is socially constructed in one way or another.

It is striking, then, that Omi and Winant’s view of the mechanism of racialization, which involves the reading of ‘deeper meanings’ into phenomic traits, is essentially a throwback to the objective, essentializing viewpoint. Perhaps there is a kind of cognitive bias, maybe representativeness bias or fundamental attribution bias, which is responsible for the cognitive errors that make racialization possible and persistent.

If so, then the social construction of race would be due as much to the limits of human cognition as to the circulation of concepts. That would explain the temptation to believe that we can ‘get past’ race, because we can always believe in the potential for a society in which people are smarter and are trained out of their basic biases. But Omi and Winant would argue that this is utopian. Perhaps the wisdom of sociology and social science in general is the conservative recognition of the widespread implications of human limitation. As the social expert, one can take the privileged position that notes that social structure is the result of pervasive cognitive error. That pervasive cognitive error is perhaps a more powerful force than the forces developing and propagating social expertise. Whether it is or is not may be the existential question for liberal democracy.

An unanswered question at this point is whether, if race were broadly understood as a function of social structure, it remains as forceful a structuring element as if it is understood as biological essentialism. It is certainly possible that, if understood as socially contingent, the structural power of race will steadily erode through such statistical processes as regression to the mean. In terms of physics, we can ask whether the current state of the human race(s) is at equilibrium, or heading towards an equilibrium, or diverging in a chaotic and path-dependent way. In any of these cases, there is possibly a role to be played by technical infrastructure. In other words, there are many very substantive and difficult social scientific questions at the root of the question of whether and how technical infrastructure plays a role in the social reproduction of race.

interesting article about business in China

I don’t know much about China, really, so I’m always fascinated to learn more.

This FT article, “Anbang arrests demonstrates hostility to business”, by Jamil Anderlini, provides some wonderful historical context to a story about the arrest of an insurance oligarch.

In ancient times, merchants were at the very bottom of the four official social classes, below warrior-scholars, farmers and artisans. Although some became very rich they were considered parasites in Chinese society.

Ever since the Han emperors established the state salt monopoly in the second century BCE (remnants of which remain to this day), large-scale business enterprises have been controlled by the state or completely reliant on the favour of the emperor and the bureaucrat class.

In the 20th century, the Communist emperor Mao Zedong effectively managed to stamp out all private enterprise for a while.

Until the party finally allowed “capitalists” to join its ranks in 2002, many of the business activities carried out by the resurgent merchant class were technically illegal.

China’s rich lists are populated by entrepreneurs operating in just a handful of industries — particularly real estate and the internet.

Tycoons like Mr Wu who emerge in state-dominated sectors are still exceedingly rare. They are almost always closely linked to one of the old revolutionary families exercising enormous power from the shadows.

Everything about this is interesting.

First, in Western scholarship we rarely give China credit for its history of bureaucracy in the absence of capitalism. In the well-known Weberian account, bureaucracy is an institutional invention that provides regular rule of law so that capitalism can thrive. But China’s history is statist “from ancient times”, with effective bureaucracy from the beginning. A managerialist history, perhaps.

Which makes the second point so unusual: why, given this long history of bureaucratic rule, are Internet companies operating in a comparatively unregulated way? This seems like a massive concession of power, not unlike how (arguably) the government of the United States conceded a lot of power to Silicon Valley under the Obama administration.

The article dramatically foreshadows a potential power struggle between Xi Jinping’s consolidated state and the tech giant oligarchs:

Now that Chinese President Xi Jinping has abolished his own term limits, setting the stage for him to rule for life if he wants to, the system of state patronage and the punishment of independent oligarchs is likely to expand. Any company or billionaire who offends the emperor or his minions will be swiftly dealt with in the same way as Mr Wu.

There is one group of Chinese companies with charismatic — some would say arrogant — founders that enjoy immense economic power in China today. They would seem to be prime candidates if the assault on private enterprise is stepped up.

Internet giants Alibaba, Tencent and Baidu are not only hugely profitable, they control the data that is the lifeblood of the modern economy. That is why Alibaba founder Jack Ma has repeatedly said, including to the FT, that he would gladly hand his company over to the state if Beijing ever asked him to. Investors in BABA can only hope it never comes to that.

That is quite the expression of feudal fealty from Jack Ma. Truly, a totally different business culture from that of the United States.

Exit vs. Voice as Defecting vs. Cooperation as …

These dichotomies, though often thought of separately, are actually the same:

Cooperation          | Defection
---------------------|------------------
Voice (Hirschman)    | Exit (Hirschman)
Lifeworld (Habermas) | System (Habermas)
Power (Arendt)       | Violence (Arendt)
Institutions         | Markets

Marcuse, de Beauvoir, and Badiou: reflections on three strategies

I have written in this blog about three different philosophers who articulated a vision of hope for a more free world, including in their account an understanding of the role of technology. I would like to compare these views because nuanced differences between them may be important.

First, let’s talk about Marcuse, a Frankfurt School thinker whose work was an effective expression of philosophical Marxism that catalyzed the New Left. Marcuse was, like other Frankfurt School thinkers, concerned about the role of technology in society. His proposed remedy was “the transcendent project“, which involves an attempt at advancing “the totality” through an understanding of its logic and action to transform it into something that is better, more free.

As I began to discuss here, there is a problem with this kind of Marxist aspiration for a transformation of all of society through philosophical understanding, which is this: the political and technical totality exists as it does in no small part to manage its own internal information flows. Information asymmetries and differentiation of control structures are a feature, not a bug. The convulsions caused by the Internet as it tears and repairs the social fabric have not created the conditions of unified enlightened understanding. Rather, they have exposed that given nearly boundless access to information, most people will ignore it and maintain, against all evidence to the contrary, the dignity of one who has a valid opinion.

The Internet makes a mockery of expertise, and makes no exception for the expertise necessary for the Marcusian “transcendental project”. Expertise may be replaced with the technological apparatus of artificial intelligence and mass data collection, but the latter are a form of capital whose distribution is part of the totality. If they are having their transcendent effect today, as the proponents of AI claim, this effect is in the hands of a very few. Their motivations are inscrutable. As they have their own opinions and courtiers, writing for them is futile. They are, properly speaking, a great uncertainty that shows that centralized control does not close down all options. It may be that the next defining moment in history is set by how Jeff Bezos decides to spend his wealth, and that is his decision alone. For “our” purposes, yours, my reader, and mine, this arbitrariness of power must be seen as part of the totality to be transcended, if that is possible.

It probably isn’t. And if it Really isn’t, that may be the best argument for something like the postmodern breakdown of all epistemes. There are at least two strands of postmodern thought coming from the denial of traditional knowledge and university structure. The first is the phenomenological privileging of subjective experience. This approach has the advantage of never being embarrassed by the fact that the Internet is constantly exposing us as fools. Rather, it allows us to narcissistically and uncritically indulge in whatever bubble we find ourselves in. The alternative approach is to explicitly theorize about one’s finitude and the radical implications of it, to embrace a kind of realist skepticism or at least an acknowledgement of the limitations of the human condition.

It’s this latter approach that was taken up by the existentialists in the mid-20th century. In particular, I keep returning to de Beauvoir as a hopeful voice that recognizes a role for science that is not totalizing but nevertheless liberatory. De Beauvoir does not take aim, like Marcuse and the Frankfurt School, at societal transformation. Her concern is with individual transformation, which is, given the radical uncertainty of society, a far more tractable problem. Individual ethics are based in local effects, not grand political outcomes. The desirable local effects are personal liberation and the liberation of those one comes in contact with. Science, like other activities, is a way of opening new possibilities, not limited to what is instrumental for control.

Such a view of incremental, local, individual empowerment and goodness seems naive in the face of pessimistic views of society’s corruption. Whether these be economic or sociological theories of how inequality and oppression are locked into society, and however emotionally compelling and widespread they may be in social media, it is necessary by our previous argument to remember that these views are always mere ideology, not scientific fact, because an accurate totalizing view of society is impossible given real constraints on information flow and use. Totalizing ideologies that are not rigorous in their acceptance of basic realistic points are a symptom of more complex social structure (i.e., the distribution of capitals, the reproduction of habitus), not a definition of it.

It is consistent for a scientific attitude to deflate political ideology because this deflation is an opening of possibility against both utopian and dystopian trajectories. What’s missing is a scientific proof of this very point, comparable to a Halting Problem or Incompleteness Theorem, but for social understanding.

A last comment, comparing Badiou to de Beauvoir and Marcuse. Badiou’s theory of the Event as the moment that may be seized to effect a transformation is perhaps a synthesis of existentialist and Marxian philosophies. Badiou is still concerned with transcendence, i.e. the moment when, given one assumed structure to life or reality or psychology, one discovers an opening into a renewed life with possibilities that the old model did not allow. But (at least as far as I have read him, which is not enough) he sees the Event as something that comes from without. It cannot be predicted or anticipated within the system but is instead a kind of grace. Without breaking explicitly from professional secularism, Badiou’s work suggests that we must have faith in something outside our understanding to provide an opportunity for transcendence. This is opposed to the more muscular theories described above: Marcuse’s theory of transcendent political activism and de Beauvoir’s active individual projects are not as patient.

I am still young and strong and so prefer the existentialist position on these matters. I am politically engaged to some extent and so, as an extension of my projects of individual freedom, am in search of opportunities for political transcendence as well–a kind of Marcuse light, as politics like science is a field of contest that is reproduced as its games are played and this is its structure. But life has taught me again and again to appreciate Badiou’s point as well, which is the appreciation of the unforeseen opportunity, the scientific and political anomaly.

What does this reflection conclude?

First, it acknowledges the situatedness and fragility of expertise, which deflates grand hopes for transcendent political projects. Pessimistic ideologies that characterize the totality as beyond redemption are false; indeed it is characteristic of the totality that it is incomprehensible. This is a realistic view, and transcendence must take it seriously.

Second, it acknowledges the validity of more localized liberatory projects despite the first point.

Third, it acknowledges that the unexpected event is a feature of the totality to be embraced, contrary to what pessimistic ideologies suggest. The latter, far from encouraging transcendence, are blinders that prevent the recognition of events.

Because realism requires that we not abandon core logical principles despite our empirical uncertainty, you may permit one more deduction. To the extent that actors in society pursue the de Beauvoiran strategy of engaging in local liberatory projects that affect others, the probability of a Badiousian event in the life of another increases. Solipsism is false, and so (to put it tritely) “random acts of kindness” do have their effect on the totality, in aggregate. In fact, there may be no more radical political agenda than this opening up of spaces of local freedom, which shrugs off the depression of pessimistic ideology and suppression of technical control. Which is not a new view at all. What is perhaps surprising is how easy it may be.

Values in design and mathematical impossibility

Under pressure from the public, and no doubt with sincere interest in the topic, computer scientists have taken up the difficult task of translating commonly held values into the mathematical forms that can be used for technical design. Commonly, what these researchers discover is some form of mathematical impossibility of achieving a number of desirable goals at the same time. This work has demonstrated the impossibility of having a classifier that is fair with respect to a social category without data about that very category (Dwork et al., 2012), of having a classifier that is both statistically well calibrated for the prediction of properties of persons and equalizes the false positive and false negative rates across partitions of that population (Kleinberg et al., 2016), of preserving the privacy of individuals after an arbitrary number of queries to a database, however obscured (Dwork, 2008), and of a coherent notion of proxy variable use in privacy and fairness applications that is based on program semantics (as opposed to syntax) (Datta et al., 2017).
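To illustrate the Kleinberg et al. result in miniature (a numerical sketch of my own, not their proof), consider a score that is perfectly calibrated within each of two groups. If the groups have different base rates, a fixed decision threshold will generally produce different false positive rates:

```python
# Toy check: calibration within groups + different base rates => unequal FPRs.
import numpy as np

def fpr_for_calibrated_scores(score_support, n=200_000, threshold=0.5, seed=0):
    """False positive rate for a group whose scores are drawn uniformly from
    score_support, with outcomes generated so that P(y = 1 | score = s) = s."""
    rng = np.random.default_rng(seed)
    s = rng.choice(np.asarray(score_support), n)
    y = (rng.random(n) < s).astype(int)   # calibrated by construction
    flagged = s >= threshold
    return (flagged & (y == 0)).sum() / (y == 0).sum()

# Each score value means the same thing in both groups, but group B's
# distribution puts more mass on high scores (a higher base rate), so more of
# its true negatives end up above the threshold.
print("group A FPR:", round(fpr_for_calibrated_scores([0.2, 0.4, 0.6, 0.8]), 3))
print("group B FPR:", round(fpr_for_calibrated_scores([0.2, 0.6, 0.8, 0.8]), 3))
```

Equalizing those error rates would require either distorting the scores (breaking calibration) or predicting perfectly, which is the tradeoff the theorem makes precise.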

These are important results. An important thing about them is that they transcend the narrow discipline in which they originated. As mathematical theorems, they will be true whether or not they are implemented on machines or in human behavior. Therefore, these theorems have a role comparable to other core mathematical theorems in social science, such as Arrow’s Impossibility Theorem (Arrow, 1950), a theorem about the impossibility of having a voting system with reasonable desiderata for determining social welfare.

There can be no question of the significance of this kind of work. It was significant a hundred years ago. It is perhaps of even more immediate, practical importance when so much public infrastructure is computational. For what computation is is automation of mathematics, full stop.

There are some scholars, even some ethicists, for whom this is an unwelcome idea. I have been recently told by one ethics professor that to try to mathematize core concepts in ethics is to commit a “category mistake”. This is refuted by the clearly productive attempts to do this, some of which I’ve cited above. This belief that scientists and mathematicians are on a different plane than ethicists is quite old: Hannah Arendt argued that scientists should not be trusted because their mathematical language prevented them from engaging in normal political and ethical discourse (Arendt, 1959). But once again, this recent literature (as well as much older literature in such fields as theoretical economics) demonstrates that this view is incorrect.

There are many possible explanations for the persistence of the view that mathematics and the hard sciences do not concern themselves with ethics, are somehow lacking in ethical education, or that engineers require non-technical people to tell them how to engineer things more ethically.

One reason is that the sciences are much broader in scope than the ethical results mentioned here. It is indeed possible to get a specialist’s education in a technical field without much ethical training, even in the mathematical ethics results mentioned above.

Another reason is that whereas understanding the mathematical tradeoffs inherent in certain kinds of design is an important part of ethics, it can be argued by others that what’s most important about ethics is some substantive commitment that cannot be mathematically defended. For example, suppose half the population believes that it is most ethical for members of the other half to treat them with special dignity and consideration, at the expense of the other half. It may be difficult to arrive at this conclusion from mathematics alone, but this group may advocate for special treatment out of ethical consideration nonetheless.

These two reasons are similar. The first states that mathematics includes many things that are not ethics. The second states that ethics potentially (and certainly in the minds of some people) includes much that is not mathematical.

I want to bring up a third reason, which is perhaps more profound than the other two. It is this: what distinguishes mathematics as a field is its commitment to logical non-contradiction, which means that it is able to baldly claim when goals are impossible to achieve. Acknowledging tradeoffs is part of what mathematicians and scientists do.

Acknowledging tradeoffs is not something that everybody else is trained to do, and indeed many philosophers are apparently motivated by the ability to surpass limitations. Alain Badiou, who is one of the living philosophers that I find to be most inspiring and correct, maintains that mathematics is the science of pure Being, of all possibilities. Reality is just a subset of these possibilities, and much of Badiou’s philosophy is dedicated to the Event, those points where the logical constraints of our current worldview are defeated and new possibilities open up.

This is inspirational work, but it contradicts what many mathematicians do in fact, which is identify impossibility. Science forecloses possibilities where a poet may see infinite potential.

Other ethicists, especially existentialist ethicists, see the limitation and expansion of possibility, especially in the possibility of personal accomplishment, as fundamental to ethics. This work is inspiring precisely because it states so clearly what it is we hope for and aspire to.

What mathematical ethics often tells us is that these hopes are fruitless. The desiderata cannot be met. Somebody will always get the short stick. Engineers, unable to triumph against mathematics, will always disappoint somebody, and whoever that somebody is can always argue that the engineers have neglected ethics, and demand justice.

There may be good reasons for making everybody believe that they are qualified to comment on the subject of ethics. Indeed, in a sense everybody is required to act ethically even when they are not ethicists. But the preceding argument suggests that perhaps mathematical education is an essential part of ethical education, because without it one can have unrealistic expectations of the ethics of others. This is a scary thought because mathematics education is so often so poor. We live today, as we have lived before, in a culture with great mathophobia (Papert, 1980) and this mathophobia is perpetuated by those who try to equate mathematical training with immorality.

References

Arendt, Hannah. The Human Condition. Doubleday, 1959.

Arrow, Kenneth J. “A difficulty in the concept of social welfare.” Journal of political economy 58.4 (1950): 328-346.

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Dwork, Cynthia. “Differential privacy: A survey of results.” International Conference on Theory and Applications of Models of Computation. Springer, Berlin, Heidelberg, 2008.

Dwork, Cynthia, et al. “Fairness through awareness.” Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ACM, 2012.

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

Papert, Seymour. Mindstorms: Children, computers, and powerful ideas. Basic Books, Inc., 1980.

Pondering “use privacy”

I’ve been working carefully with Datta et al.’s “Use Privacy” work (link), which makes a clear case for how a programmatic, data-driven model may be statically analyzed for its use of a proxy of a protected variable, and repaired.

Their system has a number of interesting characteristics, among which are:

  1. The use of a normative oracle for determining which proxy uses are prohibited.
  2. A proof that there is no coherent definition of proxy use which has all of a set of very reasonable properties defined over function semantics.

Given (2), they continue with a compelling study of how a syntactic definition of proxy use, one based on the explicit contents of a function, can support a system of detecting and repairing proxies.

My question is to what extent the sources of normative restriction on proxies (those characterized by the oracle in (1)) are likely to favor syntactic proxy use restrictions, as opposed to semantic ones. Since ethicists and lawyers, who are the purported sources of these normative restrictions, are likely to consider any technical system a black box for the purpose of their evaluation, they will naturally be concerned with program semantics. It may be comforting for those responsible for a technical program to be able to, in a sense, avoid liability by assuring that their programs are not using a restricted proxy. But, truly, so what? Since these syntactic considerations do not make any semantic guarantees, will they really plausibly address normative concerns?
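To fix ideas, here is a deliberately crude sketch of the syntactic/semantic contrast (my own toy, not Datta et al.’s formalism or their repair procedure; the score function and feature names are hypothetical): a syntactic check asks whether the program text mentions a feature at all, while a semantic check asks whether ablating that feature can change any output.

```python
# Crude contrast between syntactic and semantic notions of feature use.
import inspect

def score(applicant):
    # Hypothetical model in which zip_code_risk may act as a proxy feature.
    return 0.6 * applicant["income"] + 0.4 * applicant["zip_code_risk"]

def uses_feature_syntactically(fn, feature_name):
    """Does the program text mention the feature? (Purely syntactic.)"""
    return feature_name in inspect.getsource(fn)

def uses_feature_semantically(fn, inputs, feature_name, neutral_value=0.0):
    """Does replacing the feature with a neutral value change any output?"""
    return any(fn({**x, feature_name: neutral_value}) != fn(x) for x in inputs)

applicants = [{"income": 1.0, "zip_code_risk": 0.2},
              {"income": 0.5, "zip_code_risk": 0.9}]
print(uses_feature_syntactically(score, "zip_code_risk"))             # True
print(uses_feature_semantically(score, applicants, "zip_code_risk"))  # True
```

Even in this crude form the gap is visible: a program could mention a feature that never affects its output, or reconstruct the feature from other inputs without ever naming it, which is why a purely syntactic assurance may not answer the black-box concerns of lawyers and ethicists.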

A striking result from their analysis, which has perhaps broader implications, is the incoherence of a semantic notion of proxy use. Perhaps sadly, but also substantively, this result shows that a certain plausible normative requirement is impossible for a system to fulfill in general. Only restricted conditions make such a thing possible. This seems to be part of a pattern in these rigorous computer science evaluations of ethical problems; see also Kleinberg et al. (2016) on how it is impossible to meet several plausible definitions of “fairness” in risk-assessment scores across social groups except under certain conditions.

The conclusion for me is that what this nobly motivated computer science work reveals is that what people are actually interested in normatively is not the functioning of any particular computational system. They are rather interested in social conditions more broadly, which are rarely aligned with our normative ideals. Computational systems, by making realities harshly concrete, are disappointing, but it’s a mistake to make that a disappointment with the computing systems themselves. Rather, there are mathematical facts that are disappointing regardless of what sorts of systems mediate our social world.

This is not merely a philosophical consideration or sociological observation. Since the interpretation of laws is part of the process of informing normative expectations (as in a normative oracle), it is an interesting and perhaps open question how lawyers and judges, in their task of legal interpretation, will make use of the mathematical conclusions about normative tradeoffs being offered up by computer scientists.

References

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).