Digifesto

Contextual Integrity as a field

A small gathering of nearby researchers (and one important call-in) working on Contextual Integrity met at Princeton’s CITP today. It was a welcome opportunity to share what we’ve been working on and make plans for the future.

The contributions covered a really wide range: systems engineering for privacy policy enforcement, empirical survey work testing contextualized privacy expectations, a proposal for a participatory design approach to identifying privacy norms in marginalized communities, a qualitative study on how children understand privacy, and an analysis of the privacy implications of the Cybersecurity Information Sharing Act, among other work.

What was great is that everybody was on the same page about what we were after: getting a better understanding of what privacy really is, so that we can design policies, educational tools, and technologies that preserve it. For one reason or another, the people in the room had been attracted to Contextual Integrity. Many of us have reservations about the theory in one way or another, but we all see its value and potential.

One note of consensus was that we should try to organize a workshop dedicated specifically to Contextual Integrity, widening what we accomplished today by bringing in more researchers. Today’s meeting was a convenience sample, leaving out a lot of important perspectives.

Another interesting thing that happened today was a general acknowledgment that Contextual Integrity is not a static framework. As a theory, it is subject to change as scholars critique and contribute to it through their empirical and theoretical work. A few of us are excited about the possibility of a Contextual Integrity 2.0, extending the original theory to fill theoretical gaps that have been identified in it.

I’d articulate the aspiration of the meeting today as being about letting Contextual Integrity grow from being a framework into a field–a community of people working together to cultivate something, in this case, a kind of knowledge.

Appearance, deed, and thing: meta-theory of the politics of technology

Flammarion engraving

Much is written today about the political and social consequences of technology. This writing often maintains that this inquiry into politics and society is distinct from the scientific understanding that informs the technology itself. This essay argues that this distinction is an error. Truly, there is only one science of technology and its politics.

Appearance, deed, and thing

There are worthwhile distinctions made between how our experience of the world feels to us directly (appearance), how we can best act strategically in the world (deed), and how the world is “in itself” or, in a sense, despite ourselves (individually) (thing).

Appearance

The world as we experience it has been given the name “phenomenon” (late Latin from Greek phainomenon ‘thing appearing to view’), and so “phenomenology” is the study of what we colloquially call today our “lived experience”. Some anthropological methods are a kind of social phenomenology, and some scholars will deny that there is anything beyond phenomenology. Those that claim to have a more effective strategy or truer picture of the world may have rhetorical power–power that works on the lived experience of more oppressed people–because their claims have not been adequately debunked and shown to be situated and relativized. The solution to social and political problems, to these scholars, is more phenomenology.*

Deed

There are others that see things differently. A perhaps more normal attitude is that the outcomes of one’s actions are more important than how the world feels. Things can feel one way now and another way tomorrow; does it much matter? If one holds some beliefs that don’t work when practically applied, one can correct oneself. The name for this philosophical attitude is pragmatism (from Greek pragma, ‘deed’). There are many people, including some scholars, who find this approach entirely sufficient. The solution to social and political problems is more pragmatism. Sometimes this involves writing off impractical ideas and the people who hold them as either useless or mere pawns. It is their loss.

Thing

There are others that see things still differently. A perhaps diminishing portion of the population holds theories of how the world works that transcend both their own lived experience and individual practical applications. Scientific theories about the physical nature of the universe, though tested pragmatically and through the phenomena apparent to the scientists, are based in a higher claim about their value. As Bourdieu (2004) argues, the whole field of science depends on the accepted condition that scientists fairly contend for a “monopoly on the arbitration of the real”. Scientific theories are tested through contest, with a deliberate effort by all parties to prove their theory to be the greatest. These conditions of contest hold science to a more demanding standard than pragmatism, as results of applying a pragmatic attitude will depend on the local conditions of action. Scientific theories are, in principle, accountable to the real (from late Latin realis, from Latin res ‘thing’); these scientists may be called ‘realists’ in general, though there are many flavors of realism as, appropriately, theories of what is real and how to discover reality have come and gone (see post-positivism and critical realism, for example).

Realists may or may not be concerned with social and political problems. Realists may ask: What is a social problem? What do solutions to these problems look like?

By this account, these three foci and their corresponding methodological approaches are not equivalent to each other. Phenomenology concerns itself with documenting the multiplicity of appearances. Pragmatism introduces something over and above this: a sorting or evaluation of appearances based on some goals or desired outcomes. Realism introduces something over and above pragmatism: an attempt at objectivity based on the contest of different theories across a wide range of goals. ‘Disinterested’ inquiry, or equivalently inquiry that is maximally inclusive of all interests, further refines the evaluation of which appearances are valid.

If this account sounds disparaging of phenomenology as merely a part of higher and more advanced forms of inquiry, that is truly how it is intended. However, it is equally notable that to live up to its own standard of disinterestedness, realism must include phenomenology fully within itself.

Nature and technology

It would be delightful if we could live forever in a world of appearances that takes the shape that we desire of it when we reason about it critically enough. But this is not how any but the luckiest live.

Rather, the world acts on us in ways that we do not anticipate. Things appear to us unbidden; they are born, and sometimes this is called ‘nature’ (from Latin natura ‘birth, nature, quality,’ from nat- ‘born’). The first snow of Winter comes as a surprise after a long warm Autumn. We did nothing to summon it; it was always there. For thousands of years humanity has worked to master nature through pragmatic deeds and realistic science. Now, very little of nature has been untouched by human hands. The stars are still things in themselves. Our planetary world is one we have made.

“Technology” (from Greek tekhnologia ‘systematic treatment,’ from tekhnē ‘art, craft’) is what we call those things that are made by skillful human deed. A glance out the window into a city, or at the device one uses to read this blog post, is all one needs to confirm that the world is full of technology. Sitting in the interior of an apartment now, literally everything in my field of vision, except perhaps my own two hands and the potted plant, is a technological artifact.

Science and technology studies: political appearances

According to one narrative, Winner (1980) famously asked the galling question “Do artifacts have politics?” and spawned a field of study** that questions the social consequences of technology. Science and Technology Studies (STS) is, purportedly, this field.
The insight this field claims as its own is that technology has social impacts that are politically interesting, that the specifics of a technology’s design determine these impacts, and that the social context of the design therefore influences the consequences of the technology. At its most ambitious, STS attempts to take the specifics of the technology out of the explanatory loop, showing instead how politics drives design and implementation to further political ends.

Anthropological methods are popular among STS scholars, who often commit themselves to revealing appearances that demonstrate the political origins and impacts of technology. The STS researcher might ask, rhetorically, “Did you know that this interactive console is designed and used for surveillance?”

We can nod sagely at these observations. Indeed, things appear to people in myriad ways, and critical analysis of those appearances does expose that there is a multiplicity of ways of looking at things. But what does one do with this picture?

The pragmatic turn back to realism

When one starts to ask the pragmatic question “What is to be done?”, one leaves the domain of mere appearances and begins to question the consequences of one’s deeds. This leads one to take actions and observe the unanticipated results. Suddenly, one is engaging in experimentation, and new kinds of knowledge are necessary. One needs to study organizational theory to understand the role of technology within a firm, and economics to understand how it interacts with the economy. One quickly leaves the field of study known as “science and technology studies” as soon as one begins to consider one’s practical effects.

Worse (!), the pragmatist quickly discovers that discovering the impact of one’s deeds requires an analysis of probabilities and the difficult techniques of sampling data and correcting for bias. These techniques have been proven through the vigorous contest of the realists, and the pragmatist discovers that many tools–technologies–have been invented and provisioned for them to make it easier to use these robust strategies. The pragmatist begins to use, without understanding them, all the fruits of science. Their successes are alienated from their narrow lived experience, which is not enough to account for the miracles the world–one others have invented for them–performs for them every day.

The pragmatist must draw the following conclusions. The world is full of technology; it is constituted by it. The world is also full of politics. Indeed, the world is both politics and technology; politics is a technology; technology is a form of politics. The world that must be mastered, for pragmatic purposes, is this politico-technical*** world.

What is technical about the world is that it is a world of things created through deed. These things manifest themselves in appearances in myriad and often unpredictable ways.

What is political about the world is that it is a contest of interests. To the most naive student, it may be a shock that technology is part of this contest of interests, but truly this is the most extreme naivete. What adolescent is not exposed to some form of arms race, whether it be in sports equipment, cosmetics, transportation, recreation, etc.? What adult does not encounter the reality of technology’s role in their own business or home, and the choice of what to procure and use?

The pragmatist must be struck by the sheer obviousness of the observation that artifacts “have” politics, though they must also acknowledge that “things” are different from the deeds that create them and the appearances they create. There are, after all, many mistakes in design. The effects of technology may as often be due to incompetence as they are to political intent. And to determine the difference, one must contest the designer of the technology on their own terms, in the engineering discourse that has attempted to prove which qualities of a thing survive scrutiny across all interests. The pragmatist engaging the politico-technical world has to ask: “What is real?”

The real thing

“What is real?” This is the scientific question. It has been asked again and again for thousands of years for reasons not unlike those traced in this essay. The scientific struggle is the political struggle for mastery over our own politico-technical world, over the reality that is being constantly reinvented as things through human deeds.

There are no short cuts to answering this question. There are only many ways to cop out. These steps take one backward into striving for one’s local interest or, further, into mere appearance, with its potential for indulgence and delusion. This is the darkness of ignorance. Forward, far ahead, is a horizon, an opening, a strange new light.

* This narrow view of the ‘privilege of subjectivity’ is perhaps a cause of recent confusion over free speech on college campuses. Realism, as proposed in this essay, is a possible alternative to that.

** It has been claimed that this field of study does not exist, much to the annoyance of those working within it.

*** I believe this term is no uglier than the now commonly used “sociotechnical”.

References

Bourdieu, Pierre. Science of science and reflexivity. Polity, 2004.

Winner, Langdon. “Do artifacts have politics?.” Daedalus (1980): 121-136.

managerialism, continued

I’ve begun preliminary skimmings of Enteman’s Managerialism. It is a dense work of analytic philosophy, thick with argument. Sporadic summaries may not do it justice. That said, the principle of this blog is that the bar for ‘publication’ is low.

According to its introduction, Enteman’s Managerialism is written by a philosophy professor (Willard Enteman) who kept finding that the “great thinkers”–Adam Smith, Karl Marx–and the theories espoused in their writing kept getting debunked by his students. Contemporary examples showed that, contrary to conventional wisdom, the United States was not a capitalist country whose only alternative was socialism. In his observation, the United States in 1993 was neither strictly speaking capitalist, nor was it socialist. There was a theoretical gap that needed to be filled.

One of the concepts reintroduced by Enteman is Robert Dahl’s concept of polyarchy, or “rule by many”. A polyarchy is neither a dictatorship nor a democracy, but rather is a form of government in which many different people with different interests–but then again probably not everybody–are in charge. It represents some necessary but probably insufficient conditions for democracy.

This view of power seems evidently correct in most political units within the United States. Now I am wondering if I should be reading Dahl instead of Enteman. It appears that Dahl was mainly offering this political theory in contrast to a view that posited that political power was mainly held by a single dominant elite. In a polyarchy, power is held by many different kinds of elites in contest with each other. At its democratic best, these elites are responsive to citizen interests in a pluralistic way, and this works out despite the inability of most people to participate in government.

I certainly recommend the Wikipedia articles linked above. I find I’m sympathetic to this view, having come around to something like it myself but through the perhaps unlikely path of Bourdieu.

This still limits the discussion of political power in terms of the powers of particular people. Managerialism, if I’m reading it right, makes the case that individual power is not atomic but is due to organizational power. This makes sense; we can look at powerful individuals having an influence on government, but a more useful lens could look to powerful companies and civil society organizations, because these shape the incentives of the powerful people within them.

I should make a shift I’ve made just now explicit. When we talk about democracy, we are often talking about a formal government, like a sovereign nation or municipal government. But when we talk about powerful organizations in society, we are no longer just talking about elected officials and their appointees. We are talking about several different classes of organizations–businesses, civil society organizations, and governments among them–interacting with each other.

It may be that that’s all there is to it. Maybe Capitalism is an ideology that argues for more power to businesses, Socialism is an ideology that argues for more power to formal government, and Democracy is an ideology that argues for more power to civil society institutions. These are zero-sum ideologies. Managerialism would be a theory that acknowledges the tussle between these sectors at the organizational level, as opposed to at the atomic individual level.

The reason why this is a relevant perspective to engage with today is that there has probably in recent years been a transfer of power (I might say ‘control’) from government to corporations–especially Big Tech (Google, Amazon, Facebook, Apple). Frank Pasquale makes the argument for this in a recent piece. He writes and speaks with a particular policy agenda that is far better researched than this blog post. But a good deal of the work is framed around the surprise that ‘governance’ might shift to a private company in the first place. This is a framing that will always be striking to those who are invested in the politics of the state; the very word “govern” is unmarkedly used for formal government and then surprising when used to refer to something else.

Managerialism, then, may be a way of pointing to an option where more power is held by non-state actors. Crucially, though, managerialism is not the same thing as neoliberalism, because neoliberalism is based on laissez-faire market ideology, and contemporary information infrastructure oligopolies look nothing like laissez-faire markets! Calling the transfer of power from government to corporation today neoliberalism is quite anachronistic and misleading, really!

Perhaps managerialism, like polyarchy, is a descriptive term of a set of political conditions that does not represent an ideal, but a reality with potential to become an ideal. In that case, it’s worth investigating managerialism more carefully and determining what it is and isn’t, and why it is on the rise.

beginning Enteman’s Managerialism

I’ve been writing about managerialism without having done my homework.

Today I got a new book in the mail, Willard Enteman’s Managerialism: The Emergence of a New Ideology, a work of analytic political philosophy that came out in 1993. The gist of the book is that none of the dominant world ideologies of the time–capitalism, socialism, and democracy–actually describe the world as it functions.

Enter Enteman’s managerialism, which considers a society composed of organizations, not individuals, and social decisions as a consequence of the decisions of organizational managers.

It’s striking that this political theory has been around for so long, though it is perhaps more relevant today because of large digital platforms.

How to promote employees using machine learning without societal bias

Though it may at first read as being callous, a managerialist stance on inequality in statistical classification can help untangle some of the rhetoric around this tricky issue.

Consider the example that’s been in the news lately:

Suppose a company begins to use an algorithm to make decisions about which employees to promote. It uses a classifier trained on past data about who has been promoted. Because of societal bias, women are systematically under-promoted; this is reflected in the data set. The algorithm, naively trained on the historical data, reproduces the historical bias.
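As a toy numeric illustration of this scenario (the numbers and group labels are invented; this is not any real company’s data), a model naively fit to biased promotion history simply learns and reproduces the historical disparity:

```python
# Hypothetical illustration: a naive model trained only on past promotion
# outcomes reproduces the historical bias exactly.
# All names and numbers here are invented for the example.

# Historical records: (gender, promoted) -- women under-promoted by assumption.
history = ([("m", 1)] * 60 + [("m", 0)] * 40 +
           [("f", 1)] * 30 + [("f", 0)] * 70)

def promotion_rate(records, gender):
    """Fraction of a gender group that was historically promoted."""
    group = [promoted for g, promoted in records if g == gender]
    return sum(group) / len(group)

# A naive "model" fit to the history just estimates P(promoted | gender).
model = {g: promotion_rate(history, g) for g in ("m", "f")}

print(model["m"])  # 0.6 -- men predicted promoted at the historical 60% rate
print(model["f"])  # 0.3 -- women at 30%: the bias is reproduced, not corrected
```

The point of the sketch is only that nothing in the training procedure distinguishes a legitimate signal from a historical injustice; the disparity in the data becomes the disparity in the predictions.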

This example describes a bad situation. It is bad from a social justice perspective; by assumption, it would be better if men and women had equal opportunity in this work place.

It is also bad from a managerialist perspective. Why? Because the managerial point of adopting an algorithm is to remove irrelevancies–such as societal bias–from the promotion decision; if the algorithm does not correct for them, it makes no managerial sense to change business practices over to using it. The whole point of using an algorithm is to improve on human decision-making. This is a poor match of an algorithm to a problem.

Unfortunately, what makes this example compelling is precisely what makes it a bad example of using an algorithm in this context. The only variables discussed in the example are the socially salient ones thick with political implications: gender, and promotion. What are more universal concerns than gender relations and socioeconomic status?!

But from a managerialist perspective, promotions should be issued based on a number of factors not mentioned in the example. What factors are these? That’s a great and difficult question. Promotions can reward hard work and loyalty. They can also be issued to those who demonstrate capacity for leadership, which can be a function of how well they get along with other members of the organization. There may be a number of features that predict these desirable qualities, most of which will have to do with working conditions within the company as opposed to qualities inherent in the employee (such as their past education, or their gender).

If one were to start to use machine learning intelligently to solve this problem, then one would go about solving it in a way entirely unlike the procedure in the problematic example. One would rather draw on soundly sourced domain expertise to develop a model of the relationship between relevant, work-related factors. For many of the key parts of the model, such as general relationships between personality type, leadership style, and cooperation with colleagues, one would look outside the organization for gold standard data that was sampled responsibly.

Once the organization has this model, then it can apply it to its own employees. For this to work, employees would need to provide significant detail about themselves, and the company would need to provide contextual information about the conditions under which employees work, as these may be confounding factors.

Part of the merit of building and fitting such a model would be that, because it is based on a lot of new and objective scientific considerations, it would produce novel results in recommending promotions. Again, if the algorithm merely reproduced past results, it would not be worth the investment in building the model.

When the algorithm is introduced, it ideally is used in a way that maintains traditional promotion processes in parallel so that the two kinds of results can be compared. Evaluation of the algorithm’s performance, relative to traditional methods, is a long, arduous process full of potential insights. Using the algorithm as an intervention at first allows the company to develop a causal understanding of its impact. Insights from the evaluation can be factored back into the algorithm, improving the latter.
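A minimal sketch of the parallel-evaluation idea, with hypothetical employee IDs and recommendations (everything here is invented for the example): run the model alongside the traditional process and surface each disagreement for review.

```python
# Hypothetical parallel evaluation: compare traditional and algorithmic
# promotion recommendations case by case. IDs and values are invented.

traditional = {"e1": True, "e2": False, "e3": True, "e4": False}
algorithmic = {"e1": True, "e2": True, "e3": True, "e4": False}

# Each disagreement is a case worth qualitative review by management.
disagreements = [e for e in traditional if traditional[e] != algorithmic[e]]
agreement = 1 - len(disagreements) / len(traditional)

print(disagreements)  # ['e2']
print(agreement)      # 0.75
```

The disagreement list, not the agreement rate, is where the insight lives: each such case is an opportunity to ask whether the model or the traditional process was making the better call, and why.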

In all these cases, the company must keep its business goals firmly in mind. If they do this, then the rest of the logic of their method falls out of data science best practices, which are grounded in mathematical principles of statistics. While the political implications of poorly managed machine learning are troubling, effective management of machine learning, which takes the precautions necessary to develop objectivity, is ultimately a corrective to social bias. This is a case where sound science, managerialist motives, and social justice are aligned.

Enlightening economics reads

Nils Gilman argues that the future of the world is wide open because neoliberalism has been discredited. So what’s the future going to look like?

Given that neoliberalism is for the most part an economic vision, and that competing theories have often also been economic visions (when they have not been political or theological theories), a compelling futurist approach is to look out for new thinking about economics. The three articles below have recently taught me something new about economics:

Dani Rodrik. “Rescuing Economics from Neoliberalism”, Boston Review. (link)

This article makes the case that the association frequently made between economics as a social science and neoliberalism as an ideology is overdrawn. Of course, probably the majority of economists are not neoliberals. Rodrik is defending a view of economics that keeps its options open. I think he overstates the point with the claim, “Good economists know that the correct answer to any question in economics is: it depends.” This is simply incorrect, if questions have their assumptions bracketed well enough. But since Rodrik’s rhetorical point appears to be that economists should not be dogmatists, he can be forgiven this overstatement.

As an aside, there is something compelling but also dangerous to the view that a social science can provide at best narrowly tailored insights into specific phenomena. These kinds of ‘sciences’ wind up being unaccountable, because the specificity of particular events prevent the repeated testing of the theories that are used to explain them. There is a risk of too much nuance, which is akin to the statistical concept of overfitting.

A different kind of article is:

Seth Ackerman. “The Disruptors” Jacobin. (link)

An interview with J.W. Mason in the smart socialist magazine, Jacobin, that had the honor of a shout-out from Matt Levine’s popular “Money Stuff” Bloomberg column. One of the interesting topics it raises is whether or not mutual funds, in which many people invest in a fund that then owns a wide portfolio of stocks, are in a sense socialist and anti-competitive, because shareholders no longer have an interest in seeing competition in the market.

This is original thinking, and the endorsement by Levine is an indication that it’s not a crazy thing to consider even for the seasoned practical economists in the financial sector. My hunch at this point in life is that if you want to understand the economy, you have to understand finance, because they are the ones whose job it is to profit from their understanding of the economy. As a corollary, I don’t really understand the economy because I don’t have a great grasp of the financial sector. Maybe one day that will change.

Speaking of expertise being enhanced by having ‘skin in the game’, the third article is:

Nassim Nicholas Taleb. “Inequality and Skin in the Game,” Medium. (link)

I haven’t read a lot of Taleb, though I acknowledge he’s a noteworthy and important thinker. This article confirmed for me the reputation of his style. It was also a strikingly fresh look at the economics of inequality, capturing a few of the important things mainstream opinion overlooks about inequality, namely:

  • Comparing people at different life stages is a mistake when analyzing inequality of a population.
  • A lot of the cause of inequality is randomness (as opposed to fixed population categories), and this inequality is inevitable.

He’s got a theory of what kinds of inequality people resent versus what they tolerate, which is a fine theory. It would be nice to see some empirical validation of it. He writes about the relationship between ergodicity and inequality, which is interesting. He is scornful of Piketty and everyone who was impressed by Piketty’s argument, which comes off as unfriendly.

Much of what Taleb writes about the need to understand the economy through a richer understanding of probability and statistics strikes me as correct. If it is indeed the case that mainstream economics has not caught up to this, there is an opportunity here!

mathematical discourse vs. exit; blockchain applications

Continuing my effort to tie together the work on this blog into a single theory, I should address the theme of an old post that I’d forgotten about.

The post discusses the discourse theory of law, attributed to the later, matured Habermas. According to it, the law serves as a transmission belt between legitimate norms established by civil society and a system of power, money, and technology. When it is efficacious and legitimate, society prospers. The blog post toys with the idea of normatively aligned algorithmic law established in a similar way: through the norms established by civil society.

I wrote about this in 2014 and I’m surprised to find myself revisiting these themes in my work today on privacy by design.

What this requires, however, is that civil society must be able to engage in mathematical discourse, or mathematized discussion of norms. In other words, there has to be an intersection of civil society and science for this to make sense. I’m reminded of how inspired I’ve felt by Nick Doty’s work on multistakeholderism in Internet standards as a model.

I am more skeptical of this model than I have been before, if only because in the short term I’m unsure if a critical mass of scientific talent can engage with civil society well enough to change the law. This is because scientific talent is a form of capital which has no clear incentive for self-regulation. Relatedly, I’m no longer as confident that civil society carries enough clout to change policy. I must consider other options.

The other option, besides voicing one’s concerns in civil society, is, of course, exit, in Hirschman’s sense. Theoretically an autonomous algorithmic law could be designed such that it encourages exit from other systems into itself. Or, more ecologically, competing autonomous (or decentralized, …) systems can be regulated by an exit mechanism. This is in fact what happens now with blockchain technology and cryptocurrency. Whenever there is a major failure of one of these currencies, there is a fork.

Recap

Sometimes traffic on this blog draws attention to an old post from years ago. This can be a reminder that I’ve been repeating myself, encountering the same themes over and over again. This is not necessarily a bad thing, because I hope to one day compile the ideas from this blog into a book. It’s nice to see what points keep resurfacing.

One of these points is that liberalism assumes equality, but this is challenged by society’s need for control structures, which creates inequality, which then undermines liberalism. This post calls in Charles Taylor (writing about Hegel!) to make the point. This post makes the point more succinctly. I’ve been drawing on Beniger for the ‘society needs control to manage its own integration’ thesis. I’ve pointed to the term managerialism as referring to an alternative to liberalism based on the acknowledgement of this need for control structures. Managerialism looks a lot like liberalism, it turns out, but it justifies things on different grounds and does not get so confused. As an alternative, more Bourdieusian view of the problem, I consider the relationship between capital, democracy, and oligarchy here. There are some useful names for what happens when managerialism goes wrong and people seem disconnected from each other–anomie–or from the control structures–alienation.

A related point I’ve made repeatedly is the tension between procedural legitimacy and getting people the substantive results that they want. That post about Hegel goes into this. But it comes up again in very recent work on antidiscrimination law and machine learning. What this amounts to is that attempts to come up with a fair, legitimate procedure are going to divide up the “pie” of resources, or be perceived to divide up the pie of resources, somehow, and people are going to be upset about it, however the pie is sliced.

A related theme that comes up frequently is mathematics. My contention is that effective control is a technical accomplishment that is mathematically optimized and constrained. There are mathematical results that reveal necessary trade-offs between values. Data science has been misunderstood as positivism when in fact it is a means of power. Technical knowledge and technology are forms of capital (Bourdieu again). Perhaps precisely because it is a rare form of capital, science is politically distrusted.

To put it succinctly: lack of mathematics education, whether due to lack of opportunity or to mathophobia, leads to alienation and anomie in an economy of control. This is partly reflected in the chaotic disciplinarity of the social sciences, especially as they react to computational social science, which sits at the intersection of social science, statistics, and computer science.

Lest this all seem like an argument for the mathematical certitude of totalitarianism, I have elsewhere considered and rejected this possibility of ‘instrumentality run amok’. I’ve summarized these arguments here, though this appears to have left a number of people unconvinced. I’ve argued this further, and think there’s more to this story (a formalization of Scott’s arguments from Seeing Like a State, perhaps), but I must admit I don’t have a convincing solution to the “control problem” yet. However, it must be noted that the answer to the control problem is an empirical or scientific prediction, not a political inclination. Whether or not it is the most interesting or important question regarding technological control has been debated to a stalemate, as far as I can tell.

As I don’t believe singleton control is a likely or interesting scenario, I’m more interested in practical ways of offering legitimacy or resistance to control structures. I used to think the “right” political solution was a kind of “hacker class consciousness”; I don’t believe this any more. However, I still think there’s a lot to the idea of recursive publics as actually existing alternative power structures. Platform coops are interesting for the same reason.

All this leads me to admit my interest in the disruptive technology du jour, the blockchain.

Values in design and mathematical impossibility

Under pressure from the public, and no doubt with sincere interest in the topic, computer scientists have taken up the difficult task of translating commonly held values into the mathematical forms that can be used for technical design. Commonly, what these researchers discover is some form of mathematical impossibility of achieving a number of desirable goals at the same time. This work has demonstrated the impossibility of having a classifier that is fair with respect to a social category without data about that very category (Dwork et al., 2012); of having a fair classifier that is both statistically well calibrated for the prediction of properties of persons and equalizes the false positive and false negative rates across partitions of that population (Kleinberg et al., 2016); of preserving the privacy of individuals after an arbitrary number of queries to a database, however obscured (Dwork, 2008); and of a coherent notion of proxy variable use in privacy and fairness applications that is based on program semantics (as opposed to syntax) (Datta et al., 2017).
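The Kleinberg et al. tradeoff can be made concrete with a small worked example (the numbers here are hypothetical, chosen only for illustration). A score can be perfectly calibrated within each group, yet if the groups have different base rates, thresholding that score yields unequal false positive and false negative rates:

```python
# Sketch of the Kleinberg et al. (2016) tradeoff: a within-group
# calibrated score plus unequal base rates implies unequal error rates.
# All numbers are hypothetical.

def error_rates(buckets, threshold):
    """buckets: list of (population_weight, score) pairs, where score is
    the true positive rate within that bucket -- so the score is
    calibrated by construction. Returns (FPR, FNR) for a classifier
    that predicts positive when score >= threshold."""
    fp = fn = pos = neg = 0.0
    for weight, score in buckets:
        positives = weight * score
        negatives = weight * (1 - score)
        pos += positives
        neg += negatives
        if score >= threshold:   # predicted positive: negatives become FPs
            fp += negatives
        else:                    # predicted negative: positives become FNs
            fn += positives
    return fp / neg, fn / pos

# Group A has base rate 0.25, group B has base rate 0.6.
group_a = [(0.5, 0.1), (0.5, 0.4)]
group_b = [(0.5, 0.4), (0.5, 0.8)]

fpr_a, fnr_a = error_rates(group_a, threshold=0.5)
fpr_b, fnr_b = error_rates(group_b, threshold=0.5)
print(fpr_a, fnr_a)  # approximately 0.0 and 1.0
print(fpr_b, fnr_b)  # approximately 0.25 and 0.33
```

Both groups see a calibrated score, yet the error rates diverge sharply; the theorem says this is unavoidable whenever base rates differ and prediction is imperfect.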

These are important results. An important thing about them is that they transcend the narrow discipline in which they originated. As mathematical theorems, they will be true whether or not they are implemented on machines or in human behavior. Therefore, these theorems have a role comparable to other core mathematical theorems in social science, such as Arrow’s Impossibility Theorem (Arrow, 1950), which shows that no voting system for determining social welfare can satisfy a set of reasonable desiderata.
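The differential privacy result cited above can likewise be given a back-of-the-envelope form (hypothetical numbers, and assuming only basic sequential composition of the Laplace mechanism): under a fixed total privacy budget, each additional query forces more noise into every answer, so answers eventually become useless.

```python
# Sketch: with a fixed total privacy budget epsilon, splitting it across
# k queries (basic composition) makes the required Laplace noise scale
# grow linearly in k. Parameters here are illustrative, not prescriptive.

def laplace_scale(sensitivity, epsilon_total, num_queries):
    # Each query gets an equal share of the budget; the Laplace
    # mechanism's noise scale is sensitivity / per-query epsilon.
    per_query_epsilon = epsilon_total / num_queries
    return sensitivity / per_query_epsilon

for k in (1, 10, 100):
    print(k, laplace_scale(sensitivity=1.0, epsilon_total=1.0, num_queries=k))
# noise scale grows as 1.0, 10.0, 100.0
```

This is the quantitative shadow of the qualitative claim: privacy against an arbitrary number of queries cannot be had for free.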

There can be no question of the significance of this kind of work. It was significant a hundred years ago. It is perhaps of even more immediate, practical importance when so much public infrastructure is computational. For what computation is is automation of mathematics, full stop.

There are some scholars, even some ethicists, for whom this is an unwelcome idea. I have been recently told by one ethics professor that to try to mathematize core concepts in ethics is to commit a “category mistake”. This is refuted by the clearly productive attempts to do this, some of which I’ve cited above. This belief that scientists and mathematicians are on a different plane than ethicists is quite old: Hannah Arendt argued that scientists should not be trusted because their mathematical language prevented them from engaging in normal political and ethical discourse (Arendt, 1959). But once again, this recent literature (as well as much older literature in such fields as theoretical economics) demonstrates that this view is incorrect.

There are many possible explanations for the persistence of the view that mathematics and the hard sciences do not concern themselves with ethics, are somehow lacking in ethical education, or that engineers require non-technical people to tell them how to engineer things more ethically.

One reason is that the sciences are much broader in scope than the ethical results mentioned here. It is indeed possible to get a specialist’s education in a technical field without much ethical training, even in the mathematical ethics results mentioned above.

Another reason is that whereas understanding the mathematical tradeoffs inherent in certain kinds of design is an important part of ethics, it can be argued by others that what’s most important about ethics is some substantive commitment that cannot be mathematically defended. For example, suppose half the population believes that it is most ethical for members of the other half to treat them with special dignity and consideration, at the expense of the other half. It may be difficult to arrive at this conclusion from mathematics alone, but this group may advocate for special treatment out of ethical consideration nonetheless.

These two reasons are similar. The first states that mathematics includes many things that are not ethics. The second states that ethics potentially (and certainly in the minds of some people) includes much that is not mathematical.

I want to bring up a third reason, which is perhaps more profound than the other two, which is this: what distinguishes mathematics as a field is its commitment to logical non-contradiction, which means that it is able to baldly claim when goals are impossible to achieve. Acknowledging tradeoffs is part of what mathematicians and scientists do.

Acknowledging tradeoffs is not something that everybody else is trained to do, and indeed many philosophers are apparently motivated by the ability to surpass limitations. Alain Badiou, who is one of the living philosophers that I find to be most inspiring and correct, maintains that mathematics is the science of pure Being, of all possibilities. Reality is just a subset of these possibilities, and much of Badiou’s philosophy is dedicated to the Event, those points where the logical constraints of our current worldview are defeated and new possibilities open up.

This is inspirational work, but it contradicts what many mathematicians do in fact, which is identify impossibility. Science forecloses possibilities where a poet may see infinite potential.

Other ethicists, especially existentialist ethicists, see the limitation and expansion of possibility, especially in the possibility of personal accomplishment, as fundamental to ethics. This work is inspiring precisely because it states so clearly what it is we hope for and aspire to.

What mathematical ethics often tells us is that these hopes are fruitless. The desiderata cannot be met. Somebody will always get the short stick. Engineers, unable to triumph against mathematics, will always disappoint somebody, and whoever that somebody is can always argue that the engineers have neglected ethics, and demand justice.

There may be good reasons for making everybody believe that they are qualified to comment on the subject of ethics. Indeed, in a sense everybody is required to act ethically even when they are not ethicists. But the preceding argument suggests that perhaps mathematical education is an essential part of ethical education, because without it one can have unrealistic expectations of the ethics of others. This is a scary thought because mathematics education is so often so poor. We live today, as we have lived before, in a culture with great mathophobia (Papert, 1980) and this mathophobia is perpetuated by those who try to equate mathematical training with immorality.

References

Arendt, Hannah. The Human Condition. Doubleday, 1959.

Arrow, Kenneth J. “A difficulty in the concept of social welfare.” Journal of Political Economy 58.4 (1950): 328-346.

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Dwork, Cynthia. “Differential privacy: A survey of results.” International Conference on Theory and Applications of Models of Computation. Springer, Berlin, Heidelberg, 2008.

Dwork, Cynthia, et al. “Fairness through awareness.” Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ACM, 2012.

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

Papert, Seymour. Mindstorms: Children, computers, and powerful ideas. Basic Books, Inc., 1980.

Pondering “use privacy”

I’ve been working carefully with Datta et al.’s “Use Privacy” work (link), which makes a clear case for how a programmatic, data-driven model may be statically analyzed for its use of a proxy of a protected variable, and repaired.

Their system has a number of interesting characteristics, among which are:

  1. The use of a normative oracle for determining which proxy uses are prohibited.
  2. A proof that there is no coherent definition of proxy use which has all of a set of very reasonable properties defined over function semantics.

Given (2), they continue with a compelling study of how a syntactic definition of proxy use, one based on the explicit contents of a function, can support a system of detecting and repairing proxies.
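The syntax/semantics distinction at the heart of their result can be illustrated with a toy pair of programs (the function and variable names here are my own invention, not from the paper). The two functions below compute the same output on every input, yet only the first syntactically mentions the protected input:

```python
# Toy illustration of syntactic vs. semantic proxy use.
# Hypothetical example; names are invented for illustration.

def score_syntactic(income, zip_code):
    # zip_code appears in the program text but has no effect on output.
    return income + 0 * zip_code

def score_clean(income, zip_code):
    return income

# The two programs are semantically identical, so any definition of
# proxy use based purely on input/output behavior cannot distinguish
# them -- while a syntactic definition flags only the first.
for income, z in [(50, 10001), (80, 94110)]:
    assert score_syntactic(income, z) == score_clean(income, z)
```

This is the intuition behind why a purely semantic definition runs into trouble, and why their detection and repair system operates on the explicit contents of programs instead.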

My question is to what extent the sources of normative restriction on proxies (those characterized by the oracle in (1)) are likely to favor syntactic proxy use restrictions, as opposed to semantic ones. Since ethicists and lawyers, who are the purported sources of these normative restrictions, are likely to consider any technical system a black box for the purpose of their evaluation, they will naturally be concerned with program semantics. It may be comforting for those responsible for a technical program to be able to, in a sense, avoid liability by assuring that their programs are not using a restricted proxy. But, truly, so what? Since these syntactic considerations do not make any semantic guarantees, will they really plausibly address normative concerns?

A striking result from their analysis which has perhaps broader implications is the incoherence of a semantic notion of proxy use. Perhaps sadly but also substantively, this result shows that a certain plausible normative requirement is impossible for a system to fulfill in general. Only restricted conditions make such a thing possible. This seems to be part of a pattern in these rigorous computer science evaluations of ethical problems; see also Kleinberg et al. (2016) on how it’s impossible to meet several plausible definitions of “fairness” in risk-assessment scores across social groups except under certain conditions.

The conclusion for me is that what this nobly motivated computer science work reveals is that what people are actually interested in normatively is not the functioning of any particular computational system. They are rather interested in social conditions more broadly, which are rarely aligned with our normative ideals. Computational systems, by making realities harshly concrete, are disappointing, but it’s a mistake to make that a disappointment with the computing systems themselves. Rather, there are mathematical facts that are disappointing regardless of what sorts of systems mediate our social world.

This is not merely a philosophical consideration or sociological observation. Since the interpretation of laws is part of the process of informing normative expectations (as in a normative oracle), it is an interesting and perhaps open question how lawyers and judges, in their task of legal interpretation, will make use of the mathematical conclusions about normative tradeoffs being offered up by computer scientists.

References

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).