Digifesto

Pondering “use privacy”

I’ve been working carefully with Datta et al.’s “Use Privacy” work (link), which makes a clear case for how a programmatic, data-driven model may be statically analyzed for its use of a proxy of a protected variable, and repaired.

Their system has a number of interesting characteristics, among which are:

  1. The use of a normative oracle for determining which proxy uses are prohibited.
  2. A proof that there is no coherent definition of proxy use that satisfies all of a set of very reasonable properties defined over function semantics.

Given (2), they continue with a compelling study of how a syntactic definition of proxy use, one based on the explicit contents of a function, can support a system of detecting and repairing proxies.
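To make the syntactic flavor of this concrete, here is a toy sketch in Python. It is not Datta et al.’s actual method (their definitions work over program syntax with formal association and influence thresholds, and they give a repair algorithm); every name, threshold, and data-generating assumption below is my own invention for illustration. The idea it illustrates is just that a proxy is an intermediate quantity that is both strongly associated with a protected attribute and influential on the program’s output.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data in which 'zip_code_score' is a near-proxy for the protected attribute.
n = 2000
protected = rng.integers(0, 2, n)                          # a protected class, coded 0/1
zip_code_score = 0.8 * protected + rng.normal(0, 0.3, n)   # strongly associated with it
income = rng.normal(1.0, 0.5, n)                           # unrelated to it

features = {"zip_code_score": zip_code_score, "income": income}

def decide(feats):
    # A toy scoring rule standing in for the machine-learnt program under analysis.
    return (0.7 * feats["zip_code_score"] + 0.3 * feats["income"]) > 0.6

def association(values, prot):
    # Crude association measure: absolute Pearson correlation.
    return abs(np.corrcoef(values, prot)[0, 1])

def influence(feats, name, trials=20):
    # How often decisions flip when this one intermediate value is scrambled.
    base = decide(feats)
    flips = []
    for _ in range(trials):
        perturbed = dict(feats)
        perturbed[name] = rng.permutation(feats[name])
        flips.append(np.mean(decide(perturbed) != base))
    return float(np.mean(flips))

def find_proxies(feats, prot, assoc_thresh=0.5, infl_thresh=0.1):
    # Flag intermediate quantities that are both associated with the protected
    # attribute and influential on the decision -- the two ingredients of proxy use.
    return [name for name in feats
            if association(feats[name], prot) > assoc_thresh
            and influence(feats, name) > infl_thresh]

print(find_proxies(features, protected))   # expected output: ['zip_code_score']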

My question is to what extent the sources of normative restriction on proxies (those characterized by the oracle in (1)) are likely to favor syntactic proxy use restrictions, as opposed to semantic ones. Since ethicists and lawyers, who are the purported sources of these normative restrictions, are likely to consider any technical system a black box for the purpose of their evaluation, they will naturally be concerned with program semantics. It may be comforting for those responsible for a technical program to be able to, in a sense, avoid liability by assuring that their programs are not using a restricted proxy. But, truly, so what? Since these syntactic considerations do not make any semantic guarantees, will they really plausibly address normative concerns?

A striking result from their analysis, one with perhaps broader implications, is the incoherence of a semantic notion of proxy use. Perhaps sadly but also substantively, this result shows that a certain plausible normative requirement is impossible for a system to fulfill in general; only restricted conditions make such a thing possible. This seems to be part of a pattern in these rigorous computer science treatments of ethical problems; see also Kleinberg et al. (2016) on how it’s impossible to meet several plausible definitions of “fairness” in risk-assessment scores across social groups except under certain conditions.

My conclusion is that this nobly motivated computer science work reveals that what people are actually interested in normatively is not the functioning of any particular computational system. They are rather interested in social conditions more broadly, which are rarely aligned with our normative ideals. Computational systems, by making realities harshly concrete, are disappointing, but it’s a mistake to turn that into a disappointment with the computing systems themselves. Rather, there are mathematical facts that are disappointing regardless of what sorts of systems mediate our social world.

This is not merely a philosophical consideration or sociological observation. Since the interpretation of laws is part of the process of informing normative expectations (as in a normative oracle), it is an interesting and perhaps open question how lawyers and judges, in their task of legal interpretation, will make use of the mathematical conclusions about normative tradeoffs being offered up by computer scientists.

References

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).


On achieving social equality

When evaluating a system, we have a choice of evaluating its internal functions–the inside view–or evaluating its effects situated in a larger context–the outside view.

Decision procedures for sorting people (whether they are embodied by people or performed in concert with mechanical devices–I don’t think this distinction matters here) are just such systems. If I understand correctly, the question of which principles animate antidiscrimination law hinges on this difference between the inside and outside view.

We can look at a decision-making process and evaluate whether, as a procedure, it achieves its goals of e.g. assigning credit scores without bias against certain groups. Even if we include the gathering of evidence or data within such a system, it can in principle be bounded and evaluated by its ability to perform its goals. We do seem to care about the difference between procedural discrimination and procedural nondiscrimination. For example, an overtly racist policy that ignores true talent and opportunity seems worse than a bureaucratic system that is indifferent to external inequality between groups, inequality which then gets reflected in decisions made according to other factors that are merely correlated with race.

The latter case has been criticized from the outside view. The criticism is captured by the phrase that “algorithms can reproduce existing biases”. The supposedly neutral algorithm (which can, again, be either human or machine) is not neutral in its impact because its considerations of, e.g., business interest are indifferent to the conditions outside it. The business is attracted to wealth and opportunity, which are held disproportionately by some part of the population, so the business is attracted to that population.

There is great wisdom in recognizing that institutions that are neutral in the inside view will often reproduce bias in the outside view. But it is incorrect to therefore conflate a neutral inside view with a biased inside view, even though their effects may under some circumstances be the same. When I say it is “incorrect”, I mean that they are in fact different because, for example, if the external conditions of a procedurally neutral institution change, then it will reflect those new conditions. A procedurally biased institution will not reflect those new conditions in the same way.

Empirically it is very hard to tell when an institution is being procedurally neutral, and indeed this is the crux of an enormous amount of political tension today. The first line of defense of an institution accused of bias is to claim that its procedural neutrality is merely reflecting environmental conditions outside of its control. This is unconvincing for many politically active people. It seems to me that it is now much more common for institutions to avoid this problem by explicitly declaring their bias. Rather than attempt the seemingly impossible task of defending their rigorous neutrality, it’s easier to declare where one stands on the issue of resource allocation globally and adjust one’s procedure accordingly.

I don’t think this is a good thing.

One consequence of evaluating all institutions based on their global, “systemic” impact as opposed to their procedural neutrality is that it hollows out the political center. The evidence is in: politics has become more and more polarized. This is inevitable if politics becomes so explicitly about maintaining or reallocating resources rather than about building neutrally legitimate institutions. When one party in Congress considers a tax bill that seems designed mainly to enrich its own constituencies at the expense of the other’s, things have gotten out of hand. The idea of a shared conception of ‘good government’ has been all but abandoned.

An alternative is a commitment to procedural neutrality in the inside view of institutions, or at least some institutions. The fact that there are many different institutions that may have different policies is quite relevant here. For while it is commonplace to say that a neutral institution will “reproduce existing biases”, “reproduction” is not a particularly helpful word here. Neither is “bias”. What we can say more precisely is that the operations of a procedurally neutral institution will not change the distribution of resources, even though that distribution is unequal.

But if we do not hold all institutions accountable for correcting the inequality of society, isn’t that the same thing as approving of the status quo, which is so unequal? A thousand times no.

First, there’s the problem that many institutions are not, currently, procedurally neutral. Procedural neutrality is a higher standard than what many institutions are currently held to. Consider what is widely known about human beings and their implicit biases. One good argument for transferring decision-making authority to machine learning algorithms, even standard ones not augmented for ‘fairness’, is that they will not have the same implicit, inside, biases as the humans that currently make these decisions.

Second, there’s the fact that responsibility for correcting social inequality can be taken on by some institutions that are dedicated to this task while others are procedurally neutral. For example, one can consistently believe in the importance of a progressive social safety net combined with procedurally neutral credit reporting. Society is complex and perhaps rightly has many different functioning parts; not all the parts have to reflect socially progressive values for the arc of history to bend towards justice.

Third, there is reason to believe that even if all institutions were procedurally neutral, there would eventually be social equality. This has to do with the mathematically well-established but often ignored phenomenon of regression towards the mean: when outcomes are only imperfectly transmitted from one round to the next, extreme values tend to be followed by values closer to the mean. In terms of the allocation of resources in a population, there is some random variation in the way resources flow. When institutions are fair, inequality in resource allocation will settle into an unbiased distribution. While there may continue to be some apparent inequality due to disorganized heavy-tailed effects, it will not be biased in a political sense.
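As a toy illustration of this claim, and only that, here is a small simulation of two groups that start out very unequal under group-blind dynamics. The convergence depends entirely on assumptions that are my own stipulations, especially that transmission of resources across rounds is imperfect (rho < 1) and that the random shocks are blind to group membership; this is not a model of any actual economy.

import numpy as np

rng = np.random.default_rng(1)
n_per_group, rho, generations = 10_000, 0.6, 30

# Two groups that start with very unequal average resources.
group_a = rng.normal(100.0, 10.0, n_per_group)
group_b = rng.normal(20.0, 10.0, n_per_group)

for t in range(generations):
    overall_mean = np.mean(np.concatenate([group_a, group_b]))
    # Group-blind dynamics: the same rule applies to everyone.
    group_a = rho * group_a + (1 - rho) * overall_mean + rng.normal(0, 5.0, n_per_group)
    group_b = rho * group_b + (1 - rho) * overall_mean + rng.normal(0, 5.0, n_per_group)
    if t % 10 == 0:
        print(f"gen {t:2d}: mean A = {group_a.mean():6.1f}, mean B = {group_b.mean():6.1f}")

# The gap between group means shrinks by a factor of roughly rho each round,
# even though individual-level inequality (the spread within each group) persists.
print(f"final gap between group means: {abs(group_a.mean() - group_b.mean()):.2f}")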

Fourth, there is the problem of political backlash. Whenever political institutions are weak enough to be modified towards what is purported to be a ‘substantive’ or outside-view neutrality, it will always be because some political coalition has attained enough power to swing the pendulum in their favor. The more explicit they are about doing this, the more it will mobilize the enemies of this coalition to try to swing the pendulum back the other way. The result is war by other means, the outcome of which will never be fair, because in war there are many who wind up dead or injured.

I am arguing for a centrist position on these matters, one that favors procedural neutrality in most institutions. This is not because I don’t care about substantive, “outside view” inequality. On the contrary, it’s because I believe that partisan bickering that explicitly undermines the inside neutrality of institutions undermines substantive equality. Partisan bickering over the scraps within narrow institutional frames is a distraction from, for example, the way the most wealthy avoid taxes while the middle class pays even more. There is a reason why political propaganda that induces partisan divisions is a weapon. Agreement about procedural neutrality is a core part of civic unity that allows for collective action against the very most abusively powerful.


Notes on fairness and nondiscrimination in machine learning

There has been a lot of work done lately on “fairness in machine learning” and related topics. It cannot be a coincidence that this work has paralleled a rise in political intolerance that is sensitized to issues of gender, race, citizenship, and so on. I more or less stand by my initial reaction to this line of work. But very recently I’ve done a deeper and more responsible dive into this literature and it’s proven to be insightful beyond the narrow problems which it purports to solve. These are some notes on the subject, ordered so as to get to the point.

The subject of whether and to what extent computer systems can enact morally objectionable bias goes back at least as far as Friedman and Nissenbaum’s 1996 article, in which they define “bias” as systematic unfairness. They mean this very generally, not specifically in a political sense (though inclusive of it). Twenty years later, Kleinberg et al. (2016) prove that there are multiple, competing notions of fairness in machine classification which generally cannot be satisfied all at once; they must be traded off against each other. In particular, a classifier that uses all available information to optimize accuracy–one that achieves what these authors call calibration–cannot also have equal false positive and false negative rates across population groups (read: race, sex), the property that Hardt et al. (2016) call “equalized odds”. This work was no doubt inspired by a now very famous ProPublica article asserting that a particular kind of commercial recidivism prediction software was “biased against blacks” because it had a higher false positive rate for black defendants than for white defendants. Because bail and parole decisions are informed by predicted recidivism, this led to cases where a non-recidivist was denied bail because they were black, which sounds unfair to a lot of people, including myself.
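The arithmetic behind this incompatibility is simple enough to show in a toy example (my own construction, not the actual COMPAS data): give each group a score that is perfectly calibrated, let the groups differ in their base rates, and the false positive rates come out unequal.

import numpy as np

def false_positive_rate(scores, outcomes, threshold=0.5):
    negatives = outcomes == 0
    return np.mean(scores[negatives] >= threshold)

def make_group(n_high, n_low, score_high=0.8, score_low=0.2, rng=np.random.default_rng(2)):
    # Build a calibrated group: among people scored s, a fraction s actually reoffend.
    scores = np.concatenate([np.full(n_high, score_high), np.full(n_low, score_low)])
    outcomes = (rng.random(scores.size) < scores).astype(int)
    return scores, outcomes

# Group A has more high-risk individuals than group B (different base rates).
scores_a, outcomes_a = make_group(n_high=600, n_low=400)
scores_b, outcomes_b = make_group(n_high=200, n_low=800)

print("base rate A:", outcomes_a.mean(), " base rate B:", outcomes_b.mean())
print("FPR A:", false_positive_rate(scores_a, outcomes_a))
print("FPR B:", false_positive_rate(scores_b, outcomes_b))
# Expected: FPR around 0.27 for group A and around 0.06 for group B -- the same
# calibrated score, thresholded the same way, yields unequal false positive rates
# whenever the groups' base rates differ.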

While I understand that there is a lot of high-quality and well-intentioned research on this subject, I haven’t found anybody who could tell me why the solution to this problem shouldn’t be to stop using predicted recidivism to set bail, as opposed to futzing around with a recidivism prediction algorithm that seems to have been doing its job (Dieterich et al., 2016). Recidivism rates are in fact correlated with race (Hartney and Vuong, 2009). This is probably because of centuries of systematic racism. If you are serious about remediating historical inequality, the least you could do is cut black people some slack on bail.

This gets to what for me is the most baffling aspect of this whole research agenda, one that I didn’t have the words for before reading Barocas and Selbst (2016). A point well made by them is that the interpretation of anti-discrimination law, which motivates a lot of this research, is fraught with tensions that complicate its application to data mining.

“Two competing principles have always undergirded anti-discrimination law: nondiscrimination and antisubordination. Nondiscrimination is the narrower of the two, holding that the responsibility of the law is to eliminate the unfairness individuals experience at the hands of decisionmakers’ choices due to membership in certain protected classes. Antisubordination theory, in contrast, holds that the goal of antidiscrimination law is, or at least should be, to eliminate status-based inequality due to membership in those classes, not as a matter of procedure, but substance.” (Barocas and Selbst, 2016)

More specifically, these two principles motivate different interpretations of the two pillars of anti-discrimination law, disparate treatment and disparate impact. I draw on Barocas and Selbst for my understanding of each:

A judgment of disparate treatment requires either formal disparate treatment (across protected groups) of similarly situated people, or an intent to discriminate. Since in a large data mining application protected group membership will be proxied by many other factors, it’s not clear that the ‘formal’ requirement makes much sense here. And since machine learning applications only very rarely have racist intent, that option seems challengeable as well. While there are interpretations of these criteria that are tougher on decision-makers (e.g., ones that count unconscious intent), these seem to be motivated by antisubordination rather than the weaker nondiscrimination principle.

A judgment of disparate impact is perhaps more straightforward, but it can be mitigated in cases of “business necessity”, which (to get to the point) is vague enough to plausibly include optimization in a technical sense. Once again, there is nothing to see here from a nondiscrimination standpoint, though an antisubordinationist would rather that these decision-makers be required to take correcting for historical inequality into account.

I infer from their writing that Barocas and Selbst believe that antisubordination is an important principle for antidiscrimination law. In any case, they maintain that making an effective case for applying antidiscrimination law to data mining requires a commitment to “substantive remediation”. This is insightful!

Just to put my cards on the table: as much as I may like the idea of substantive remediation in principle, I personally don’t think that every application of antidiscrimination law needs to be animated by it. For many institutions, narrow nondiscrimination seems adequate if not preferable. I’d prefer remediation to occur through other specific policies, such as more public investment in schools in low-income districts. Perhaps for this reason, I’m not crazy about “fairness in machine learning” as a general technical practice. It looks like an attempt to solve social problems with a technical fix, which, despite being quite technical myself, I don’t always see as a good idea. In most cases you could have a machine learning mechanism based on normal statistical principles (the learning step) and then, separately, a decision procedure that achieves your political ends.

I wish that this research community (and here I mean the qualitative research community surrounding it more than the technical community, which tends to define its terms carefully) would be more careful about the ways it talks about “bias”, because it often encourages a conflation between statistical or technical senses of bias and political senses. The latter carry so much political baggage that it can be intimidating to try to wade in and untangle the two senses. And it’s important to do this untangling, because while bad statistical bias can lead to political bias, it can, depending on the circumstances, lead to either “good” or “bad” political bias. But it’s important, for the sake of numeracy (mathematical literacy), to understand that even if a statistically bad process has a politically “good” outcome, that is still, statistically speaking, bad.

My sense is that there are interpretations of antidiscrimination law that make it illegal to base certain judgments on certain sensitive properties like race and sex. There are also theorems showing that if you don’t take those sensitive properties into account, you are going to discriminate against the corresponding groups by accident, because the sensitive variables are correlated with almost anything else you would use to judge people. As a general principle, while being ignorant may sometimes make things better when you are extremely lucky, in general it makes things worse! This should be a surprise to nobody.

References

Barocas, Solon, and Andrew D. Selbst. “Big data’s disparate impact.” (2016).

Dieterich, William, Christina Mendoza, and Tim Brennan. “COMPAS risk scales: Demonstrating accuracy equity and predictive parity.” Northpointe Inc (2016).

Friedman, Batya, and Helen Nissenbaum. “Bias in computer systems.” ACM Transactions on Information Systems (TOIS) 14.3 (1996): 330-347.

Hardt, Moritz, Eric Price, and Nati Srebro. “Equality of opportunity in supervised learning.” Advances in Neural Information Processing Systems. 2016.

Hartney, Christopher, and Linh Vuong. “Created equal: Racial and ethnic disparities in the US criminal justice system.” (2009).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

Personal data property rights as privacy solution. Re: Cofone, 2017

I’m working my way through Ignacio Cofone’s “The Dynamic Effect of Information Privacy Law” (2017) (link), which is an economic analysis of privacy. Without doing justice to the full scope of the article, it must be said that it is a thorough discussion of the previous information economics literature and a good case for property rights over personal data. In a nutshell, one can say that markets are good for efficient and socially desirable resource allocation, but they are only good at this when there are well-crafted property rights to the goods involved. Personal data, like intellectual property, is a tricky case because of the idiosyncrasies of data–it has zero-ish marginal cost, it seems to get more valuable when it’s aggregated, etc. But like intellectual property, we should expect under normal economic rationality assumptions that the more we protect the property rights of those who create personal data, the more they will be incentivized to create it.

I am very warm to this kind of argument because I feel there’s been a dearth of good information economics in my own education, though I have been looking for it! I do believe there are economic laws and that they are relevant for public policy, let alone business strategy.

I have concerns about Cofone’s argument specifically, which are these:

First, I have my doubts that seeing data as a good in any classical economic sense is going to work. Ontologically, data is just too weird for a lot of earlier modeling methods. I have been working on a different way of modeling information flow economics that tries to capture how much of what we’re concerned with are information services, not information goods.

My other concern is that Cofone’s argument gives users/data subjects credit for being rational agents, capable of addressing the risks of privacy and acting accordingly. Hoofnagle and Urban (2014) show that this is empirically not the case. In fact, if you take the average person who is not that concerned about their privacy on-line and start telling them facts about how their data is being used by third-parties, etc., they start to freak out and get a lot more worried about privacy.

This throws a wrench in the argument that stronger personal data property rights would lead to more personal data creation, therefore (I guess it’s implied) more economic growth. People seem willing to create personal data and give it away, despite actual adverse economic incentives, because cat videos are just so damn appealing. Or something. It may generally be the case that economic modeling is used by information businesses but not information policy people because average users are just so unable to act rationally; it really is a domain better suited to behavioral economics and usability research.

I’m still holding out, though. Just because big data subjects are not homo economicus doesn’t mean that an economic analysis of their activity is pointless. It just means we need a more sophisticated economic model, one that takes into account how there are many different classes of user that are differently informed. This kind of economic modeling, and empirically fitting it to data, is within our reach. We have the technology.

References

Cofone, Ignacio N. “The Dynamic Effect of Information Privacy Law.” Minn. JL Sci. & Tech. 18 (2017): 517.

Hoofnagle, Chris Jay, and Jennifer M. Urban. “Alan Westin’s privacy homo economicus.” (2014).

Why managerialism: it acknowledges political role of internal corporate policies

One difficulty with political theory in contemporary times is the confusion between government and corporate policy. This is due in no small part to the extent to which large corporations now mediate social life. Telecommunications, the Internet, mobile phones, and social media all depend on layers and layers of operating organizations. The search engine, which didn’t exist thirty years ago, is now arguably an essential cultural and political facility (Pasquale, 2011), which sharpens the concerns that have been raised about the politics of search (Introna and Nissenbaum, 2000; Bracha and Pasquale, 2007).

Corporate policies influence customers when those policies drive product design or are put into contractual agreements. They can also govern employees and shape corporate culture. Sometimes these two kinds of policies are not easily demarcated. For example, Uber, like most companies with a lot of user data, has an internal privacy policy about who can access which users’ information. The privacy features that Uber implicitly guarantees to its customers are part of its service. But its ability to provide this service is only as reliable as its company culture.

Classically, there are states, which may or may not be corrupt, and there are markets, which may or may not be competitive. With competitive markets, corporate policies are part of what make firms succeed or fail. One point of success is a company’s ability to attract and maintain customers. This should in principle drive companies to improve their policies.

An interesting point made recently by Robert Post is that in some cases, corporate policies can adopt positions that would be endorsed by some legal scholars even if the actual laws state otherwise. His particular example was a case enforcing the right to be forgotten in Spain against Google.

Since European law is statute-driven, the judgments of its courts are not as amenable to creative legal reasoning as they are in the United States. Post criticizes the EU’s judgment in this case for its rigid interpretation of the data protection directive. Post argues that a different legal perspective on privacy is better at balancing other social interests. But putting aside the particulars of the law, Post makes the point that Google’s internal policy matches his own legal and philosophical framework (which prefers dignitary privacy over data privacy) more than EU statutes do.

One could argue that we should not trust the market to make Google’s policies just. But we could also argue that Google’s market share, which is significant, depends so much on its reputation and its users’ trust that it is in fact under great pressure to adjudicate disputes with its users wisely. It is a company that must set its own policies, which do have political significance. Compared with the state, it has the benefits of more direct control over the way these policies get interpreted and enforced, faster feedback on whether the policies are successful, and a less chaotic legislative process for establishing policy in the first place.

Political liberals would dismiss this kind of corporate control as just one commercial service among many, or else wring their hands with concern over a company coming to have such power over the public sphere. But managerialists would see the search engine as one organization among others, comparable to other private entities that have been part of the public sphere, such as newspapers.

But a sound analysis of the politics of search engines need not depend on analogies with past technologies; analogy is characteristic of legal reasoning. Managerialism, which is perhaps more a descendant of business reasoning, would ask how, in fact, search engines make policy decisions and how this affects political outcomes. It does not assume prima facie that a powerful or important corporate policy is wrong. It does ask what the best corporate policy is, given a particular sector.

References

Bracha, Oren, and Frank Pasquale. “Federal Search Commission-Access, Fairness, and Accountability in the Law of Search.” Cornell L. Rev. 93 (2007): 1149.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The information society 16.3 (2000): 169-185.

Pasquale, Frank A. “Dominant search engines: an essential cultural & political facility.” (2011).

Why managerialism: it’s tolerant and meritocratic

In my last post, I argued that we should take managerialism seriously as a political philosophy. A key idea in managerialism (as I’m trying to define it) is that it acknowledges that sociotechnical organizations are relevant units of political power, and is concerned with the relationship between these organizations. These organizations can be functionally specific. They can have hierarchical, non-democratic control in limited, not totalitarian ways. They check and balance each other, probably. Managerialism tends to think that organizations can be managed well, and that good management matters, politically.

This is as opposed to liberalism, which is grounded in rights of the individual, which then becomes a foundation for democracy. It’s also opposed to communitarianism, which holds the political unit of interest to be a family unit or other small community. I’m positioning managerialism as a more cybernetic political idea, as well as one more adapted to present economic conditions.

It may sound odd to hear somebody argue in favor of managerialism. I’ll admit that I am doing so tentatively, to see what works and what doesn’t. Given that a significant percentage of American political thought now is considering such baroque alternatives to liberalism as feudalism and ethnic tribalism, perhaps because liberalism everywhere has been hijacked by plutocracy, it may not be crazy to discuss alternatives.

One reason why somebody might be attracted to managerialism is that it is (I’d argue) essentially tolerant and meritocratic. Sociotechnical organizations that are organized efficiently to perform their main function need not make a lot of demands of their members besides whatever protocols are necessary for the functioning of the whole. In many cases, this should lead to a basic indifference to race, gender, and class background, from the internal perspective of the organization. As there’s good research indicating that diversity leads to greater collective intelligence in organizations, there’s a good case for tolerant policies in managerial institutions. Merit, defined relative to the needs of the particular organization, would be the privileged personal characteristic here.

I’d like to distinguish managerialism from technocracy in the following sense, which may be a matter of my own terminological invention. Technocracy is the belief that experts should run the state. It offers an expansion of centralized power. Managerialism is, I want to argue, not compatible with centralized state control. Rather, it recognizes many different spheres of life that nevertheless need to be organized to be effective. These spheres or sectors will be individually managed, perhaps by competing organizations, but regulate each other more than they require central regulation.

The way these organizations can regulate each other is Exit, in Hirschman’s sense. While the ideas of Exit, Loyalty, and Voice are most commonly used to discuss how individuals can affect the organizations they are a part of, similar ideas can function at higher scales of analysis, as organizations interact with each other. Think about international trade agreements, and sanctions.

The main reason to support managerialism is not that it is particularly just or elegant. It’s that it is more or less the case that the political structures in place now are some assemblage of sociotechnical organizations interacting with each other. Those people who have power are those with power within one or more of these organizations. And to whatever extent there is a shared ideological commitment among people, it is likely because a sociotechnical organization has been turned to the effect of spreading that ideology. This is a somewhat abstract way of saying what lots of people say in a straightforward way all the time: that certain media institutions are used to propagate certain ideologies. This managerialist framing is just intended to abstract away from the particulars in order to develop a political theory.

Managerialism as political philosophy

Technologically mediated spaces and organizations are frequently described by their proponents as alternatives to the state. From David Clark’s maxim of Internet architecture, “We reject: kings, presidents and voting. We believe in: rough consensus and running code”, to cyberanarchist efforts to bypass the state via blockchain technology, to the claims that Google and Facebook, as they mediate between billions of users, are relevant non-state actors in international affairs, to Lessig’s (1999) ever prescient claim that “Code is Law”, there is undoubtedly something going on with technology’s relationship to the state which is worth paying attention to.

There is an intellectual temptation (one that I myself am prone to) to take seriously the possibility of a fully autonomous technological alternative to the state. Something like a constitution written in source code has an appeal: it would be clear, precise, and presumably based on something like a consensus of those who participate in its creation. It is also an idea that can be frightening (Give up all control to the machines?) or ridiculous. The example of The DAO, the Ethereum ‘decentralized autonomous organization’ that raised millions of dollars only to have them stolen through a technical exploit, demonstrates the value of traditional legal institutions, which protect the parties that enter contracts with processes that ensure fairness in their interpretation and enforcement.

It is more sociologically accurate, in any case, to consider software, hardware, and data collection not as autonomous actors but as parts of a sociotechnical system that maintains and modifies them. This is obvious to practitioners, who spend their lives negotiating the social systems that create technology. For those for whom it is not obvious, there are reams of literature on the social embeddedness of “algorithms” (Gillespie, 2014; Kitchin, 2017). These themes are recited again in recent critical work on Artificial Intelligence; there are those who wisely point out that a functioning artificially intelligent system depends on a lot of labor (those who created and cleaned the data, those who built the systems it is implemented on, those who monitor the system as it operates) (Kelkar, 2017). So rather than discussing the role of particular technologies as alternatives to the state, we should shift our focus to the great variety of sociotechnical organizations.

One thing that is apparent, when taking this view, is that states, as traditionally conceived, are themselves sociotechnical organizations. This is, again, an obvious point well illustrated in economic histories such as Beniger (1986). Communications infrastructure is necessary for the control and integration of society, let alone effective military logistics. The relationship between the state and the industrial actors developing this infrastructure–whether building roads, running a postal service, laying rail or telegraph wires, telephone wires, satellites, Internet protocols, and now social media–has always been interesting, and a story of great fortunes and shifts in power.

What is apparent after a serious look at this history is that political theory, especially liberal political theory as it developed in the 1700s and onward as a theory of the relationship between individuals bound by social contract emerging from nature to develop a just state, leaves out essential scientific facts about how society has ever been governed. Control of communications and control infrastructure has never been equally dispersed and has always been a source of power. Late modern rearticulations of liberal theory and reactions against it (Rawls and Nozick, both) leave out technical constraints on the possibility of governance and even on the constitution of the subject on which a theory of justice would have its ground.

Were political theory to begin from a more realistic foundation, it would need to acknowledge the existence of sociotechnical organizations as a political unit. There is a term for this view, “managerialism”, which, as far as I can tell, is used somewhat pejoratively, like “neoliberalism”. As an “-ism”, it’s implied that managerialism is an ideology. When we talk about ideologies, what we are doing is looking from an external position onto an interdependent set of beliefs in their social context and identifying, through genealogical method or logical analysis, how those beliefs are symptoms of underlying causes that are not precisely as represented within those beliefs themselves. For example, one critiques neoliberal ideology, which purports that markets are the best way to allocate resources and advocates for the expansion of market logic into more domains of social and political life, by pointing out that markets are great for reallocating resources to capitalists, who bankroll neoliberal ideologues, but that many people who are subject to neoliberal policies do not benefit from them. While this is a bit of a parody of both neoliberalism and the critiques of it, you’ll catch my meaning.

We might avoid the pitfalls of an ideological managerialism (I’m not sure what those would be, exactly, having not read the critiques) by taking from it, to begin with, only the urgency of describing social reality in terms of organization and management without assuming any particular normative stake. It will be argued that this is not a neutral stance, because to posit that there is organization, and that there is management, is to offend certain kinds of (mainly academic) thinkers. I get the sense that this offendedness is similar to the offense taken by certain critical scholars to the idea that there is such a thing as scientific knowledge, especially social scientific knowledge. Namely, it is offense taken at the idea that a patently obvious fact entails one’s own ignorance of otherwise very important expertise. This is encouraged by the institutional incentives of social science research. Social scientists are required to maintain an aura of expertise even when their particular sub-discipline excludes from its analysis the very systems of bureaucratic and technical management that its university depends on. University bureaucracies are, strangely, in the business of hiding their managerialist reality from their own faculty, as alternative avenues of research inquiry are of course compelling in their own right. When managerialism cannot be contested on epistemic grounds (because the bluff has been called), it can be rejected on aesthetic grounds: managerialism is not “interesting” to a discipline, perhaps because it does not engage with the personal and political motivations that constitute it.

What sets managerialism apart from other ideologies, however, is that when we examine its roots in social context, we do not discover a contradiction. Managerialism is not, as far as I can tell, successful as a popular ideology. Managerialism is attractive only to that rare segment of the population that works closely with bureaucratic management. It is here that the technical constraints of information flow and its potential uses, the limits of autonomy especially as it confronts the autonomies of others, the persistence of hierarchy despite the purported flattening of social relations, and so on, become unavoidable features of life. And though one discovers in these situations plenty of managerial incompetence, one also comes to terms with why that incompetence is a necessary feature of the organizations that maintain it.

Little of what I am saying here is new, of course. It is only new in relation to more popular or appealing forms of criticism of the relationship between technology, organizations, power, and ethics. So often the political theory implicit in these critiques is a form of naive egalitarianism that sees a differential in power as an ethical red flag. Since technology can give organizations a lot of power, this generates a lot of heat around technology ethics. Starting from the perspective of an ethicist, one sees an uphill battle against an increasingly inscrutable and unaccountable sociotechnical apparatus. What I am proposing is that we look at things a different way. If we start from general principles about technology and its role in organizations–the kinds of principles one would get from an analysis of microeconomic theory, artificial intelligence as a mathematical discipline, and so on–we can try to formulate the managerial constraints that truly confront society. These constraints are part of how subjects are constituted and should inform what we see as “ethical”. If we can broker between these hard constraints and the societal values at stake, we might come up with a principle of justice that, if unpopular, may at least be realistic. This would be a contribution, at the end of the day, to political theory, not as an ideology, but as a philosophical advance.

References

Beniger, James R. The Control Revolution: Technological and Economic Origins of the Information Society. Harvard University Press, 1986.

Bird, Sarah, et al. “Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI.” (2016).

Gillespie, Tarleton. “The relevance of algorithms.” Media technologies: Essays on communication, materiality, and society 167 (2014).

Kelkar, Shreeharsh. “How (Not) to Talk about AI.” Platypus, 12 Apr. 2017, blog.castac.org/2017/04/how-not-to-talk-about-ai/.

Kitchin, Rob. “Thinking critically about and researching algorithms.” Information, Communication & Society 20.1 (2017): 14-29.

Lessig, Lawrence. “Code is law.” The Industry Standard 18 (1999).

Robert Post on Data vs. Dignitary Privacy

I was able to see Robert Post present his article, “Data Privacy and Dignitary Privacy: Google Spain, the Right to Be Forgotten, and the Construction of the Public Sphere”, today. My other encounter with Post’s work was quite positive, and I was very happy to learn more about his thinking at this talk.

Post’s argument was based on the facts of the Google Spain SL v. Agencia Española de Protección de Datos (“Google Spain”) case in the EU, which set off a lot of discussion about the right to be forgotten.

I’m not trained as a lawyer, and will leave the legal analysis to the verbatim text. But there were some broader philosophical themes that resonate with topics I’ve discussed on this blog and in my other research. These I wanted to note.

If I follow Post’s argument correctly, it is something like this:

  • According to EU Directive 95/46/EC, there are two kinds of privacy. Data privacy rules over personal data, establishing control over and limitations on its use. The emphasis is on the data itself, which is reasoned about analogously to property. Dignitary privacy is about maintaining appropriate communications between people and restricting those communications that may degrade, humiliate, or mortify them.
  • EU data privacy rules specify the purposes for which data may be used, thereby implying that the use of this data must be governed by instrumental reason.
  • But there’s the public sphere, which must not be governed by instrumental reason, for Habermasian reasons. The public sphere is, by definition, the domain of communicative action, where actions must be taken with the ambiguous purpose of open dialogue. That is why free expression is constitutionally protected!
  • Data privacy, formulated as an expression of instrumental reason, is incompatible with the free expression of the public sphere.
  • The Google Spain case used data privacy rules to justify the right to be forgotten, and in this it developed an unconvincing and sloppy precedent.
  • Dignitary privacy is in tension with free expression, but not incompatible with it. This is because it is based not on instrumental reason, but rather on norms of communication (which are contextual).
  • Future right to be forgotten decisions should be made on the basis of dignitary privacy. This will result in more cogent decisions.

I found Post’s argument very appealing. I have a few notes.

First, I had never made the connection between what Hildebrandt (2013, 2014) calls “purpose binding” in EU data protection regulation and instrumental reason, but there it is. There is a sense in which these purpose clauses are about optimizing something that is externally and specifically defined before the privacy judgment is made (cf. Tschantz, Datta, and Wing, 2012, for a formalization).

This approach seems generally in line with the view of a government as a bureaucracy primarily involved in maintaining control over a territory or population. I don’t mean this in a bad way, but in a literal way of considering control as feedback into a system that steers it to some end. I’ve discussed the pervasive theme of ‘instrumentality run amok’ in questions of AI superintelligence here. It’s a Frankfurt School trope that appears to have made its way in a subtle way into Post’s argument.

The public sphere is not, in Habermasian theory, supposed to be dictated by instrumental reason, but rather by communicative rationality. This has implications for the technical design of networked publics that I’ve scratched the surface of in this paper. By pointing to the tension between instrumental/purpose/control based data protection and the free expression of the public sphere, I believe Post is getting at a deep point about how we can’t have the public sphere be too controlled lest we lose the democratic property of self-governance. It’s a serious argument that probably should be addressed by those who would like to strengthen rights to be forgotten. A similar argument might be made for other contexts whose purposes seem to transcend circumscription, such as science.

Post’s point is not, I believe, to weaken these rights to be forgotten, but rather to put the arguments for them on firmer footing: dignitary privacy, or the norms of communication and the awareness of the costs of violating them. Indeed, the facts behind right to be forgotten cases I’ve heard of (there aren’t many) all seem to fall under these kinds of concerns (humiliation, etc.).

What’s very interesting to me is that the idea of dignitary privacy as consisting of appropriate communication according to contextually specific norms feels very close to Helen Nissenbaum’s theory of Contextual Integrity (2009), with which I’ve become very familiar in the past year through my work with Prof. Nissenbaum. Contextual integrity posits that privacy is about adherence to norms of appropriate information flow. Is there a difference between information flow and communication? Isn’t Shannon’s information theory a “mathematical theory of communication”?

The question of whether and under what conditions information flow is communication and/or data is quite deep, actually. More on that later.

For now though it must be noted that there’s a tension, perhaps a dialectical one, between purposes and norms. For Habermas, the public sphere needs to be a space of communicative action, as opposed to instrumental reason. This is because communicative action is how norms are created: through the agreement of people who bracket their individual interests to discuss collective reasons.

Nissenbaum also has a theory of norm formation, but it does not depend so tightly on the rejection of instrumental reason. In fact, it accepts the interests of stakeholders as among several factors that go into the determination of norms. Other factors include societal values, contextual purposes, and the differentiated roles associated with the context. Because contexts, for Nissenbaum, are defined in part by their purposes, this has led Hildebrandt (2013) to make direct comparisons between purpose binding and Contextual Integrity. They are similar, she concludes, but not the same.
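For concreteness, here is a minimal sketch of how a contextual-integrity-style norm and flow might be represented and checked. The class names, fields, and the example norm are my own invention; Nissenbaum’s theory is a normative framework, not a program, but its parameters (sender, recipient, data subject, information type, and transmission principle, all relative to a context) do suggest a data structure.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    context: str                   # e.g. "healthcare", "education"
    sender: str                    # role of the party transmitting the information
    recipient: str                 # role of the party receiving it
    subject: str                   # role of the person the information is about
    attribute: str                 # type of information
    transmission_principle: str    # e.g. "under confidentiality", "with consent"

@dataclass(frozen=True)
class Norm:
    # An entrenched informational norm: a pattern of flows treated as appropriate.
    context: str
    sender: str
    recipient: str
    subject: str
    attribute: str
    transmission_principle: str

    def permits(self, flow: Flow) -> bool:
        # "*" is a wildcard; otherwise each parameter of the flow must match the norm.
        return all(getattr(self, field) in ("*", getattr(flow, field))
                   for field in ("context", "sender", "recipient", "subject",
                                 "attribute", "transmission_principle"))

norms = [Norm("healthcare", "patient", "physician", "patient",
              "medical history", "under confidentiality")]

flow = Flow("healthcare", "patient", "advertiser", "patient",
            "medical history", "for profit")

appropriate = any(norm.permits(flow) for norm in norms)
print("contextual integrity respected:", appropriate)   # False: no norm covers this flow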

It would be easy to say that the public sphere is a context in Nissenbaum’s sense, with a purpose, which is the formation of public opinion (which seems to be Post’s position). Properly speaking, social purposes may be broad or narrow, and specially defined social purposes may be self-referential (why not?), and indeed these self-referential social purposes may be the core of society’s “self-consciousness”. Why shouldn’t there be laws to ensure the freedom of expression within a certain context for the purpose of cultivating the kinds of public opinions that would legitimize laws and cause them to adapt democratically? We could possibly make these frameworks more precise if we could make them a little more formal and could lose some of the baggage; that would be useful theory building in line with Nissenbaum and Post’s broader agendas.

A test of this perhaps more nuanced but still teleological framing (indeed instrumental, though maybe more properly pragmatic, à la Dewey, in that it can blend several different metaethical categories) is to see whether one can motivate a right to be forgotten in the public sphere by appealing to the need for communicative action, to the especially appropriate communication norms that surround it, and to dignitary privacy.

This doesn’t seem like it should be hard to do at all.

References

Hildebrandt, Mireille. “Slaves to big data. Or are we?.” (2013).

Hildebrandt, Mireille. “Location Data, Purpose Binding and Contextual Integrity: What’s the Message?.” Protection of Information and the Right to Privacy-A New Equilibrium?. Springer International Publishing, 2014. 31-62.

Nissenbaum, Helen. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press, 2009.

Post, Robert, Data Privacy and Dignitary Privacy: Google Spain, the Right to Be Forgotten, and the Construction of the Public Sphere (April 15, 2017). Duke Law Journal, Forthcoming; Yale Law School, Public Law Research Paper No. 598. Available at SSRN: https://ssrn.com/abstract=2953468 or http://dx.doi.org/10.2139/ssrn.2953468

Tschantz, Michael Carl, Anupam Datta, and Jeannette M. Wing. “Formalizing and enforcing purpose restrictions in privacy policies.” Security and Privacy (SP), 2012 IEEE Symposium on. IEEE, 2012.

A short introduction to existentialism

I’ve been hinting that a different moral philosophical orientation towards technical design, one inspired by existentialism, would open up new research problems and technical possibilities.

I am trying to distinguish this philosophical approach from consequentialist approaches that aim for some purportedly beneficial change in objective circumstances and from deontological approaches that codify the rights and duties of people towards each other. Instead of these, I’m interested in a philosophy that prioritizes individual meaningful subjective experiences. While it is possible that this reduces to a form of consequentialism, because of the shift of focus from objective consequences to individual situations in the phenomenological sense, I will bracket that issue for now and return to it when the specifics of this alternative approach have been fleshed out.

I have yet to define existentialism and indeed it’s not something that’s easy to pin down. Others have done it better than I will ever do; I recommend for example the Stanford Encyclopedia of Philosophy article on the subject. But here is what I am getting at by use of the term, in a nutshell:

In the mid-19th century, there was (according to Badiou) a dearth of good philosophy due to the new prestige of positivism, on the one hand, and the high quality of poetry, on the other. After the death of Hegel, who claimed to have solved all philosophical problems through his phenomenology of Spirit and its corollary, the science of Logic, the arts and sciences became independent of each other. And as happens during such periods, the people (of Europe, we’re talking about now) became disillusioned. The sciences undermined the Christian metanarratives that had previously given life its meaning through the promise of a heavenly afterlife to those who lived according to the moral order. There was what has been called by subsequent scholars a “nihilism crisis”.

Friedrich Nietzsche began writing and shaking things up by proposing a new, radical form of individualism that placed self-enhancement over social harmony. An important line of his argumentation showed that the moral assumptions of conventional philosophy in his day contained contradictions and false promises that would lead the believer to either total disorientation or life-negating despair. What was needed was an alternative, and Nietzsche began working on one. It took the radical step of grounding morality not in the abolition of suffering (which he believed was a necessary part of life) but in life itself. In his conception, what was most characteristic of life was the will to power, which has been characterized (by Bernard Reginster, I believe) as a second-order desire to overcome resistance in the pursuit of other, first-order desires. In other words, Nietzsche’s morality is based on the principle that the greatest good in life is to overcome adversity.

Nietzsche is considered one of the fathers of existentialist thought (though he is also considered many other things, as he is a writer known for his inconsistency). Another of these foundational thinkers is Søren Kierkegaard. Now that I look him up, I see that his life falls within what Badiou characterizes as the “age of poets” and/or the dark age of 19th-century philosophy, and I wonder if Badiou would consider him an exception. A difficult thing about Kierkegaard, in terms of his relevance to today’s secular academic debates, is that he was explicitly and emphatically working within a Christian framework. Without going too far into it, it’s worth noting a couple of things about his work. In The Sickness Unto Death (1849), Kierkegaard also deals with the subject of despair and its relationship to one’s capabilities. For Kierkegaard, a person is caught between their finite (which means “limited” in this context) existence, with all of its necessary limitations, and their desire to transcend these limitations and attain the impossible, the infinite. In his terminology, he discusses the finite self and the infinite self, because his theology allows for the idea that there is an infinite self, which is God, and that the important philosophical crisis is about establishing one’s relationship to God despite the limitations of one’s situation. Whereas Nietzsche proposes a project of individual self-enhancement to approach what is impossible, Kierkegaard’s solution is a Christian one: to accept Jesus and God’s love as the bridge between infinite potential and one’s finite existence. This is not a universally persuasive solution, though I feel it sets up the problem rather well.

The next great existentialist thinker, and indeed the one who promoted the term “existentialism” as a philosophical brand, is Jean-Paul Sartre. However, I find Sartre uninspiring and will ignore his work for now.

On the other hand, Simone de Beauvoir, who was closely associated with Sartre, wrote one of the best books on ethics and the human condition I’ve ever read, the highly readable The Ethics of Ambiguity (1949), which the Marxists have kindly put on-line for your reading pleasure. This work lays out the ethical agenda of existentialism in phenomenological terms that resonate well with more contemporary theory. The subject finds itself in a situation (cf. theories of situated learning common now in HCI), in a place and time and a particular body with certain capacities. What is within the boundaries of their conscious awareness and capacity for action is their existence, and they are aware that beyond the boundaries of their awareness is Being, which is everything else. And what the subject strives for is to expand their existence into Being, subsuming it. One can see how this synthesizes the positions of Nietzsche and Kierkegaard. Where de Beauvoir goes farther is in demonstrating how one can start from this characterization of the human condition and derive from it a substantive ethics about how subjects should treat each other. It is true that the subject can never achieve the impossible, the infinite…alone. However, by investing themselves in their “projects”, subjects can extend themselves. And when these projects involve the empowerment of others, this allows a finite subject to extend themselves through a larger and less egoistic system of life.

De Beauvoirian ethics are really nice because they are only gently prescriptive, are grounded very closely in the individual’s subjective experience of their situation, and have social justice implications that are appealing to many contemporary liberal intellectuals without grounding those justice claims in resentment or zero-sum claims for reparation or redistribution. Rather, their orientation is the positive-sum, win-win relationship between the one who empowers another and the one being empowered. This is the relationship, not of master and slave, but of master and apprentice.

When I write about existentialism in design, I am talking about using an ethical framework similar to de Beauvoir’s totally underrated existentialist ethics and using them as principles for technical design.

References

Brown, John Seely, Allan Collins, and Paul Duguid. “Situated cognition and the culture of learning.” Educational researcher 18.1 (1989): 32-42.

De Beauvoir, Simone. The ethics of ambiguity, tr. Citadel Press, 1948.

Lave, Jean, and Etienne Wenger. Situated learning: Legitimate peripheral participation. Cambridge university press, 1991.

Subjectivity in design

One of the reasons why French intellectuals have developed their own strange way of talking is that they have implicitly embraced a post-Heideggerian phenomenological stance which deals seriously with the categories of experience of the individual subject. Americans don’t take this sort of thing so seriously because our institutions have been more post-positivist and now, increasingly, computationalist. If post-positivism makes the subject of science the powerful bureaucratic institution able to leverage statistically sound and methodologically responsible survey methodology, computationalism makes the subject of science the data analyst operating a cloud computing platform with data sourced from wherever. These movements are, probably, increasingly alienating to “regular people”, including humanists, who are attracted to phenomenology precisely because they have all the tools for it already.

To the extent that humanists are best informed about what it really means to live in the world, their position must be respected. It is really out of deference to the humble (or, sometimes, splendidly arrogant) representatives of the human subject as such that I have written about existentialism in design, which is really an attempt to ground technical design in what is philosophically “known” about the human condition.

This approach differs from “human centered design” importantly because human centered design wisely considers design to be an empirically rigorous task that demands sensitivity to the particular needs of situated users. This is wise and perfectly fine except for one problem: it doesn’t scale. And as we all know, the great and animal impulse of technological progress, especially today, is to develop the one technology that revolutionizes everything for everyone, becoming new essential infrastructure that reveals a new era of mankind. Human centered designers have everything right about design except for the maniacal ambition of it, without which design will never achieve technology’s paramount calling. So we will put it to one side and take a different approach.

The problem is that computationalist infrastructure projects, and by this I’m referring to the Googles, the Facebooks, the Amazons, the Tencents, the Alibabas, etc., are essentially about designing efficient machines, and so they ultimately become about objective resource allocation in one sense or another. The needs of the individual subject are not as relevant to the designers of these machines as are the behavioral responses of their users to their user interfaces. What will result in more clicks, more “conversions”? Asking users what they really want, at a scale that would affect actual design, is secondary and frivolous when A/B testing can optimize practical outcomes as efficiently as it does.

I do not mean to cast aspersions at these Big Tech companies by describing their operations so baldly. I do not share the critical perspective of many of my colleagues who write as if they have discovered, for the first time, that corporate marketing is hypocritical and that businesses are mercenary. This is just the way things are; what’s more, the engineering accomplishments involved are absolutely impressive and worth celebrating, as is the business management.

What I would like to do is propose that a technology of similar scale can be developed according to general principles that nevertheless make more adept use of what is known about the human condition. Rather than being devoted to cheap proxies of human satisfaction that address the user’s objective condition, I’m proposing a service that delivers something tailored to the subjectivity of the user.