Digifesto

Tag: fairness

Notes on Clark Kerr’s “The ‘City of Intellect’ in a Century for Foxes?”, in The Uses of the University 5th Edition

I am in my seventh and absolutely, definitely last year of a doctoral program, and so I have many questions about the future of higher education and whether or not I will be a part of it. For insight, I have procured an e-book copy of Clark Kerr’s The Uses of the University (5th Edition, 2001). Clark Kerr was President of the University of California system and became famous, among other things, for his candid comments on university administration, which included such gems as

“I find that the three major administrative problems on a campus are sex for the students, athletics for the alumni and parking for the faculty.”

…and…

“One of the most distressing tasks of a university president is to pretend that the protest and outrage of each new generation of undergraduates is really fresh and meaningful. In fact, it is one of the most predictable controversies that we know. The participants go through a ritual of hackneyed complaints, almost as ancient as academe, while believing that what is said is radical and new.”

The Uses of the University is a collection of lectures on the topic of the university, most of which were given in the second half of the 20th century. The most recent edition contains a lecture given in the year 2000, after Kerr had retired from administration, anticipating the future of the university in the 21st century. The title of the lecture is “The ‘City of Intellect’ in a Century for Foxes?”, and it is encouragingly candid and prescient.

To my surprise, Kerr approaches the lecture as a forecasting exercise. Intriguingly, Kerr employs the hedgehog/fox metaphor from Isaiah Berlin in a lecture about forecasting five years before the publication of Tetlock’s 2005 book Expert Political Judgment (review link), which used the fox/hedgehog distinction to cluster properties that were correlated with political experts’ predictive power. Kerr’s lecture is structured partly as the description of a series of future scenarios, reminiscent of scenario planning as a forecasting method. I didn’t expect any of this, and it perhaps goes to show how pervasive scenario thinking was as a 20th-century rhetorical technique.

Kerr makes a number of warnings about the university in the 21st century, especially as measured against the glory of the university in the 20th century. He makes a historical case for this: universities in the 20th century thrived on new universal access to students, federal investment in universities as the sites of basic research, and general economic prosperity. He doesn’t see these guaranteed in the 21st century, though he also makes the point that in official situations, the only thing a university president should do is discuss the past with pride and the future with apprehension. He has a rather detailed analysis of the incentives guiding this rhetorical strategy as part of the lecture, which makes you wonder how much salt to take the rest of the lecture with.

What are the warnings Kerr makes? Some are a continuation of the problems universities experienced in the 20th century. Military and industrial research funding changed the role of universities, away from liberal arts education and toward that of a research shop. This was not a neutral process. Undergraduate education suffered, and in 1963 Kerr predicted that this slackening of the quality of undergraduate education would lead to student protests. He was half right; students instead turned their attention externally to politics. Under these conditions, there grew to be a great tension between the “internal justice” of a university that attempted to have equality among its faculty and the permeation of external forces that made more of the professoriate face outward. A period of attempted reforms through “participatory democracy” was “a flash in the pan”, resulting mainly in “the creation of courses celebrating ethnic, racial, and gender diversities.” “This experience with academic reform illustrated how radical some professors can be when they look at the external world and how conservative when they look inwardly at themselves–a split personality”.

This turn to industrial and military funding, and the shift of universities away from training in morality (theology), the traditional professions (medicine, law), self-chosen intellectual interest for its own sake, and entrance into elite society, towards training for the labor force (including business administration and computer science), is now quite old–at least 50 years. Among other things, Kerr predicts, this means that we will be feeling the effects of the hollowing out of the education system that happened as higher education deprioritized teaching in favor of research. The baby boomers who went through this era of vocational university education will become, in Kerr’s analysis, an enormous class of retirees by 2030, putting new strain on the economy at large. Meanwhile, without naming computers and the Internet, Kerr acknowledged that the “electronic revolution” is the first major change to affect universities in three hundred years, and could radically alter their role in society. He speaks highly of Peter Drucker, who in 1997 was already calling the university “a failure” that would be made obsolete by long-distance learning.

An intriguing comment on aging baby boomers, which Kerr discusses under the heading “The Methuselah Scenario”, is that the political contest between retirees and new workers will break down partly along racial lines: “Nasty warfare may take place between the old and the young, parents and children, retired Anglos and labor force minorities.” Almost twenty years later, this line makes me wonder how much current racial tensions are connected to age and aging. Have we seen the baby boomer retirees rise as a political class to vigorously defend the welfare state from plutocratic sabotage? Will we?

Kerr discusses the scenario of the ‘disintegration of the integrated university’. The old model of medicine, agriculture, and law integrated into one system is coming apart as external forces become controlling factors within the university. Kerr sees this in part as a source of ethical crises for universities.

“Integration into the external world inevitably leads to disintegration of the university internally. What are perceived by some as the injustices in the external labor market penetrate the system of economic rewards on campus, replacing policies of internal justice. Commitments to external interests lead to internal conflicts over the impartiality of the search for truth. Ideologies conflict. Friendships and loyalties flow increasingly outward. Spouses, who once held the academic community together as a social unit, now have their own jobs. “Alma Mater Dear” to whom we “sing a joyful chorus” becomes an almost laughable idea.”

A factor in this disintegration is globalization, which Kerr identifies with the mobility of those professors who are most able to get external funding. These professors have increased bargaining power and can use “the banner of departmental autonomy” to fight among themselves for industrial contracts. Without oversight mechanisms, “the university is helpless in the face of the combined onslaught of aggressive industry and entrepreneurial faculty members”.

Perhaps most fascinating for me, because it resonates with some of my more esoteric passions, is Kerr’s section on “The fractionalization of the academic guild”. Subject matter interest breaks knowledge into tiny disconnected topics–“Once upon a time, the entire academic enterprise originated in and remained connected to philosophy.” The tension between “internal justice” and the “injustices of the external labor market” creates a conflict over monetary rewards. Poignantly, “fractionalization also increases over differing convictions about social justice, over whether it should be defined as equality of opportunity or equality of results, the latter often taking the form of equality of representation. This may turn out to be the penultimate ideological battle on campus.”

And then:

“The ultimate conflict may occur over models of the university itself, whether to support the traditional or the “postmodern” model. The traditional model is based on the enlightenment of the eighteenth century–rationality, scientific processes of thought, the search for truth, objectivity, “knowledge for its own sake and for its practical applications.” And the traditional university, to quote the Berkeley philosopher John Searle, “attempts to be apolitical or at least politically neutral.” The university of postmodernism thinks that all discourse is political anyway, and it seeks to use the university for beneficial rather than repressive political ends… The postmodernists are attempting to challenge certain assumptions about the nature of truth, objectivity, rationality, reality, and intellectual quality.

“… Any further politicization of the university will, of course, alienate much of the public at large. While most acknowledge that the traditional university was partially politicized already, postmodernism will further raise questions of whether the critical function of the university is based on political orientation rather than on nonpolitical scientific analysis.”

I could go on endlessly about this topic; I’ll try to be brief. First, as per Lyotard’s early analysis of the term, postmodernism is as much a result of the permeation of the university by industrial interests as anything else. Second, we are seeing right now, in Congress and in the news, the eroded trust that a large portion of the public has in university “expertise”, as they assume (having perhaps internalized a reductivist version of the postmodern message, despite or maybe because they were being taught by teaching assistants instead of professors) that the professoriate is politically biased. And now the students are in revolt over Free Speech again as a result.

Kerr entertains for a paragraph the possibility of a Hobbesian doomsday free-for-all over the university before considering more mundane possibilities such as a continuation of the status quo. Adapting to new telecommunications (including “virtual universities”), new amazing discoveries in biological sciences, and higher education as a step in mid-career advancement are all in Kerr’s more pragmatic view of the future. The permeability of the university can bring good as well as bad as it is influenced by traffic back and forth across its borders. “The drawbridge is now down. Who and what shall cross over it?”

Kerr counts five major wildcards determining the future of the university. The first is overall economic productivity; the second is fluctuations in the returns to higher education. The third is the United States’ role in the global economy “as other nations or unions of nations (for example, the EU) may catch up with and even surpass it. The quality of education and training for all citizens will be [central] to this contest. The American university may no longer be supreme.” The fourth is student unrest turning universities into the “independent critic”. And the fifth is the battles within the professoriate, “over academic merit versus social justice in treatment of students, over internal justice in the professional reward system versus the pressures of external markets, over the better model for the university–modern or post-modern.”

He concludes with three wishes for the open-minded, cunning, savvy administrator of the future, the “fox”:

  1. Careful study of new information technologies and their role.
  2. “An open, in-depth debate…between the proponents of the traditional and the postmodern university instead of the sniper shots of guerilla warfare…”
  3. An “in-depth discussion…about the ethical systems of the future university”. “Now the ethical problems are found more in the flow of contacts between the academic and the external worlds. There have never been so many ethical problems swirling about as today.”

Pondering “use privacy”

I’ve been working carefully with Datta et al.’s “Use Privacy” work (link), which makes a clear case for how a programmatic, data-driven model may be statically analyzed for its use of a proxy of a protected variable, and repaired.

Their system has a number of interesting characteristics, among which are:

  1. The use of a normative oracle for determining which proxy uses are prohibited.
  2. A proof that there is no coherent definition of proxy use that has all of a set of very reasonable properties defined over function semantics.

Given (2), they continue with a compelling study of how a syntactic definition of proxy use, one based on the explicit contents of a function, can support a system of detecting and repairing proxies.
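To make the syntactic/semantic distinction concrete, here is a minimal sketch in the spirit of that approach, not the Datta et al. system itself; the data, coefficients, and the zip-code proxy are entirely made up for illustration. A scoring program uses a zip-code term that is associated with a protected group attribute, and the “repair” replaces that subcomputation with its population-average value, leaving the rest of the program’s syntax intact.

```python
import random

random.seed(0)

# Hypothetical toy data: zip_code is strongly associated with the
# protected attribute `group`, making it a proxy for group membership.
population = []
for _ in range(1000):
    group = random.choice(["a", "b"])
    zip_code = (94000 if group == "a" else 10000) + random.randrange(100)
    income = random.gauss(60 if group == "a" else 55, 10)
    population.append({"group": group, "zip_code": zip_code, "income": income})

def score(person):
    """Original program: the neighborhood term is a syntactic proxy use."""
    neighborhood_term = 1.0 if person["zip_code"] >= 90000 else 0.0
    return 0.05 * person["income"] + 2.0 * neighborhood_term

# Repair in the spirit of the syntactic approach: replace the proxy
# subcomputation with its population-average value.
avg_neighborhood = sum(
    1.0 if p["zip_code"] >= 90000 else 0.0 for p in population
) / len(population)

def repaired_score(person):
    return 0.05 * person["income"] + 2.0 * avg_neighborhood

def mean_gap(f):
    """Difference in mean score between the two groups."""
    means = []
    for g in ("a", "b"):
        scores = [f(p) for p in population if p["group"] == g]
        means.append(sum(scores) / len(scores))
    return means[0] - means[1]

gap = mean_gap(score)
repaired_gap = mean_gap(repaired_score)
print(gap, repaired_gap)  # the repaired gap shrinks to the income difference alone
```

Note that the repaired program no longer uses the proxy syntactically, but it makes no semantic guarantee: income itself remains correlated with group here, which is exactly the worry about whether syntactic restrictions address normative concerns.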

My question is to what extent the sources of normative restriction on proxies (those characterized by the oracle in (1)) are likely to favor syntactic proxy use restrictions, as opposed to semantic ones. Since ethicists and lawyers, who are the purported sources of these normative restrictions, are likely to consider any technical system a black box for the purpose of their evaluation, they will naturally be concerned with program semantics. It may be comforting for those responsible for a technical program to be able to, in a sense, avoid liability by assuring others that their programs do not use a restricted proxy. But, truly, so what? Since these syntactic considerations do not make any semantic guarantees, will they really plausibly address normative concerns?

A striking result from their analysis, which has perhaps broader implications, is the incoherence of a semantic notion of proxy use. Perhaps sadly but also substantively, this result shows that a certain plausible normative property is impossible for a system to satisfy in general. Only restricted conditions make such a thing possible. This seems to be part of a pattern in rigorous computer science evaluations of ethical problems; see also Kleinberg et al. (2016) on how it is impossible to meet several plausible definitions of “fairness” in risk-assessment scores across social groups except under certain conditions.
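One facet of the Kleinberg et al. result can be seen in a toy numeric example (the numbers below are invented, chosen only so that scores are calibrated within each group by construction): when two groups have different base rates, calibrated scores cannot also give the actual positives in each group the same average score, so calibration and “balance for the positive class” conflict.

```python
# Each bucket is (score, n_people, n_actual_positives); setting
# n_actual_positives = score * n_people makes the scores calibrated
# within each group by construction.
group_a = [(0.8, 50, 40), (0.2, 50, 10)]   # base rate 0.50
group_b = [(0.8, 10, 8), (0.2, 90, 18)]    # base rate 0.26

def base_rate(buckets):
    return sum(pos for _, _, pos in buckets) / sum(n for _, n, _ in buckets)

def mean_score_among_positives(buckets):
    # "Balance for the positive class": average score of actual positives.
    total_pos = sum(pos for _, _, pos in buckets)
    return sum(s * pos for s, _, pos in buckets) / total_pos

# Both groups are calibrated bucket by bucket...
for buckets in (group_a, group_b):
    for s, n, pos in buckets:
        assert abs(pos - s * n) < 1e-9

# ...but actual positives in the two groups get different average scores,
# so "balance for the positive class" fails despite calibration.
print(mean_score_among_positives(group_a))  # ≈ 0.68
print(mean_score_among_positives(group_b))  # ≈ 0.38
```

This is only an illustration of one tradeoff, not the full theorem; Kleinberg et al. show the conflict holds in general unless base rates are equal or prediction is perfect.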

The conclusion for me is that what this nobly motivated computer science work reveals is that what people are actually interested in normatively is not the functioning of any particular computational system. They are rather interested in social conditions more broadly, which are rarely aligned with our normative ideals. Computational systems, by making realities harshly concrete, are disappointing, but it’s a mistake to make that a disappointment with the computing systems themselves. Rather, there are mathematical facts that are disappointing regardless of what sorts of systems mediate our social world.

This is not merely a philosophical consideration or sociological observation. Since the interpretation of laws is part of the process of informing normative expectations (as in a normative oracle), it is an interesting and perhaps open question how lawyers and judges, in their task of legal interpretation, will make use of the mathematical conclusions about normative tradeoffs being offered up by computer scientists.

References

Datta, Anupam, et al. “Use Privacy in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs.” arXiv preprint arXiv:1705.07807 (2017).

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent trade-offs in the fair determination of risk scores.” arXiv preprint arXiv:1609.05807 (2016).

On achieving social equality

When evaluating a system, we have a choice of evaluating its internal functions–the inside view–or evaluating its effects situated in a larger context–the outside view.

Decision procedures for sorting people (whether they are embodied by people or performed in concert with mechanical devices–I don’t think this distinction matters here) are just such a system. If I understand correctly, the question of which principles animate antidiscrimination law hinges on this difference between the inside and the outside view.

We can look at a decision-making process and evaluate whether, as a procedure, it achieves its goals of e.g. assigning credit scores without bias against certain groups. Even including the processes of gathering evidence or data in such a system, it can in principle be bounded and evaluated by its ability to perform its goals. We do seem to care about the difference between procedural discrimination and procedural nondiscrimination. For example, an overtly racist policy that ignores true talent and opportunity seems worse than a bureaucratic system that is indifferent to external inequality between groups, inequality that then gets reflected in decisions made according to other factors that are merely correlated with race.

The latter case has been criticized from the outside view. The criticism is captured by the phrase that “algorithms can reproduce existing biases”. The supposedly neutral algorithm (which can, again, be either human or machine) is not neutral in its impact, because its considerations of e.g. business interest are indifferent to the conditions outside it. The business is attracted to wealth and opportunity, which are held disproportionately by some part of the population, so the business is attracted to that population.

There is great wisdom in recognizing that institutions that are neutral in the inside view will often reproduce bias in the outside view. But it is incorrect to therefore conflate neutrality in the inside view with bias in the inside view, even though their effects may under some circumstances be the same. When I say it is “incorrect”, I mean that they are in fact different because, for example, if the external conditions of a procedurally neutral institution change, then it will reflect those new conditions. A procedurally biased institution will not reflect those new conditions in the same way.

Empirically it is very hard to tell when an institution is being procedurally neutral, and indeed this is the crux of an enormous amount of political tension today. The first line of defense of an institution accused of bias is to claim that its procedural neutrality is merely reflecting environmental conditions outside of its control. This is unconvincing for many politically active people. It seems to me that it is now much more common for institutions to avoid this problem by explicitly declaring their bias. Rather than try to accomplish the seemingly impossible task of defending their rigorous neutrality, it’s easier to declare where one stands on the issue of resource allocation globally and adjust one’s procedures accordingly.

I don’t think this is a good thing.

One consequence of evaluating all institutions based on their global, “systemic” impact as opposed to their procedural neutrality is that it hollows out the political center. The evidence is in: politics has become more and more polarized. This is inevitable if politics becomes so explicitly about maintaining or reallocating resources, as opposed to building neutrally legitimate institutions. When one party in Congress considers a tax bill that seems designed mainly to enrich its own constituencies at the expense of the other’s, things have gotten out of hand. The idea of a unified conception of ‘good government’ has been all but abandoned.

An alternative is a commitment to procedural neutrality in the inside view of institutions, or at least some institutions. The fact that there are many different institutions that may have different policies is indeed quite relevant here. For while it is commonplace to say that a neutral institution will “reproduce existing biases”, “reproduction” is not a particularly helpful word here. Neither is “bias”. What we can say more precisely is that the operations of a procedurally neutral institution will not change the distribution of resources, even when that distribution is unequal.

But if we do not hold all institutions accountable for correcting the inequality of society, isn’t that the same thing as approving of the status quo, which is so unequal? A thousand times no.

First, there’s the problem that many institutions are not, currently, procedurally neutral. Procedural neutrality is a higher standard than what many institutions are currently held to. Consider what is widely known about human beings and their implicit biases. One good argument for transferring decision-making authority to machine learning algorithms, even standard ones not augmented for ‘fairness’, is that they will not have the same implicit, inside, biases as the humans that currently make these decisions.

Second, there’s the fact that responsibility for correcting social inequality can be taken on by some institutions that are dedicated to this task while others are procedurally neutral. For example, one can consistently believe in the importance of a progressive social safety net combined with procedurally neutral credit reporting. Society is complex and perhaps rightly has many different functioning parts; not all the parts have to reflect socially progressive values for the arc of history to bend towards justice.

Third, there is reason to believe that even if all institutions were procedurally neutral, there would eventually be social equality. This has to do with the often ignored phenomenon of regression towards the mean: when outcomes are determined only partly by persistent factors and partly by chance, extreme outcomes tend to be followed by outcomes closer to the mean. In terms of the allocation of resources in a population, there is some random variation in the way resources flow. When institutions are fair, inequality in resource allocation will settle into an unbiased distribution. While there may continue to be some apparent inequality due to disorganized heavy-tail effects, these will not be biased in a political sense.
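This argument can be sketched in a minimal simulation, under one explicit assumption: each period’s holdings are only partially persistent (rho < 1), with the same unbiased dynamics for both groups. All parameters here are illustrative.

```python
import random

random.seed(42)

# Illustrative parameters: rho < 1 means holdings are only partially
# persistent between periods; mu and sigma are the common long-run mean
# and noise level. Both groups face identical, unbiased dynamics.
rho, mu, sigma, periods = 0.9, 50.0, 5.0, 200

group_a = [100.0] * 500   # initially advantaged group
group_b = [10.0] * 500    # initially disadvantaged group

def step(wealth):
    # Mean-reverting update: persistent part plus symmetric random noise.
    return [rho * w + (1 - rho) * mu + random.gauss(0, sigma) for w in wealth]

def mean(xs):
    return sum(xs) / len(xs)

initial_gap = mean(group_a) - mean(group_b)
for _ in range(periods):
    group_a = step(group_a)
    group_b = step(group_b)
final_gap = mean(group_a) - mean(group_b)

print(initial_gap, final_gap)  # the group gap shrinks toward zero
```

Note the load-bearing assumption: if rho were exactly 1 (a pure random walk), the initial gap would persist in expectation, so the argument depends on outcomes being only partially persistent rather than fully heritable.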

Fourth, there is the problem of political backlash. Whenever political institutions are weak enough to be modified towards what is purported to be a ‘substantive’ or outside-view neutrality, that will always be because some political coalition has attained enough power to swing the pendulum in its favor. The more explicit the coalition is about doing this, the more it will mobilize its enemies to try to swing the pendulum back the other way. The result is war by other means, the outcome of which will never be fair, because in war there are many who wind up dead or injured.

I am arguing for a centrist position on these matters, one that favors procedural neutrality in most institutions. This is not because I don’t care about substantive, “outside view” inequality. On the contrary, it’s because I believe that partisan bickering that explicitly undermines the inside neutrality of institutions undermines substantive equality. Partisan bickering over the scraps within narrow institutional frames is a distraction from, for example, the way the most wealthy avoid taxes while the middle class pays even more. There is a reason why political propaganda that induces partisan divisions is a weapon. Agreement about procedural neutrality is a core part of civic unity that allows for collective action against the very most abusively powerful.


frustrations with machine ethics

It’s perhaps because of the contemporary two cultures problem of tech and the humanities that machine ethics is in such a frustrating state.

Today I read danah boyd’s piece in The Message about technology as an arbiter of fairness. It’s another baffling conflation of data science with neoliberalism. This time, the assertion was that the ideology of the tech industry is neoliberalism, and hence that its idea of ‘fairness’ is individualist and set against the social fabric. It’s not clear what backs up these kinds of assertions. They are more or less refuted by the fact that industrial data science is obsessed with our networks of ties, for marketing reasons. If anybody understands the failure of the myth of the atomistic individual, it’s “tech folks,” a category boyd uses to capture, I guess, everyone from marketing people at Google to venture capitalists to startup engineers to IBM researchers. You know, the homogeneous category that is “tech folks.”

This kind of criticism makes the mistake of thinking that a historic past is the right way to understand a rapidly changing present that is often more technically sophisticated than the critics understand. But critical academics have fallen into the trap of critiquing neoliberalism over and over again. One problem is that tech folks don’t spend a ton of time articulating their ideology in ways that are convenient for pop culture critique. Often their business models require rather sophisticated understandings of the market, etc. that don’t fit readily into that kind of mold.

What’s needed is substantive progress in computational ethics. OK, so algorithms are ethically and politically important. What politics would you like to see enacted, and how do you go about implementing them? How do you do it in a way that attracts new users and is competitively funded, so that it can keep up with the changing technology with which we access the web? These are the real questions. There is so little effort spent trying to answer them. Instead there’s just an endless series of op-eds bemoaning the way things continue to be bad, because that is easier than having agency about making things better.