Digifesto

Category: academia

Reflections on the Gebru/Google dismissal

I’ve decided to write up some reactions to the dismissal of Dr. Gebru from Google’s Ethical AI team. I have hesitated thus far because the issues revolve around a particular person who has a right to privacy, because of the possible professional consequences of speaking out (this research area is part of my professional field and the parties involved, including Google’s research team and Dr. Gebru, are all of greater stature in it than myself), because there is much I don’t know about the matter (I have no inside view of the situation at Google, for example), because the facts of the case look quite messy to me, with many different issues at stake, and because the ethical issues raised by the case are substantive and difficult. It has also been a time with pressing personal responsibilities and much-needed holiday rest.

I’m also very aware that one framing of the event is that it is about diversity and representation within the AI ethics research community. There are some who believe that white (etc.) men are over-represented in the field. Implicitly, if I write publicly about this situation, representing, as it were, myself, I am part of that problem. More on that in a bit.

Despite all of these reasons, I think it is best to write something. The event has been covered by many mainstream news outlets, and Dr. Gebru has been about as public as is possible with her take on the situation. She is, I believe, a public figure in this respect. I’ve written before on related topics and controversies within this field and have sometimes been told by others that they have found my writing helpful. As for the personal consequences to myself, I try to hold myself to a high standard of courage in my research work and writing. I wouldn’t be part of this technology ethics field if I did not.

So what do I think?

First, I think there has been a lot of thoughtful coverage of the incident by others. Here are some links to that work. So far, Hanna and Whitaker’s take is the most forceful in its analysis of the meaning of the incident for a “crisis in AI”. In their analysis:

  • There is a crisis, which involves:
    • A mismatch between those benefiting from and creating AI — “the corporations and the primarily white male researchers and developers” — and those most likely to be harmed by AI — “BIPOC people, women, religious and gender minorities, and the poor” because of “structural barriers”. A more diverse research community is needed to “[center] the perspectives and experiences of those who bear the harms of these technologies.”
    • The close ties between tech companies and ostensibly independent academic institutions that homogenize the research community, obscure incentives, and dull what might be a more critical research agenda.
  • To address this crisis:
    • Tech workers should form an inclusive union that pushes back on Big Tech for ethical concerns.
    • Funding for independent critical research, with greater guaranteed access to company resources, should be raised through a tax on Big Tech.
    • Further regulations should be passed to protect whistleblowers, prevent discrimination, and safeguard consumer privacy and the contestability of AI systems.

These lines of argument capture most of what I’ve seen more informally in Twitter conversations about this issue. As far as their practical recommendations go, I think a regulatory agency for Big Tech, analogous to the Securities and Exchange Commission for the financial sector, with a federal research agency analogous to the Office of Financial Research, is the right way to go on this. I’m more skeptical about the idea of a tech workers union, but that is not the main focus of this post. This post is about Dr. Gebru’s dismissal and its implications.

I think it’s best if I respond to the situation with a series of questions.

First, was Dr. Gebru wrongfully terminated from Google? Wrongful termination occurs when an employer fires an employee for an unlawful reason, such as retaliation for an anti-discrimination complaint or for whistleblowing. The heart of the matter is that Dr. Gebru’s dismissal “smells like” wrongful termination: Dr. Gebru was challenging Google’s diversity programs internally; she was reporting environmental costs of AI in her research in a way that was perhaps like whistleblowing. The story is complicated by the fact that she was negotiating with Google, with the possibility of resignation as leverage, when she was terminated.

I’m not a lawyer. I have come to appreciate the importance of the legal system rather late in my research career. Part of that appreciation is seeing how the law has largely anticipated the ethical issues raised by “AI” already. I am surprised, however, that the phrase “wrongful termination” has not been raised in journalism covering Dr. Gebru’s dismissal. It seems like the closest legal analog. Could, say, a progressively oriented academic legal clinic help Dr. Gebru sue Google over this? Does she have a case?

These are not idle questions. If the case is to inform better legal protection of corporate AI researchers and other ‘tech workers’, then it is important to understand the limits of current wrongful termination law, whether these limits cover the case of Dr. Gebru’s dismissal, and if not, what expansions to this law would be necessary to cover it.

Second, what is corporate research (and corporate funded research) for? The field of “Ethical AI” has attracted people with moral courage and conviction who probably could be doing other things if they did not care so much. Many people enter academic research hoping that they can somehow, through their work, make the world a better place. The ideal of academic freedom is that it allows researchers to be true to their intellectual commitments, including their ethical commitments. It is probably true that “critical” scholarship survives better in the academic environment. But what is corporate research for? Should we reasonably expect a corporation’s research arm to challenge that corporation’s own agendas?

I’ve done corporate research. My priorities were pretty clear in that context: I was supposed to make the company I worked for look smart. I was supposed to develop new technical prototypes that could be rolled into products. I was supposed to do hard data wrangling and analysis work to suss out what kinds of features would be possible to build. My research could make the world a better place, but my responsibility to my employer was to make it a better place by improving our company’s offerings.

I’ve also done critical work. Critical work tends not to pay as well as corporate research, for obvious reasons. I’ve mainly done this from academic positions, or as a concerned citizen writing on my own time. It is striking that Hanna and Whitaker’s analysis follows through to the conclusion that critical researchers want to get paid. Their rationale is that society should reinvest the profits of Big Tech companies into independent research that focuses on reducing Big Tech harms. This would be like levying a tax on Big Tobacco to fund independent research into the health effects of smoking. This really does sound like a good idea to me.

But this idea would sound good to me even without Dr. Gebru’s dismissal from Google. To conflate the two issues muddies the water for me. There is one other salient detail: some of the work that brought Dr. Gebru into research stardom was her now well-known audits of facial recognition technology developed by IBM and Microsoft. Google happily hired her. I wonder if Google would have minded if Dr. Gebru continued to do critical audits of Microsoft and IBM from her Google position. I expect Google would have been totally fine with this: one purpose of corporate research could be digging up dirt on your competition! This implies that it’s not entirely true that you can’t do good critical work from a corporate job. Maybe this kind of opposition research should be encouraged and protected (by making Big Tech collusion to prevent such research illegal).

Third, what is the methodology of AI ethics research? There are two schools of thought in research. There’s the school of thought that what’s most important about research is the concrete research question and that any method that answers the research question will do. Then there’s the school of thought that says what’s most important about research is the integrity of research methods and institutions. I’m of the latter school of thought, myself.

One thing that is notable about top-tier AI ethics research today is the enormously broad interdisciplinary range of its publication venues. I would argue that this interdisciplinarity is not intellectually coherent but rather reflects the broad range of disciplinary and political interests that have been able to rally around the wholly ambiguous idea of “AI ethics”. It doesn’t help that key terms within the field, such as “AI” and “algorithm”, are distorted to fit whatever agenda researchers want for them. The result is a discursive soup which lacks organizing logic.

In such a confused field, it’s not clear what conditions research needs to meet in order to be “good”. In practice, this means that the main quality control and gatekeeping mechanisms, the publishing conferences, operate through an almost anarchic process of peer review. Adjacent to this review process is the “disciplinary collapse” of social media, op-eds, and whitepapers, which serve various purposes of self-promotion, activism/advocacy, and marketing. There is little in this process to incentivize the publication of work that is correct, or to set the standards of what that would be.

This puts AI ethics researchers in a confusing position. Google, for example, can plausibly set its own internal standards for research quality because the publication venues have not firmly set their own. Was Dr. Gebru’s controversial paper up to Google’s own internal publication standards, as Google has alleged? Or did they not want their name on it only because it made them look bad? I honestly don’t know. But even though I have written quite critically about corporate AI “ethics” approaches before, I actually would not be surprised if a primarily “critical” researcher did not do a solid literature review of the engineering literature on AI energy costs before writing a piece about it, because the epistemic standards of critical scholarship and engineering are quite different.

There has been a standard floated implicitly or explicitly by some researchers in the AI ethics space. I see Hanna and Whitaker as aligned with this standard and will borrow their articulation. In this view, the purpose of AI ethics research is to surface the harms of AI so that they may be addressed. The reason why these harms are not obvious to AI practitioners already is the lack of independent critical scholarship by women, BIPOC, the poor, and other minorities. Good AI ethics work is therefore work done by these minorities such that it expresses their perspective, critically revealing faults in AI systems.

Personally, I have a lot of trouble with this epistemic standard. According to it, I really should not be trying to work on AI ethics research. I am simply, because of my subject position, unable to do good work. Dr. Gebru, a Black woman, on the other hand, will always do good work according to this standard.

I want to be clear that I have read some of Dr. Gebru’s work and believe it deserves all of its accolades for reasons that are not conditional on her being a Black woman. I also understand why her subject position has primed her to do the kind of work that she has done; she is a trailblazer because of who she is. But if the problem faced by the AI ethics community is that its institutions have blended corporate and academic research interests so much that the incentives are obscure and the playing field benefits the corporations, who have access to greater resources and so on, then this problem will not be solved by allowing corporations to publish whatever they want as long as the authors are minorities. This would be falling into the trap of what Nancy Fraser calls progressive neoliberalism, which incentivizes corporate tokenization of minorities. (I’ve written about this before.)

Rather, the way to level the playing field between corporate research and independent or academic research is to raise the epistemic standard of the publication venues in a way that supports independent or academic research. Hanna and Whitaker argue that “[r]esearchers outside of corporate environments must be guaranteed greater access to technologies currently hidden behind claims of corporate secrecy, such as access to training data sets, and policies and procedures related to data annotation and content moderation.” Nobody, realistically, is going to guarantee outside researchers access to corporate secrets. However, research publication venues (like conferences) can change their standards to mandate open science practices: access to training data sets, reproducibility of results, no dependence on corporate secrets, and so on.

A tougher question for AI ethics research in particular is the question of how to raise epistemic standards for normative research in a way that doesn’t beg the question on interpretations of social justice or devolve into agonistic fracturing on demographic grounds. There are of course academic disciplines with robust methods for normative work; they are not always in communication with each other. I don’t think there’s going to be much progress in the AI ethics field until a sufficient synthesis of feminist epistemology and STEM methods has been worked out. I fear that is not going to happen quickly because it would require dropping some of what’s dear to situated epistemologies of the progressive AI ethics wing. But I may be wrong there. (There was some work along these lines by methodologists some years ago under the label “Human-Centered Data Science”.)

Lastly, whatever happened to the problem of energy costs of AI, and climate change? To me, what was perhaps most striking about the controversial paper at the heart of Dr. Gebru’s dismissal was that it wasn’t primarily about representation of minorities. Rather, it was (I’ve heard; I haven’t read the paper yet) about energy costs of AI, which is something that, yes, even white men can be concerned about. If I were to give my own very ungenerous, presumptuous, and truly uninformed interpretation of what the goings-on at Google were all about, I would put it this way: Google hired Dr. Gebru to do progressive hit pieces on competitors’ AI products like she had done for Microsoft and IBM, and to keep the AI ethics conversation firmly in the territory of AI biases. Google has the resources to adjust its models to reduce these harms, get ahead of AI fairness regulation, and compete on wokeness to the woke market segments. But Dr. Gebru’s most recent paper reframes the AI ethics debate in terms of a universal problem of climate change which has a much broader constituency, and which is actually much closer to Google’s bottom line. Dr. Gebru has the star power to make this story go mainstream, but Google wants to carve out its own narrative here.

It will be too bad if the fallout of Dr. Gebru’s dismissal is a reversion of the AI ethics conversation back to the well-trod questions of researcher diversity, worker protection, and privacy regulation, when the energy cost and climate change questions provide a much broader base of interest from which to refine and consolidate the AI ethics community. Maybe we should be asking: what standards should conferences be holding researchers to when they make claims about AI energy costs? What are the standards of normative argumentation for questions of carbon emission, which necessarily transcend individual perspectives, while of course also impacting different populations disparately? These are questions everybody should care about.
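
As an aside on what standardizing such claims might look like: published estimates of AI training energy costs generally reduce to a simple product of hardware count, average power draw, runtime, datacenter overhead, and grid carbon intensity, and the contestable part is where each of those numbers comes from. Below is a minimal back-of-envelope sketch of that arithmetic in Python; the function and every figure in it are illustrative assumptions of mine, not values drawn from the paper under discussion or any other source.

```python
# A back-of-envelope estimate of training energy and emissions.
# All parameters below are illustrative assumptions, not measurements.

def training_footprint(gpu_count, avg_power_watts, hours, pue, kg_co2_per_kwh):
    """Estimate energy (kWh) and emissions (kg CO2e) for one training run."""
    energy_kwh = gpu_count * avg_power_watts * hours / 1000.0  # watt-hours -> kWh
    energy_kwh *= pue  # datacenter overhead (Power Usage Effectiveness)
    co2_kg = energy_kwh * kg_co2_per_kwh  # grid carbon intensity
    return energy_kwh, co2_kg

if __name__ == "__main__":
    # Hypothetical run: 512 accelerators averaging ~300 W for two weeks,
    # PUE of 1.1, and a grid intensity of ~0.4 kg CO2e per kWh.
    kwh, co2 = training_footprint(
        gpu_count=512, avg_power_watts=300, hours=14 * 24,
        pue=1.1, kg_co2_per_kwh=0.4,
    )
    print(f"~{kwh:,.0f} kWh, ~{co2 / 1000:,.1f} tonnes CO2e")
```

A publication standard might then amount to requiring authors to state which of these inputs were measured and which were assumed, and how the carbon intensity figure was chosen.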

EDIT: I’m sensitive to the point that suggesting that Big Tech’s shift of the AI ethics frame towards ‘fairness’ in a socially progressive sense is somewhat disingenuous may itself be read as a rejection of those progressive politics, especially in the absence of evidence. I do not reject those politics. This journalistic article by Karen Hao provides some evidence of how another Big Tech company, Facebook, has deliberately kept AI fairness in the ethical frame and discouraged frames more costly to its bottom line, like the ethics of preventing disinformation.

Notes about “Data Science and the Decline of Liberal Law and Ethics”

Jake Goldenfein and I have put up on SSRN our paper, “Data Science and the Decline of Liberal Law and Ethics”. I’ve mentioned it on this blog before as something I’m excited about. It’s also been several months since we finalized it, and I wanted to quickly jot down some notes about it based on considerations going into it and since then.

The paper was the result of a long and engaged collaboration with Jake which started from a somewhat different place. We considered the question, “What is sociopolitical emancipation in the paradigm of control?” That was a mouthful, but it captured what we were going for:

  • Like a lot of people today, we are interested in the political project of freedom. Not just freedom in narrow, libertarian senses that have proven to be self-defeating, but in broader senses of removing social barriers and systems of oppression. We were ambivalent about the form that would take, but figured it was a positive project almost anybody would be on board with. We called this project emancipation.
  • Unlike a certain prominent brand of critique, we did not begin from an anthropological rejection of the realism of foundational mathematical theory from STEM and its application to human behavior. In this paper, we did not make the common move of suggesting that the source of our ethical problems is one that can be solved by insisting on the terminology or methodological assumptions of some other discipline. Rather, we took advances in, e.g., AI as real scientific accomplishments that are telling us how the world works. We called this scientific view of the world the paradigm of control, due to its roots in cybernetics.

I believe our work is making a significant contribution to the “ethics of data science” debate because it is quite rare to encounter work that is engaged with both projects. It’s common to see STEM work with no serious moral commitments or valence. And it’s common to see the delegation of what we would call emancipatory work to anthropological and humanistic disciplines: the STS folks, the media studies people, even critical X (race, gender, etc.) studies. I’ve discussed the limitations of this approach, however well-intentioned, elsewhere. Often, these disciplines argue that the “unethical” aspect of STEM stems from its methods, discourses, etc. On this view, to analyze things in terms of their technical and economic properties is to lose the essence of ethics, which is aligned with anthropological methods that are grounded in respectful, phenomenological engagement with their subjects.

This division of labor between STEM and anthropology has, in my view (I won’t speak for Jake) made it impossible to discuss ethical problems that fit uneasily in either field. We tried to get at these. The ethical problem is instrumentality run amok because of the runaway economic incentives of private firms combined with their expanded cognitive powers as firms, a la Herbert Simon.

This is not a terribly original point and we hope it is not, ultimately, a fringe political position either. If Martin Wolf can write for the Financial Times that there is something threatening to democracy about “the shift towards the maximisation of shareholder value as the sole goal of companies and the associated tendency to reward management by reference to the price of stocks,” so can we, and without fear that we will be targeted in the next red scare.

So what we are trying to add is this: there is a cognitivist explanation for why firms can become so enormously powerful relative to individual “natural persons”, one that is entirely consistent with the STEM foundations that have become dominant, most notably at UC Berkeley, as “data science”. And, we want to point out, the consequences of that knowledge, which we take to be scientific, run counter to the liberal paradigm of law and ethics. This paradigm, grounded in individual autonomy and privacy, is largely the paradigm animating anthropological ethics! So we are, a bit obliquely, explaining why the data science ethics discourse has gelled in the ways that it has.

We are not satisfied with the current state of ‘data science ethics’ because, to the extent that it clings to liberalism, we fear that it misses and even obscures the point, which can best be understood in a different paradigm.

We left as unfinished the hard work of figuring out what the new, alternative ethical paradigm that took cognitivism, statistics, and so on seriously would look like. There are many reasons beyond the conference publication page limit why we were unable to complete the project. The first of these is that, as I’ve been saying, it’s terribly hard to convince anybody that this is a project worth working on in the first place. Why? My view of this may be too cynical, but my explanations are that either (a) this is an interdisciplinary third rail because it upsets the balance of power between different academic departments, or (b) this is an ideological third rail because it successfully identifies a contradiction in the current sociotechnical order in a way that no individual is incentivized to recognize, because that order incentivizes individuals to disperse criticism of its core institutional logic of corporate agency, or (c) it is so hard for any individual to conceive of corporate cognition because of how it exceeds the capacity of human understanding that speaking in this way sounds utterly speculative to a lot of people. The problem is that it requires attributing cognitive and adaptive powers to social forms, and a successful science of social forms is, at best, in the somewhat gnostic domain of complex systems research.

Complex systems researchers are rarely engaged in technology policy, but I think that is the frontier.

References

Benthall, Sebastian and Goldenfein, Jake, Data Science and the Decline of Liberal Law and Ethics (June 22, 2020). Ethics of Data Science Conference – Sydney 2020 (forthcoming). Available at SSRN: https://ssrn.com/abstract=

“Private Companies and Scholarly Infrastructure”

I’m proud to link to this blog post on the Cornell Tech Digital Life Initiative blog by Jake Goldenfein, Daniel Griffin, and Eran Toch, and myself.

The academic funding scandals plaguing 2019 have highlighted some of the more problematic dynamics between tech industry money and academia (see e.g. Williams 2019, Orlowski 2017). But the tech industry’s deeper impacts on academia and knowledge production actually stem from the entirely non-scandalous relationships between technology firms and academic institutions. Industry support heavily subsidizes academic work. That support comes in the form of direct funding for departments, centers, scholars, and events, but also through the provision of academic infrastructures like communications platforms, computational resources, and research tools. In light of the reality that infrastructures are themselves political, it is imperative to unpack the political dimensions of scholarly infrastructures provided by big technology firms, and question whether they might problematically impact knowledge production and the academic field more broadly.

Goldenfein, Benthall, Griffin, and Toch, “Private Companies and Scholarly Infrastructure – Google Scholar and Academic Autonomy”, 2019

Among other topics, the post is about how the reorientation of academia onto commercial platforms possibly threatens the autonomy that is a necessary condition of the objectivity of science (Bourdieu, 2004).

This is perhaps a cheeky argument. Questioning whether Big Tech companies have an undue influence on academic work is not a popular move because so much great academic work is funded by Big Tech companies.

On the other hand, calling into question the ethics of Big Tech companies is now so mainstream that it is actively debated in the Democratic 2020 primary by front-running candidates. So we are well within the Overton window here.

On a philosophical level (which is not the primary orientation of the joint work), I wonder how much these concerns are about the relationship of capitalist modes of production and ideology with academic scholarship in general, and how much this specific manifestation (Google Scholar’s becoming the site of a disciplinary collapse (Benthall, 2015) in scholarly metrics) is significant. Like many contemporary problems in society and technology, the “problem” may be that a technical intervention that might at one point have seemed desirable to challengers (in the Fligstein (1997) field theory sense) is now having a political impact that is questioned and resisted by incumbents. I.e., while there has always been a critique of the system, the system has changed and so the critique comes from a different social source.

References

Benthall, S. (2015). Designing networked publics for communicative action. Interface, 1(1), 3.

Bourdieu, P. (2004). Science of science and reflexivity. Polity.

Fligstein, N. (1997). Social skill and institutional theory. American Behavioral Scientist, 40(4), 397-405.

Orlowski, A. (2017). Academics “funded” by Google tend not to mention it in their work. The Register, 13 July 2017.

Williams, O. (2019). How Big Tech funds the debate on AI ethics. New Statesman America, 6 June 2019. <https://www.newstatesman.com/science-tech/technology/2019/06/how-big-tech-funds-debate-ai-ethics>

computational institutions

As the “AI ethics” debate metastasizes in my newsfeed and scholarly circles, I’m struck by the frustrations of technologists and ethicists who seem to be speaking past each other.

While these tensions play out along disciplinary fault-lines, for example, between technologists and science and technology studies (STS), the economic motivations are more often than not below the surface.

I believe this is to some extent a problem of nomenclature, which is again a function of the disciplinary rifts involved.

Computer scientists work, generally speaking, on the design and analysis of computational systems. Many see their work as bounded by the demands of the portability and formalizability of technology (see Selbst et al., 2019). That’s their job.

This is endlessly unsatisfying to critics of the social impact of technology. STS scholars will insist on changing the subject to “sociotechnical systems”, a term that means something very general: the assemblage of people and artifacts that are not people. This, fairly, removes focus from the computational system and embeds it in a social environment.

A goal of this kind of work seems to be to hold computational systems, as they are deployed and used socially, accountable. It must be said that once this happens, we are no longer talking about the specialized domain of computer science per se. It is a wonder, then, why STS scholars so often pick fights with computer scientists, when their true beef seems to be with the businesses that use and deploy technology.

The AI Now Institute has attempted to rebrand the problem by discussing “AI Systems” as, roughly, those sociotechnical systems that use AI. This is on the one hand more specific: AI is a particular kind of technology, and perhaps it has particular political consequences. But their analysis of AI systems quickly overflows into sweeping claims about “the technology industry”, and it’s clear that most of their recommendations have little to do with AI, and indeed are trying, once again, to change the subject from discussion of AI as a technology (a computer science research domain) to a broader set of social and political issues that do, in fact, have their own disciplines where they have been researched for years.

The problem, really, is not that any particular conversation is not happening, or is being excluded, or is being shut down. The problem is that the engineering focused conversation about AI-as-a-technology has grown very large and become an awkward synecdoche for the rise of major corporations like Google, Apple, Amazon, Facebook, and Netflix. As these corporations fund and motivate a lot of research, there’s a question of who is going to get pieces of the big pie of opportunity these companies represent, either in terms of research grants or impact due to regulation, education, etc.

But there are so many aspects of these corporations that are addressed by neither the term “sociotechnical system”, which is just so broad, nor “AI system”, which is as broad and rarely means what you’d think it does (that the system uses AI is incidental if not unnecessary; what matters is that it’s a company operating in a core social domain via primarily technological user interfaces). Neither term gets at the unit of analysis that’s really of interest.

An alternative: “computational institution”. Computational, in the sense of computational cognitive science and computational social science: it denotes the essential role of the theory of computation and statistics in explaining the behavior of the phenomenon being studied. “Institution”, in the sense of institutional economics: the unit is a firm, which comprises people, their equipment, and their economic relations to their suppliers and customers. An economic lens would immediately bring into focus “the data heist” and the “role of machines” that Nissenbaum is concerned are being left to the side.

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics” <– My dissertation

In the last two weeks, I’ve completed, presented, and filed my dissertation, and commenced as a doctor of philosophy. In a word, I’ve PhinisheD!

The title of my dissertation is attention-grabbing, inviting, provocative, and impressive:

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics”

If you’re reading this, you are probably wondering, “How can I drop everything and start reading that hot dissertation right now?”

Look no further: here is a link to the PDF.

You can also check out this slide deck from my “defense”. It covers the highlights.

I’ll be blogging about this material as I break it out into more digestible forms over time. For now, I’m obviously honored by any interest anybody takes in this work and happy to answer questions about it.

Notes on Clark Kerr’s “The ‘City of Intellect’ in a Century for Foxes?”, in The Uses of the University 5th Edition

I am in my seventh and absolutely, definitely last year of a doctoral program and so have many questions about the future of higher education and whether or not I will be a part of it. For insight, I have procured an e-book copy of Clark Kerr’s The Uses of the University (5th Edition, 2001). Clark Kerr was the twelfth president of the University of California system and became famous, among other things, for his candid comments on university administration, which included such gems as

“I find that the three major administrative problems on a campus are sex for the students, athletics for the alumni and parking for the faculty.”

…and…

“One of the most distressing tasks of a university president is to pretend that the protest and outrage of each new generation of undergraduates is really fresh and meaningful. In fact, it is one of the most predictable controversies that we know. The participants go through a ritual of hackneyed complaints, almost as ancient as academe, while believing that what is said is radical and new.”

The Uses of the University is a collection of lectures on the topic of the university, most of which were given in the second half of the 20th century. The most recent edition contains a lecture given in the year 2000, after Kerr had retired from administration, but anticipating the future of the university in the 21st century. The title of the lecture is “The ‘City of Intellect’ in a Century for Foxes?”, and it is encouragingly candid and prescient.

To my surprise, Kerr approaches the lecture as a forecasting exercise. Intriguingly, Kerr employs the hedgehog/fox metaphor from Isaiah Berlin in a lecture about forecasting five years before the publication of Tetlock’s 2005 book Expert Political Judgment (review link), which used the fox/hedgehog distinction to cluster properties that were correlated with political experts’ predictive power. Kerr’s lecture is structured partly as the description of a series of future scenarios, reminiscent of scenario planning as a forecasting method. I didn’t expect any of this, and it goes to show perhaps how pervasive scenario thinking was as a 20th century rhetorical technique.

Kerr makes a number of warnings about the university in the 21st century, especially with respect to the glory of the university in the 20th century. He makes a historical case for this: universities in the 20th century thrived on new universal access to students, federal investment in universities as the sites of basic research, and general economic prosperity. He doesn’t see these guaranteed in the 21st century, though he also makes the point that in official situations, the only thing a university president should do is discuss the past with pride and the future with apprehension. He has a rather detailed analysis of the incentives guiding this rhetorical strategy as part of the lecture, which makes you wonder how much salt to take the rest of the lecture with.

What are the warnings Kerr makes? Some are a continuation of the problems universities experienced in the 20th century. Military and industrial research funding changed the roles of universities away from liberal arts education into research shops. This was not a neutral process. Undergraduate education suffered, and in 1963 Kerr predicted that this slackening of the quality of undergraduate education would lead to student protests. He was half right; students instead turned their attention externally to politics. Under these conditions, there grew to be a great tension between the “internal justice” of a university that attempted to have equality among its faculty and the permeation of external forces that made more of the professoriate face outward. A period of attempted reforms through “participatory democracy” was “a flash in the pan”, resulting mainly in “the creation of courses celebrating ethnic, racial, and gender diversities.” “This experience with academic reform illustrated how radical some professors can be when they look at the external world and how conservative when they look inwardly at themselves–a split personality.”

This turn to industrial and military funding, and the shift of universities away from training in morality (theology), traditional professions (medicine, law), self-chosen intellectual interest for its own sake, and entrance into elite society, towards training for the labor force (including business administration and computer science), is now quite old–at least 50 years. Among other things, Kerr predicts, this means that we will be feeling the effects of the hollowing out of the education system that happened as higher education deprioritized teaching in favor of research. The baby boomers who went through this era of vocational university education will become, in Kerr’s analysis, an enormous class of retirees by 2030, putting new strain on the economy at large. Meanwhile, without naming computers and the Internet, Kerr acknowledged that the “electronic revolution” is the first major change to affect universities in three hundred years, and could radically alter their role in society. He speaks highly of Peter Drucker, who in 1997 was already calling the university “a failure” that would be made obsolete by long-distance learning.

An intriguing comment on aging baby boomers, which Kerr discusses under the heading “The Methuselah Scenario”, is that the political contest between retirees and new workers will break down partly along racial lines: “Nasty warfare may take place between the old and the young, parents and children, retired Anglos and labor force minorities.” Almost twenty years later, this line makes me wonder how much current racial tensions are connected to age and aging. Have we seen the baby boomer retirees rise as a political class to vigorously defend the welfare state from plutocratic sabotage? Will we?

Kerr discusses the scenario of the ‘disintegration of the integrated university’. The old model of medicine, agriculture, and law integrated into one system is coming apart as external forces become controlling factors within the university. Kerr sees this in part as a source of ethical crises for universities.

“Integration into the external world inevitably leads to disintegration of the university internally. What are perceived by some as the injustices in the external labor market penetrate the system of economic rewards on campus, replacing policies of internal justice. Commitments to external interests lead to internal conflicts over the impartiality of the search for truth. Ideologies conflict. Friendships and loyalties flow increasingly outward. Spouses, who once held the academic community together as a social unit, now have their own jobs. “Alma Mater Dear” to whom we “sing a joyful chorus” becomes an almost laughable idea.”

A factor in this disintegration is globalization, which Kerr identifies with the mobility of those professors who are most able to get external funding. These professors have increased bargaining power and can use “the banner of departmental autonomy” to fight among themselves for industrial contracts. Without oversight mechanisms, “the university is helpless in the face of the combined onslaught of aggressive industry and entrepreneurial faculty members”.

Perhaps most fascinating for me, because it resonates with some of my more esoteric passions, is Kerr’s section on “The fractionalization of the academic guild”. Subject matter interest breaks knowledge into tiny disconnected topics–“Once upon a time, the entire academic enterprise originated in and remained connected to philosophy.” The tension between “internal justice” and the “injustices of the external labor market” creates a conflict over monetary rewards. Poignantly, “fractionalization also increases over differing convictions about social justice, over whether it should be defined as equality of opportunity or equality of results, the latter often taking the form of equality of representation. This may turn out to be the penultimate ideological battle on campus.”

And then:

“The ultimate conflict may occur over models of the university itself, whether to support the traditional or the “postmodern” model. The traditional model is based on the enlightenment of the eighteenth century–rationality, scientific processes of thought, the search for truth, objectivity, “knowledge for its own sake and for its practical applications.” And the traditional university, to quote the Berkeley philosopher John Searle, “attempts to be apolitical or at least politically neutral.” The university of postmodernism thinks that all discourse is political anyway, and it seeks to use the university for beneficial rather than repressive political ends… The postmodernists are attempting to challenge certain assumptions about the nature of truth, objectivity, rationality, reality, and intellectual quality.”

“… Any further politicization of the university will, of course, alienate much of the public at large. While most acknowledge that the traditional university was partially politicized already, postmodernism will further raise questions of whether the critical function of the university is based on political orientation rather than on nonpolitical scientific analysis.”

I could go on endlessly about this topic; I’ll try to be brief. First, as per Lyotard’s early analysis of the term, postmodernism is as much a result of the permeation of the university by industrial interests as anything else. Second, we are seeing, right now today in Congress and on the news etc., the eroded trust that a large portion of the public has in university “expertise”, as they assume (having perhaps internalized a reductivist version of the postmodern message despite, or maybe because, they were being taught by teaching assistants instead of professors) that the professoriate is politically biased. And now the students are in revolt over Free Speech again as a result.

Kerr entertains for a paragraph the possibility of a Hobbesian doomsday free-for-all over the university before considering more mundane possibilities such as a continuation of the status quo. Adapting to new telecommunications (including “virtual universities”), new amazing discoveries in biological sciences, and higher education as a step in mid-career advancement are all in Kerr’s more pragmatic view of the future. The permeability of the university can bring good as well as bad as it is influenced by traffic back and forth across its borders. “The drawbridge is now down. Who and what shall cross over it?”

Kerr counts five major wildcards determining the future of the university. The first is overall economic productivity; the second is fluctuations in the returns to higher education. The third is the United States’ role in the global economy, “as other nations or unions of nations (for example, the EU) may catch up with and even surpass it. The quality of education and training for all citizens will be [central] to this contest. The American university may no longer be supreme.” The fourth is student unrest turning universities into the “independent critic”. And the fifth is the battles within the professoriate, “over academic merit versus social justice in treatment of students, over internal justice in the professional reward system versus the pressures of external markets, over the better model for the university–modern or post-modern.”

He concludes with three wishes for the open-minded, cunning, savvy administrator of the future, the “fox”:

  1. Careful study of new information technologies and their role.
  2. “An open, in-depth debate…between the proponents of the traditional and the postmodern university instead of the sniper shots of guerilla warfare…”
  3. An “in-depth discussion…about the ethical systems of the future university”. “Now the ethical problems are found more in the flow of contacts between the academic and the external worlds. There have never been so many ethical problems swirling about as today.”

Contextual Integrity as a field

There was a nice small gathering of nearby researchers (and one important call-in) working on Contextual Integrity at Princeton’s CITP today. It was a good opportunity to share what we’ve been working on and make plans for the future.

There was a really nice range of different contributions: systems engineering for privacy policy enforcement, empirical survey work testing contextualized privacy expectations, a proposal for a participatory design approach to identifying privacy norms in marginalized communities, a qualitative study on how children understand privacy, and an analysis of the privacy implications of the Cybersecurity Information Sharing Act, among other work.

What was great is that everybody was on the same page about what we were after: getting a better understanding of what privacy really is, so that we can design policies, educational tools, and technologies that preserve it. For one reason or another, the people in the room had been attracted to Contextual Integrity. Many of us have reservations about the theory in one way or another, but we all see its value and potential.

One note of consensus was that we should try to organize a workshop dedicated specifically to Contextual Integrity, widening what we accomplished today by bringing in more researchers. Today’s meeting was a convenience sample, leaving out a lot of important perspectives.

Another interesting thing that happened today was a general acknowledgment that Contextual Integrity is not a static framework. As a theory, it is subject to change as scholars critique and contribute to it through their empirical and theoretical work. A few of us are excited about the possibility of a Contextual Integrity 2.0, extending the original theory to fill theoretical gaps that have been identified in it.

I’d articulate the aspiration of the meeting today as being about letting Contextual Integrity grow from being a framework into a field–a community of people working together to cultivate something, in this case, a kind of knowledge.

education and intelligibility

I’ve put my finger on the problem I’ve had with scholarly discourse about intelligibility over the years.

It is so simple, really.

Sometimes, some group of scholars, A, will argue that the work of another group of scholars, B, is unintelligible. Because it is unintelligible, it should not be trusted. Rather, it has to be held accountable to the scholars in A.

Typically, the scholars in B are engaged in some technical science, while the scholars in A are writers.

Scholars in B meanwhile say: well, if you want to understand what we do, then you could always take some courses in it. Here (in the modern day): we’ve made an on-line course which you can take if you want to understand what we do.

The existence of the on-line course or whatever other resources expressing the knowledge of B tend to not impress those in A. If A is persistent, they will come up with reasons why these resources are insufficient, or why there are barriers to people in A making proper use of those resources.

But ultimately, what A is doing is demanding that B make itself understood. What B is offering is education. And though some people are averse to the idea that some things are just inherently hard to understand, that aversion is a minority opinion, rarely held by, for example, those who have undergone arduous training in B.

Generally speaking, if everybody were educated in B, then there wouldn’t be so much of a reason for demanding its intelligibility. Education, not intelligibility, seems to be the social outcome we would really like here. Naturally, only people in B will really understand how to educate others in B; this leaves those in A with little to say except to demand, as a stopgap, intelligibility.

But what if the only way for A to truly understand B is for A to be educated by B? Or to educate itself in something essentially equivalent to B?

Reason returns to Berkeley

I’ve been struck recently by a subtle shift in messaging at UC Berkeley since Carol T. Christ became the university’s Chancellor. Incidentally, she is the first woman chancellor of the university, with a research background in Victorian literature. I think both of these things may have something to do with the bold choice she’s made in recent announcements: the inclusion of reason among the University’s core values.

Notably, the word has made its appearance next to three other terms that have had much more prominence in the university in recent years: equity, inclusion, and diversity. For example, in the following statements:

In “Thoughts on Charlottesville”:

We must now come together to oppose what are dangerous threats to the values we hold dear as a democracy and as a nation. Our shared belief in reason, diversity, equity, and inclusion is what animates and supports our campus community and the University’s academic mission. Now, more than ever, those values are under assault; together we must rise to their defense.

And, strikingly, this message on “Free Speech”:

Nonetheless, defending the right of free speech for those whose ideas we find offensive is not easy. It often conflicts with the values we hold as a community—tolerance, inclusion, reason and diversity. Some constitutionally-protected speech attacks the very identity of particular groups of individuals in ways that are deeply hurtful. However, the right response is not the heckler’s veto, or what some call platform denial. Call toxic speech out for what it is, don’t shout it down, for in shouting it down, you collude in the narrative that universities are not open to all speech. Respond to hate speech with more speech.

The above paragraph comes soon after this one, in which Chancellor Christ defends Free Speech on Millian philosophical grounds:

The philosophical justification underlying free speech, most powerfully articulated by John Stuart Mill in his book On Liberty, rests on two basic assumptions. The first is that truth is of such power that it will always ultimately prevail; any abridgement of argument therefore compromises the opportunity of exchanging error for truth. The second is an extreme skepticism about the right of any authority to determine which opinions are noxious or abhorrent. Once you embark on the path to censorship, you make your own speech vulnerable to it.

This slight change in messaging strikes me as fundamentally wise. In the past year, the university has been wracked by extreme passions and conflicting interests, resulting in bad press externally and I imagine discomfort internally. But this was not unprecedented; the national political bifurcation could take hold at Berkeley precisely because it had for years been, with every noble intention, emphasizing inclusivity and equity without elevating a binding agent that makes diversity meaningful and productive. This was partly due to the influence of late 20th century intellectual trends that burdened “reason” with the historical legacy of those regimes that upheld it as a virtue, which tended to be white and male. There was a time when “reason” was so associated with these powers that the term was used for the purposes of exclusion–i.e. with the claim that new entrants to political and intellectual power were being “unreasonable”.

Times have changed precisely because the exclusionary use of “reason” was a corrupt one; reason in its true sense is impersonal and transcends individual situation even as it is immanent in it. This meaning of reason would be familiar to one steeped in an older literature.

Carol Christ’s wording reflects a 21st century theme that gives me profound confidence in Berkeley’s future: the recognition that reason does not oppose inclusion, but rather demands it, just as scientific logic demands properly sampled data. Perhaps the new zeitgeist at Berkeley has something to do with the new Data Science undergraduate curriculum. Given the state of the world, I’m proud to see reason make a comeback.

Differing ethnographic accounts of the effectiveness of technology

I’m curious as I compare two recent papers, one by Christin [2017] and one by Levy [2015], both about the role of technology in society and backed by ethnographic data.

What interests me is that the two papers both examine the use of algorithms in practice, but they differ in their account of the effectiveness of the algorithms used. Christin emphasizes the way web journalists and legal professionals deliberately undermine the impact of algorithms. Levy discusses how electronic monitoring achieves central organizational control over truckers.

I’m interested in the different framings because, as Christin points out, a central point of contention in the critical scholarship around data and algorithms is the effectiveness of the technology, especially “in practice”. Implicitly if not explicitly, if the technology is not as effective as its advocates say it is, then it is overhyped and this debunking is an accomplishment of the critical and often ethnographic field.

On the other hand, if the technology is effective at control, as Levy’s article argues that it is, then it poses a much more real managerialist threat to workers’ autonomy. Identifying that this is occurring is also a serious accomplishment of the ethnographic field.

What must be recognized, however, is that these two positions contradict each other, at least as general perspectives on data-collection and algorithmic decision-making. The use of a particular technology in a particular place cannot be both so ineffective as to be overhyped and so effective as to constitute a managerialist threat. The substance of the two critiques is at odds with each other, and they call for different pragmatic responses. The former suggests a rhetorical strategy of further debunking, the latter demands a material strategy of changing working conditions.

I have seen both strategies used in critical scholarship, sometimes even in the same article, chapter, or book. I have never seen critical scholars attempt to resolve this difference between themselves using their shared assumptions and methods. I’d like to see more resolution in the ethnographic field on this point.

Correction, 8/10/17:

The apparent tension is resolved on a closer reading of Christin (2017). The argument there is that technology (in the managerialist use common to both papers) is ineffective when its intended use is resisted by those being managed by it.

That shifts the ethnographic challenge to technology away from an attack on the technical quality of the work, that is, on its capacity to accomplish what it is designed to do (which is a non-starter), and toward the uncontroversial proposition that the effectiveness of a technology depends in part on assumptions about how it will be used, and that these assumptions can be violated.

The political question of to what extent these new technologies should be adopted can then be addressed straightforwardly in terms of whether they are fully and properly adopted, or only partially and improperly adopted. Using language like this would be helpful in bridging technical and ethnographic fields.

References

Christin, 2017. “Algorithms in practice: Comparing journalism and criminal justice.” (link)

Levy, 2015. “The Contexts of Control: Information, Power, and Truck-Driving Work.” (link)