Reflections on the Gebru/Google dismissal
I’ve decided to write up some reactions to the dismissal of Dr. Gebru from Google’s Ethical AI team. I have hesitated thus far for several reasons: the issues revolve around a particular person who has a right to privacy; there are possible professional consequences of speaking out (this research area is part of my professional field, and the parties involved, including Google’s research team and Dr. Gebru, are all of greater stature in it than myself); there is much I don’t know about the matter (I have no inside view of the situation at Google, for example); the facts of the case look quite messy to me, with many different issues at stake; and the ethical issues raised by the case are substantive and difficult. It has also been a time of pressing personal responsibilities and much-needed holiday rest.
I’m also very aware that one framing of the event is that it is about diversity and representation within the AI ethics research community. There are some who believe that white etc. men are over-represented in the field. Implicitly, if I write publicly about this situation, representing, as it were, myself, I am part of that problem. More on that in a bit.
Despite all of these reasons, I think it is best to write something. The event has been covered by many mainstream news outlets, and Dr. Gebru has been about as public as is possible with her take on the situation. She is, I believe, a public figure in this respect. I’ve written before on related topics and controversies within this field and have sometimes been told by others that they have found my writing helpful. As for the personal consequences to myself, I try to hold myself to a high standard of courage in my research work and writing. I wouldn’t be part of this technology ethics field if I did not.
So what do I think?
First, I think there has been a lot of thoughtful coverage of the incident by others. Here are some links to that work. So far, Hanna and Whitaker’s take is the most forceful in its analysis of the meaning of the incident for a “crisis in AI”. In their analysis:
- There is a crisis, which involves:
  - A mismatch between those benefiting from and creating AI — “the corporations and the primarily white male researchers and developers” — and those most likely to be harmed by AI — “BIPOC people, women, religious and gender minorities, and the poor” because of “structural barriers”. A more diverse research community is needed to “[center] the perspectives and experiences of those who bear the harms of these technologies.”
  - The close ties between tech companies and ostensibly independent academic institutions that homogenize the research community, obscure incentives, and dull what might be a more critical research agenda.
- To address this crisis:
  - Tech workers should form an inclusive union that pushes back on Big Tech over ethical concerns.
  - Funding for independent critical research, with greater guaranteed access to company resources, should be raised through a tax on Big Tech.
  - Further regulations should be passed to protect whistleblowers, prevent discrimination, and ensure consumer privacy and the contestability of AI systems.
These lines of argument capture most of what I’ve seen more informally in Twitter conversations about this issue. As far as their practical recommendations go, I think a regulatory agency for Big Tech, analogous to the Securities and Exchange Commission for the financial sector, with a federal research agency analogous to the Office of Financial Research, is the right way to go on this. I’m more skeptical about the idea of a tech workers’ union, but that is not the main focus of this post. This post is about Dr. Gebru’s dismissal and its implications.
I think it’s best if I respond to the situation with a series of questions.
First, was Dr. Gebru wrongfully terminated from Google? Wrongful termination occurs when an employer terminates an employee’s contract in retaliation for an anti-discrimination or whistleblowing action. The heart of the matter is that Dr. Gebru’s dismissal “smells like” wrongful termination: Dr. Gebru was challenging Google’s diversity programs internally, and she was reporting the environmental costs of AI in her research in a way that was perhaps akin to whistleblowing. The story is complicated by the fact that she was negotiating with Google, with the possibility of resignation as leverage, when she was terminated.
I’m not a lawyer. I have come to appreciate the importance of the legal system rather late in my research career. Part of that appreciation is of how the law has largely anticipated the ethical issues raised by “AI” already. I am surprised, however, that the phrase “wrongful termination” has not been raised in journalism covering Dr. Gebru’s dismissal. It seems like the closest legal analog. Could, say, a progressively oriented academic legal clinic help Dr. Gebru sue Google over this? Does she have a case?
These are not idle questions. If the case is to inform better legal protection of corporate AI researchers and other ‘tech workers’, then it is important to understand the limits of current wrongful termination law, whether these limits cover the case of Dr. Gebru’s dismissal, and if not, what expansions to this law would be necessary to cover it.
Second, what is corporate research (and corporate funded research) for? The field of “Ethical AI” has attracted people with moral courage and conviction who probably could be doing other things if they did not care so much. Many people enter academic research hoping that they can somehow, through their work, make the world a better place. The ideal of academic freedom is that it allows researchers to be true to their intellectual commitments, including their ethical commitments. It is probably true that “critical” scholarship survives better in the academic environment. But what is corporate research for? Should we reasonably expect a corporation’s research arm to challenge that corporation’s own agendas?
I’ve done corporate research. My priorities were pretty clear in that context: I was supposed to make the company I worked for look smart. I was supposed to develop new technical prototypes that could be rolled into products. I was supposed to do hard data wrangling and analysis work to suss out what kinds of features would be possible to build. My research could make the world a better place, but my responsibility to my employer was to make it a better place by improving our company’s offerings.
I’ve also done critical work. Critical work tends not to pay as well as corporate research, for obvious reasons. I’ve mainly done this from academic positions, or as a concerned citizen writing on my own time. It is striking that Hanna and Whitaker’s analysis follows through to the conclusion that critical researchers want to get paid. Their rationale is that society should reinvest the profits of Big Tech companies into independent research that focuses on reducing Big Tech harms. This would be like levying a tax on Big Tobacco to fund independent research into the health effects of smoking. This really does sound like a good idea to me.
But this idea would sound good to me even without Dr. Gebru’s dismissal from Google. To conflate the two issues muddies the water for me. There is one other salient detail: some of the work that brought Dr. Gebru into research stardom was her now well-known audits of facial recognition technology developed by IBM and Microsoft. Google happily hired her. I wonder if Google would have minded if Dr. Gebru had continued to do critical audits of Microsoft and IBM from her Google position. I expect Google would have been totally fine with this: one purpose of corporate research could be digging up dirt on your competition! This implies that it’s not entirely true that you can’t do good critical work from a corporate job. Maybe this kind of opposition research should be encouraged and protected (by making Big Tech collusion to prevent such research illegal).
Third, what is the methodology of AI ethics research? There are two schools of thought in research. There’s the school of thought that what’s most important about research is the concrete research question and that any method that answers the research question will do. Then there’s the school of thought that says what’s most important about research is the integrity of research methods and institutions. I’m of the latter school of thought, myself.
One thing that is notable about top-tier AI ethics research today is the enormously broad interdisciplinary range of its publication venues. I would argue that this interdisciplinarity is not intellectually coherent but rather reflects the broad range of disciplinary and political interests that have been able to rally around the wholly ambiguous idea of “AI ethics”. It doesn’t help that key terms within the field, such as “AI” and “algorithm”, are distorted to fit whatever agenda researchers want for them. The result is a discursive soup which lacks organizing logic.
In such a confused field, it’s not clear what conditions research needs to meet in order to be “good”. In practice, this means that the main quality control and gatekeeping mechanisms, the publishing conferences, operate through an almost anarchic process of peer review. Adjacent to this review process is the “disciplinary collapse” of social media, op-eds, and whitepapers, which serve various purposes of self-promotion, activism/advocacy, and marketing. There is little in this process to incentivize the publication of work that is correct, or to set the standards of what that would be.
This puts AI ethics researchers in a confusing position. Google, for example, can plausibly set its own internal standards for research quality because the publication venues have not firmly set their own. Was Dr. Gebru’s controversial paper up to Google’s own internal publication standards, as Google has alleged? Or did Google not want its name on the paper only because it made the company look bad? I honestly don’t know. But even though I have written quite critically about corporate AI “ethics” approaches before, I actually would not be surprised if a primarily “critical” researcher did not do a solid literature review of the engineering literature on AI energy costs before writing a piece about it, because the epistemic standards of critical scholarship and engineering are quite different.
There has been a standard floated implicitly or explicitly by some researchers in the AI ethics space. I see Hanna and Whitaker as aligned with this standard and will borrow their articulation. In this view, the purpose of AI ethics research is to surface the harms of AI so that they may be addressed. The reason why these harms are not obvious to AI practitioners already is the lack of independent critical scholarship by women, BIPOC, the poor, and other minorities. Good AI ethics work is therefore work done by these minorities such that it expresses their perspective, critically revealing faults in AI systems.
Personally, I have a lot of trouble with this epistemic standard. According to it, I really should not be trying to work on AI ethics research. I am simply, by virtue of my subject position, unable to do good work. Dr. Gebru, a Black woman, on the other hand, will always do good work according to this standard.
I want to be clear that I have read some of Dr. Gebru’s work and believe it deserves all of its accolades for reasons that are not conditional on her being a Black woman. I also understand why her subject position has primed her to do the kind of work that she has done; she is a trailblazer because of who she is. But if the problem faced by the AI ethics community is that its institutions have blended corporate and academic research interests so much that the incentives are obscure and the playing field benefits the corporations, who have access to greater resources and so on, then this problem will not be solved by allowing corporations to publish whatever they want as long as the authors are minorities. This would be falling into the trap of what Nancy Fraser calls progressive neoliberalism, which incentivizes corporate tokenization of minorities. (I’ve written about this before.)
Rather, the way to level the playing field between corporate research and independent or academic research is to raise the epistemic standard of the publication venues in a way that supports independent or academic research. Hanna and Whitaker argue that “[r]esearchers outside of corporate environments must be guaranteed greater access to technologies currently hidden behind claims of corporate secrecy, such as access to training data sets, and policies and procedures related to data annotation and content moderation.” Nobody, realistically, is going to guarantee outside researchers access to corporate secrets. However, research publication venues (like conferences) can change their standards to mandate open science practices: access to training data sets, reproducibility of results, no dependence on corporate secrets, and so on.
A tougher question for AI ethics research in particular is the question of how to raise epistemic standards for normative research in a way that doesn’t beg the question on interpretations of social justice or devolve into agonistic fracturing on demographic grounds. There are of course academic disciplines with robust methods for normative work; they are not always in communication with each other. I don’t think there’s going to be much progress in the AI ethics field until a sufficient synthesis of feminist epistemology and STEM methods has been worked out. I fear that is not going to happen quickly because it would require dropping some of what’s dear to situated epistemologies of the progressive AI ethics wing. But I may be wrong there. (There was some work along these lines by methodologists some years ago under the label “Human-Centered Data Science”.)
Lastly, whatever happened to the problem of energy costs of AI, and climate change? To me, what was perhaps most striking about the controversial paper at the heart of Dr. Gebru’s dismissal was that it wasn’t primarily about representation of minorities. Rather, it was (I’ve heard–I haven’t read the paper yet) about the energy costs of AI, which is something that, yes, even white men can be concerned about. If I were to give my own very ungenerous, presumptuous, and truly uninformed interpretation of what the goings-on at Google were all about, I would put it this way: Google hired Dr. Gebru to do progressive hit pieces on competitors’ AI products, as she had done for Microsoft and IBM, and to keep the AI ethics conversation firmly in the territory of AI biases. Google has the resources to adjust its models to reduce these harms, get ahead of AI fairness regulation, and compete on wokeness to the woke market segments. But Dr. Gebru’s most recent paper reframes the AI ethics debate in terms of a universal problem of climate change, which has a much broader constituency, and which is actually much closer to Google’s bottom line. Dr. Gebru has the star power to make this story go mainstream, but Google wants to carve out its own narrative here.
It will be too bad if the fallout of Dr. Gebru’s dismissal is a reversion of the AI ethics conversation to the well-trod questions of researcher diversity, worker protection, and privacy regulation, when the energy cost and climate change questions provide a much broader base of interest from which to refine and consolidate the AI ethics community. Maybe we should be asking: what standards should conferences hold researchers to when they make claims about AI energy costs? What are the standards of normative argumentation for questions of carbon emissions, which necessarily transcend individual perspectives, while of course also impacting different populations disparately? These are questions everybody should care about.
EDIT: I’m sensitive to the point that suggesting it is somewhat disingenuous for Big Tech to shift the frame of AI ethics towards ‘fairness’ in a socially progressive sense may be seen as a rejection of those progressive politics, especially in the absence of evidence. I don’t reject those politics. This journalistic article by Karen Hao provides some evidence of how another Big Tech company, Facebook, has deliberately kept AI fairness in the ethical frame and discouraged frames more costly to its bottom line, such as the ethics of preventing disinformation.