Life update: new AI job

I started working at a new job this month, at an Artificial Intelligence startup. I go to an office, use GitHub and Slack, and write software, manipulate data, and manage cloud computing instances for a living. Since I am by now relatively senior as an employee, I'm also involved in meetings of a managerial nature. There are lots of questions about how we organize ourselves and how we interact with other companies that I get to weigh in on.

This is a change from being primarily a postdoctoral researcher or graduate student. The change is apparent even though, in those roles, I was already doing similar industrial work on a part-time basis. Now, at the startup, the purpose of my work is more clearly oriented towards the company's success.

There is something very natural about this environment for me. It is normal. I am struck by this normality because I have for years been interacting with academics who claim to be studying the very thing that I’m now doing.

I have written a fair bit here about "AI Ethics". Much of it has been written with frustration at the way the topic is "studied". In retrospect, a great deal of the "AI Ethics" literature is about how people (the authors) don't like the direction "the conversation" is going. My somewhat glib attitude towards it is that most people talking about "AI Ethics" don't know what they are talking about, and don't feel they have to know what they are talking about in order to have a good point of view on the subject. "AI Ethics" is often an expression of the view that those who are "doing" AI are somehow inscrutable and maybe dangerous, and that they should be tamed into accountability towards those who are not doing it, and therefore don't really know about it. In other words, AI Ethics, as a field, is a way of articulating the interests of one class of people, with one relationship to capital, to another class of people with a different relationship to capital.

Perhaps I am getting ahead of myself. Artificial Intelligence is capital. I mean that in an economic sense. The very conceit that it is possible to join an "AI Startup", whose purpose is to build an AI and thereby increase the productivity of its workers and its value to shareholders, makes the conclusion that "AI is capital" a tautological one. Somehow, this insight rarely makes it into the "AI Ethics" literature.

I have not “left academia” entirely. I have some academic projects that I’m working on. One of these, in collaboration with Bruce Haynes, is a Bourdieusian take on Contextual Integrity. I’m glad to be able to do this kind of work.

However, one source of struggle for me in maintaining an academic voice in my new role, aside from the primary and daunting one of time management, is that many of the insights I would bring to bear on the discussion are drawn from experience. The irony of my training in qualitative and "ethnographic" research into the use of technology, with all of its questions of how to provide an emic account based on the testimony of informants, is that I am now acutely aware of how limited my ability to communicate that experience is, which transforms me from a "subject" of observation into, in some sense, an "object".

I enjoy and respect my new job and role. I appreciate that, because we are a real company trying to accomplish something and not a straw man used to drive a scholarly conversation, "AI" means in our context a wide array of techniques (NLP, convex optimization, simulation, to name a few) smartly deployed to best complement the human labor that's driving things forward. We are not just slapping a linear regression on a problem and calling it "AI".

I also appreciate, having done work on privacy for a few years, that we are not handling personal data. We are using AI technologies to solve problems that aren't about individuals. A whole host of "AI Ethics" issues, which have in some corners grown to change the very meaning of "AI" into something inherently nefarious, are irrelevant to the business I'm now a part of.

Those are the "Pros". If there were any "Cons", I wouldn't be able to tell you about them. I am now contractually obliged not to. I expect this will cut down on my "critical" writing somewhat, which, to be honest, I won't miss. That this is part of my contract is, I believe, totally normal, though I've often worked in abnormal environments without this obligation.

Joining a startup has made me think hard about what it means to be part of a private organization, as opposed to a public one. Ironically, this public/private institutional divide rarely makes its way into academic conversations about personal privacy and the public sphere. That’s because, I’ll wager, academic conversations themselves are always in a sense public. The question motivating that discourse is “How do we, as a public, deal with privacy?”.

Now that I work at a private organization, the institutional analogue of privacy is paramount. Our company's DNA is its intellectual property. Our company's face is its reputation. The spectrum of individual human interests and the complexity of their ordering have their analogues in the domain of larger sociotechnical organisms: corporations and the like.

Paradoxically, there is no way to capture these organizational dynamics through “thick description”. It is also difficult to capture them through scientific modes of visualization. Indeed, one economic reason to form an AI startup is to build computational tools for understanding the nature of private ordering among institutions. These tools allow for comprehension of a phenomenon that cannot be easily reduced to the modalities of sight or speech.

I’m very pleased to be working in this new way. It is in many ways a more honest line of work than academia has been for me. I am allowed now to use my full existence as a knowing subject: to treat technology as an instrument for understanding, to communicate not just in writing but through action. It is also quieter work.