
Life update: new AI job

I started working at a new job this month. It is at an Artificial Intelligence startup. I go to an office, use GitHub and Slack, and write software, manipulate data, and manage cloud computing instances for a living. As I am at this point a relatively senior employee, I’m also involved in meetings of a managerial nature. There are lots of questions about how we organize ourselves and how we interact with other companies that I get to weigh in on.

This is a change from being primarily a postdoctoral researcher or graduate student. That change is apparent even though, as the latter, I was doing similar industrial work on a part-time basis. Now, at the startup, the purpose of my work is more clearly oriented towards our company’s success.

There is something very natural about this environment for me. It is normal. I am struck by this normality because I have for years been interacting with academics who claim to be studying the very thing that I’m now doing.

I have written a fair bit here about “AI Ethics”. Much of this has been written with frustration at the way the topic is “studied”. In retrospect, a great deal of “AI Ethics” literature is about how people (the authors) don’t like the direction “the conversation” is going. My somewhat glib attitude towards it is that the problem is that most people talking about “AI Ethics” don’t know what they are talking about, and don’t feel like they have to know what they are talking about to have a good point of view on the subject. “AI Ethics” is often an expression of the point of view that while those that are “doing” AI are being somehow inscrutable and maybe dangerous, they should be tamed into accountability towards those who are not doing it, and therefore don’t really know about it. In other words, AI Ethics, as a field, is a way of articulating the interest of one class of people with one relationship to capital to another class of people with a different relationship to capital.

Perhaps I am getting ahead of myself. Artificial Intelligence is capital. I mean that in an economic sense. The very conceit that it is possible to join an “AI Startup”, whose purpose is to build an AI and thereby increase the productivity of its workers and its value to shareholders, makes the conclusion–“AI is capital”–a tautological one. Somehow, this insight rarely makes it into the “AI Ethics” literature.

I have not “left academia” entirely. I have some academic projects that I’m working on. One of these, in collaboration with Bruce Haynes, is a Bourdieusian take on Contextual Integrity. I’m glad to be able to do this kind of work.

However, one source of struggle for me in maintaining an academic voice in my new role, aside from the primary and daunting one of time management, is that many of the insights I would bring to bear on the discussion are drawn from experience. The irony of a training in qualitative and “ethnographic” research into the use of technology, with all of its questions of how to provide an emic account based on the testimony of informants, is that I am now acutely aware of how limited my ability to communicate is, transforming me from a “subject” of observation into, in some sense, an “object”.

I enjoy and respect my new job and role. I appreciate that, being a real company trying to accomplish something and not a straw man used to drive a scholarly conversation, “AI” means in our context a wide array of techniques–NLP, convex optimization, simulation, to name a few–smartly deployed in order to best complement the human labor that’s driving things forward. We are not just slapping a linear regression on a problem and calling it “AI”.

I also appreciate, having done work on privacy for a few years, that we are not handling personal data. We are using AI technologies to solve problems that aren’t about individuals. A whole host of “AI Ethics” issues, which in some corners have grown to change the very meaning of “AI” into something inherently nefarious, are irrelevant to the business I’m now a part of.

Those are the “Pros”. If there were any “Cons”, I wouldn’t be able to tell you about them. I am now contractually obliged not to. I expect this will cut down on my “critical” writing some, which to be honest I don’t miss. That this is part of my contract is, I believe, totally normal, though I’ve often worked in abnormal environments without this obligation.

Joining a startup has made me think hard about what it means to be part of a private organization, as opposed to a public one. Ironically, this public/private institutional divide rarely makes its way into academic conversations about personal privacy and the public sphere. That’s because, I’ll wager, academic conversations themselves are always in a sense public. The question motivating that discourse is “How do we, as a public, deal with privacy?”.

Working at a private organization, the institutional analogue of privacy is paramount. Our company’s DNA is its intellectual property. Our company’s face is its reputation. The spectrum of individual human interests and the complexity of their ordering has its analogs in the domain of larger sociotechnical organisms: corporations and the like.

Paradoxically, there is no way to capture these organizational dynamics through “thick description”. It is also difficult to capture them through scientific modes of visualization. Indeed, one economic reason to form an AI startup is to build computational tools for understanding the nature of private ordering among institutions. These tools allow for comprehension of a phenomenon that cannot be easily reduced to the modalities of sight or speech.

I’m very pleased to be working in this new way. It is in many ways a more honest line of work than academia has been for me. I am allowed now to use my full existence as a knowing subject: to treat technology as an instrument for understanding, to communicate not just in writing but through action. It is also quieter work.

social structure and the private sector

The Human Cell

Academic social scientists leaning towards the public intellectual end of the spectrum love to talk about social norms.

This is perhaps motivated by the fact that these intellectual figures are prominent in the public sphere. The public sphere is where these norms are supposed to solidify, and these intellectuals would like to emphasize their own importance.

I don’t exclude myself from this category of persons. A lot of my work has been about social norms and technology design (Benthall, 2015; Benthall, Gürses and Nissenbaum, 2017).

But I also work in the private sector, and it’s striking how different things look from that perspective. It’s natural for academics who participate more in the public sphere than the private sector to be biased in their view of social structure. To accurately understand what’s going on, you have to think about both at once.

That’s challenging for a lot of reasons, one of which is that the private sector is a lot less transparent than the public sphere. In general, the internals of actors in the private sector are not open to the scrutiny of commentariat onlookers. Information is one of the many resources traded in pairwise interactions; when it is divulged, it is divulged strategically, introducing bias. So it’s hard to get a general picture of the private sector, even though it accounts for a much larger proportion of social structure than the public sphere does. In other words, public spheres are highly over-represented in analyses of social structure due to the availability of public data about them. That is worrisome from an analytic perspective.

It’s well worth making the point that the public/private dichotomy is problematic. Contextual integrity theory (Nissenbaum, 2009) argues that modern society is differentiated among many distinct spheres, each bound by its own social norms. Nissenbaum actually has a quite different notion of norm formation from, say, Habermas. For Nissenbaum, norms evolve over social history, but may be implicit. Contrast this with Habermas’s view that norms are the result of communicative rationality, which is an explicit and linguistically mediated process. The public sphere is a big deal for Habermas. Nissenbaum, a scholar of privacy, rejects the idea of the ‘public sphere’ simpliciter. Rather, social spheres self-regulate, and privacy, which she defines as appropriate information flow, is maintained when information flows according to these multiple self-regulatory regimes.

I believe Nissenbaum is correct on this point of societal differentiation and norm formation. This nuanced understanding of privacy as the differentiated management of information flow challenges any simplistic notion of the public sphere. Does it challenge a simplistic notion of the private sector?

Naturally, the private sector doesn’t exist in a vacuum. In the modern economy, companies are accountable to the law, especially contract law. They have to pay their taxes. They have to deal with public relations and are regulated as to how they manage information flows internally. Employees can sue their employers, etc. So just as the ‘public sphere’ doesn’t permit a total free-for-all of information flow (some kinds of information flow in public are against social norms!), so too does the ‘private sector’ not involve complete secrecy from the public.

As a hypothesis, we can posit that what makes the private sector different is that the relevant social structures are less open in their relations with each other than they are in the public sphere. We can imagine an autonomous social entity like a biological cell. Internally it may have a lot of interesting structure and organelles. Its membrane prevents this complexity leaking out into the aether, or plasma, or whatever it is that human cells float around in. Indeed, this membrane is necessary for the proper functioning of the organelles, which in turn allows the cell to interact properly with other cells to form a larger organism. Echoes of Francisco Varela.

It’s interesting that this may actually be a quantifiable difference. One way of modeling the difference between the internal and external-facing complexity of an entity is using information theory. The more complex internal state of the entity has higher entropy than the membrane. The fact that the membrane causally mediates interactions between the internals and the environment limits information flow between them; this is captured by the Data Processing Inequality. The limited information flow between the system internals and externals is quantified as lower mutual information between the two domains. At zero mutual information, the two domains are statistically independent of each other.
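To make that concrete, here is a minimal sketch in Python (assuming only numpy). The chain X → M → Y, the state sizes, and all of the distributions are invented for illustration: X stands for the internal state, M for the membrane, and Y for what the environment observes. Because the membrane mediates, the Data Processing Inequality guarantees I(X;Y) ≤ I(X;M).

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(joint):
    """I(A;B) in bits, computed from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(b)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

# p(x): a "complex" internal state over 8 values (illustrative).
p_x = rng.dirichlet(np.ones(8))

# p(m|x): the membrane coarse-grains 8 internal states into 3 signals.
p_m_given_x = rng.dirichlet(np.ones(3), size=8)

# p(y|m): the environment's noisy view of the 3 membrane signals.
p_y_given_m = rng.dirichlet(np.ones(3), size=3)

# Joint tables along the chain X -> M -> Y.
p_xm = p_x[:, None] * p_m_given_x   # p(x, m) = p(x) p(m|x)
p_xy = p_xm @ p_y_given_m           # p(x, y) = sum_m p(x, m) p(y|m)

i_xm = mutual_information(p_xm)
i_xy = mutual_information(p_xy)
print(f"I(X;M) = {i_xm:.3f} bits, I(X;Y) = {i_xy:.3f} bits")
assert i_xy <= i_xm + 1e-9  # Data Processing Inequality
```

Dialing the membrane’s coarse-graining up or down changes how much of the internal entropy is visible from outside, which is one way to read “less open in their relations with each other” quantitatively.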

I haven’t worked out all the implications of this.

References

Benthall, S. (2015). Designing Networked Publics for Communicative Action. In J. Davis & N. Jurgenson (Eds.), Theorizing the Web 2014 [Special Issue]. Interface, 1(1).

Benthall, S., Gürses, S., & Nissenbaum, H. (2017). Contextual Integrity through the Lens of Computer Science. Foundations and Trends® in Privacy and Security, 2(1), 1–69. http://dx.doi.org/10.1561/3300000016

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.