
computational institutions

As the “AI ethics” debate metastasizes in my newsfeed and scholarly circles, I’m struck by the frustrations of technologists and ethicists who seem to be speaking past each other.

While these tensions play out along disciplinary fault lines, for example between technologists and science and technology studies (STS) scholars, the economic motivations are more often than not below the surface.

I believe this is to some extent a problem of nomenclature, which is in turn a function of the disciplinary rifts involved.

Computer scientists work, generally speaking, on the design and analysis of computational systems. Many see their work as bounded by the demands of the portability and formalizability of technology (see Selbst et al., 2019). That’s their job.

This is endlessly unsatisfying to critics of the social impact of technology. STS scholars will insist on changing the subject to “sociotechnical systems”, a term that means something very general: the assemblage of people together with the artifacts that are not people. This, fairly, moves the focus away from the computational system alone and embeds it in a social environment.

A goal of this kind of work seems to be to hold computational systems accountable as they are deployed and used socially. It must be said that once this happens, we are no longer talking about the specialized domain of computer science per se. It is a wonder, then, that STS scholars so often pick fights with computer scientists, when their real beef seems to be with the businesses that use and deploy the technology.

The AI Now Institute has attempted to rebrand the problem by discussing “AI Systems” as, roughly, those sociotechnical systems that use AI. This is on the one hand more specific: AI is a particular kind of technology, and perhaps it has particular political consequences. But their analysis of AI systems quickly overflows into sweeping claims about “the technology industry”, and it’s clear that most of their recommendations have little to do with AI. Indeed, they are trying, once again, to change the subject from AI as a technology (a computer science research domain) to a broader set of social and political issues that do, in fact, have their own disciplines, where they have been researched for years.

The problem, really, is not that any particular conversation is not happening, or is being excluded, or is being shut down. The problem is that the engineering-focused conversation about AI-as-a-technology has grown very large and become an awkward synecdoche for the rise of major corporations like Google, Apple, Amazon, Facebook, and Netflix. Since these corporations fund and motivate a great deal of research, there is a question of who gets a piece of the big pie of opportunity these companies represent, whether in the form of research grants or of influence through regulation, education, etc.

But there are many aspects of these corporations addressed by neither the term “sociotechnical system”, which is simply too broad, nor “AI System”, which is just as broad and rarely means what you’d think it does (that the system uses AI is incidental if not unnecessary; what matters is that it’s a company operating in a core social domain via primarily technological user interfaces). Neither term gets at the unit of analysis that’s really of interest.

An alternative: “computational institution”. “Computational”, in the sense of computational cognitive science and computational social science: it denotes the essential role of the theory of computation and statistics in explaining the behavior of the phenomenon being studied. “Institution”, in the sense of institutional economics: the unit is a firm, comprising people, their equipment, and their economic relations to their suppliers and customers. An economic lens would immediately bring into focus “the data heist” and the “role of machines” that Nissenbaum is concerned are being left to the side.

Some research questions

Last week was so interesting. Some weeks you just get exposed to so many different ideas that it’s hard to integrate them all. I’ve tried to articulate what has been coming up as a result; it amounts to several difficult questions.

  • Assuming trust is necessary for effective context management, how does one organize sociotechnical systems to provide social equity in a sustainable way?
  • Assuming an ecology of scientific practices, what are appropriate selection mechanisms (or criteria)? Are they transcendent or immanent?
  • Given the contradictory character of emotional reality, how can psychic integration occur without rendering one dead or at least very boring?
  • Are there limitations of the computational paradigm imposed by data science as an emerging pan-constructivist practice coextensive with the limits of cognitive or phenomenological primitives?

Some notes:

  • I think that two or three of the questions above may be in essence the same question, in that they can be formalized into the same mathematical problem, with the same solution in each case.
  • I really do have to read Isabelle Stengers and Nancy Nersessian. Based on the signals I’m getting, they seem to be the people most on top of their game in terms of understanding how science happens.
  • I’ve been assuming that trust relations are interpersonal but I suppose they can be interorganizational as well, or between a person and an organization. This gets back to a problem I struggle with in a recurring way: how do you account for causal relationships between a macro-organism (like an organization or company) and a micro-organism? I think it’s when there are entanglements between these kinds of entities that we are inclined to call something an “ecosystem”, though I learned recently that this use of the term bothers actual ecologists (no surprise there). The only things I know about ecology are from reading Ulanowicz papers, but those have been so on point and beautiful that I feel I can proceed with confidence anyway.
  • I don’t think there’s any way to get around having at least a psychological model to work with when looking at these sorts of things. A recurring and promising angle is that of psychic integration. Carl Jung, who has inspired clinical practices that I can personally vouch for, and Gregory Bateson both understood the goal of personal growth to be the integration of disparate elements. I’ve learned recently from Turner’s The Democratic Surround that Bateson was a more significant historical figure than I had thought, unless Turner’s account of history is a glorification of the intellectuals who appeal to him, which is entirely possible. Perhaps more importantly to me, Bateson inspired Ulanowicz, and so these theories are compatible; Bateson was also a cyberneticist following Wiener, who was prescient and either foundational to contemporary data science or a good articulator of its roots. But there is also a tie-in to constructivist epistemology. DiSessa’s epistemology, building on Piaget but embracing what he calls the computational metaphor, understands the learning of math and physics as the integration of phenomenological primitives.
  • The purpose of all this is ultimately protocol design.
  • This does not pertain directly to my dissertation, though I think it’s useful orienting context.