Responsible participation in complex sociotechnical organizations circa 1977 cc @Aelkus @dj_mosfett

by Sebastian Benthall

Many extant controversies around technology were documented in 1977 by Langdon Winner in Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. I would go so far as to say most extant controversies, though he doesn’t address anything having to do with gender, for example.

Consider this discussion of moral education of engineers:

“The problems for moral agency created by the complexity of technical systems cast new light on contemporary calls for more ethically aware scientists and engineers. According to a very common and laudable view, part of the education of persons learning advanced scientific skills ought to be a full comprehension of the social implications of their work. Enlightened professionals should have a solid grasp of ethics relevant to their activities. But, one can ask, what good will it do to nourish this moral sensibility and then place the individual in an organizational situation that mocks the very idea of responsible conduct? To pretend that the whole matter can be settled in the quiet reflections of one’s soul while disregarding the context in which the most powerful opportunities for action are made available is a fundamental misunderstanding of the quality genuine responsibility must have.”

A few thoughts.

First, this reminds me of a conversation @Aelkus @dj_mosfett and I had the other day. The question was: who should take moral responsibility for the failures of sociotechnical organizations (conceived of as, for example, corporations running a web service)?

Second, I’ve been convinced again lately (reminded?) of the importance of context. I’ve been looking into Chaiklin and Lave’s Understanding Practice again, which is largely about how it’s important to take context into account when studying any social system that involves learning. More recently than that I’ve been looking into Nissenbaum’s contextual integrity theory. According to her theory, which is now widely used in the design and legal privacy literature, norms of information flow are justified by the purpose of the context in which they are situated. So, for example, in an ethnographic context those norms of information flow most critical for maintaining trusted relationships with one’s subjects are most important.
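Contextual integrity is often rendered formally with five parameters describing an information flow (sender, recipient, data subject, information type, transmission principle), where a flow is appropriate when it matches the entrenched norms of its context. A toy sketch of that idea, with all names and examples my own illustrative choices rather than Nissenbaum’s notation:

```python
from dataclasses import dataclass

# Toy rendering of contextual integrity: an information flow is described
# by five parameters, and a context carries norms licensing some flows.
# Names here are illustrative assumptions, not Nissenbaum's own formalism.

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    info_type: str
    transmission_principle: str

@dataclass
class Context:
    purpose: str   # the purpose that justifies the context's norms
    norms: set     # the Flow patterns this context deems appropriate

    def respects_integrity(self, flow: Flow) -> bool:
        # A flow preserves contextual integrity iff it matches a norm
        # of the context in which it occurs.
        return flow in self.norms

# Example: an ethnographic context licenses publishing field notes with
# informed consent, but not selling them to a data broker.
ethnography = Context(
    purpose="produce trustworthy scholarship",
    norms={Flow("researcher", "journal", "informant",
                "field notes", "with informed consent")},
)

ok = Flow("researcher", "journal", "informant",
          "field notes", "with informed consent")
bad = Flow("researcher", "data broker", "informant",
           "field notes", "for payment")

print(ethnography.respects_integrity(ok))   # True
print(ethnography.respects_integrity(bad))  # False
```

The sketch makes the worry below concrete: everything hinges on which norms a context’s purpose justifies, so a context whose purpose is shareholder value could in principle license very different flows.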

But in a corporate context, where the purpose of the context is to maximize shareholder value, wouldn’t the norms of information flow favor those who keep the moral failures of their organization shrouded in the complexity of its machinery? Wouldn’t such people be perfectly justified in their actions?

I’m not seriously advocating for this view, of course. I’m just asking it rhetorically, since it seems like a potential weakness in contextual integrity theory that it does not endorse the actions of, for example, corporate whistleblowers. Or does it? Are corporate whistleblowers the same as national whistleblowers? Or Wikileaks?

One way around this would be to consider contexts to be nested or overlapping, with ethics contextualized to those “spaces.” So a corporate whistleblower would be doing something bad for the company, but good for society, assuming that there wasn’t some larger social cost to the loss of confidence in that company. (It occurs to me that in this sort of situation, perhaps threatening internally to blow the whistle unless the problem is solved would be the responsible strategy. As they say,

Making progress with the horns is permissible
Only for the purpose of punishing one’s own city.

)

Anyway, it’s a cool topic to think about: what an information-theoretic account of responsibility would look like. That’s tied to autonomy. I bet it’s doable.