cultural values in design

by Sebastian Benthall

As much as I would like to put aside the problem of technology criticism and focus on my empirical work, I find myself unable to avoid the topic. Today I was discussing work with a friend and collaborator who comes from a ‘critical’ perspective. We were talking about ‘values in design’, a subject that we both care about, despite our different backgrounds.

I suggested that one way to think about values in design is to think of a number of agents and their utility functions. Their utility functions capture their values; the design of an artifact can have greater or lesser utility for the agents in question. Designers may intentionally or unintentionally design artifacts that serve some agents but not others. And so on.
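To make that framing concrete, here is a minimal sketch in Python. The agents, features, and weights are invented for illustration; the point is only that each agent's utility function scores a candidate design, and comparing scores shows how the same design can serve some agents better than others.

```python
# A toy sketch of the utility-function framing of values in design.
# Each agent assigns a utility to a candidate design; comparing scores
# shows how one design can serve some agents better than others.
# All agents, features, and weights here are made up for illustration.

from typing import Callable, Dict

# A design is represented here simply as a dict of feature values.
Design = Dict[str, float]

# Hypothetical agents, each weighing the design's features differently.
agents: Dict[str, Callable[[Design], float]] = {
    "end_user":   lambda d: 2.0 * d["usability"] + 1.0 * d["privacy"],
    "advertiser": lambda d: 3.0 * d["data_collection"] - 0.5 * d["privacy"],
    "regulator":  lambda d: 2.0 * d["privacy"] - 1.0 * d["data_collection"],
}

# Two candidate designs with different trade-offs.
design_a: Design = {"usability": 0.9, "privacy": 0.2, "data_collection": 0.8}
design_b: Design = {"usability": 0.7, "privacy": 0.8, "data_collection": 0.1}

for name, design in [("A", design_a), ("B", design_b)]:
    scores = {agent: round(u(design), 2) for agent, u in agents.items()}
    print(f"design {name}: {scores}")
```

Running this shows design A scoring well for the advertiser and design B scoring well for the regulator and end user, which is one simple way to express "serving some but not others" in the utility-function idiom.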

Of course, thinking in terms of ‘utility functions’ is common among engineers, economists, cognitive scientists, rational choice theorists in political science, and elsewhere. It is shunned by the critically trained. My friend and colleague was open-minded in his consideration of utility functions, but was more concerned with how cultural values might sneak into or be expressed in design.

I asked him to define a cultural value. We debated the term for some time. We reached a reasonable conclusion.

With that consensus to work with, we began to talk about how the concept would be applied. He brought up the example of an algorithm claimed by its creators to be objective. But, he asked, could the algorithm have a bias? Would we not expect it to secretly express cultural values?

I confessed that I aspire to design and implement just such algorithms. I think it would be a fine future if we designed algorithms to fairly and objectively arbitrate our political disputes. We have good reasons to think that an algorithm could be more objective than a system of human bureaucracy. While human decision-makers are limited by the partiality of their perspective, we can build infrastructure that accesses and processes data that are beyond an individual’s comprehension. The challenge is to design the system so that it operates kindly and fairly despite its operations being beyond the scope of a single person’s judgment. This will require an abstracted understanding of fairness that is not grounded in the politics of partiality.
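One concrete reading of an "abstracted understanding of fairness" is a criterion formal enough to be checked mechanically. The sketch below uses demographic parity as an illustration; the groups, decisions, and tolerance are invented, and this is one formalization among many rather than a proposal for the system described above.

```python
# A sketch of a mechanically checkable fairness criterion: demographic parity,
# i.e. whether an algorithm's positive-decision rate is roughly equal across
# groups. The data and tolerance below are invented for illustration.

from collections import defaultdict

def positive_rates(decisions):
    """Rate of positive decisions per group.

    `decisions` is a list of (group, outcome) pairs, with outcome in {0, 1}.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def satisfies_parity(decisions, tolerance=0.05):
    """True if every group's positive rate is within `tolerance` of the others."""
    rates = positive_rates(decisions).values()
    return max(rates) - min(rates) <= tolerance

# Invented example data: (group, decision) pairs.
sample = [("x", 1), ("x", 1), ("x", 0), ("y", 1), ("y", 0), ("y", 0)]
print(positive_rates(sample))    # roughly {'x': 0.67, 'y': 0.33}
print(satisfies_parity(sample))  # False at the default tolerance
```

Whether such a criterion counts as genuinely abstracted from partial perspectives, or merely encodes one more cultural value, is of course exactly the question under dispute.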

Suppose a team of people were to design and implement such a program. On what basis would the critics (and there would inevitably be critics) accuse it of being a biased design with embedded cultural values? Besides the obvious but empty criticism that valuing unbiased results is a cultural value, why wouldn’t the reasoned process of design reduce bias?

We resumed our work peacefully.