Digifesto

Tag: machine ethics

The ontology of software, revisited

I’m now a software engineer again after many years doing and studying other things. My first-person experience, my phenomenological relationship with this practice, is different this time around. I’ve been meaning to jot down some notes based on that fresh experience. Happily, there’s resonance with topics of my academic focus as well. I’m trying to tease out these connections.

To briefly recap: There’s a recurring academic discourse around technology ethics. Roughly speaking, it starts with a concern about a newish technology that has media or funding agency interest. Articles then get written capitalizing on this hot topic; these articles are fractured according to the disciplinary background of their authors.

  • Engineers try to come up with an improved version of the technology.
  • Lawyers try to come up with ways to regulate the production and use of the technology broadly speaking.
  • Organizational sociologists come up with institutional practices (‘ethics boards’, ‘contestability’) which would prevent the technology from being misused.
  • Critical theorists argue that the technology would be less worrisome if representational desiderata within the field of technology production were better.
  • … and so on.

This is a very active and interesting discourse, but from my (limited) perspective, it rarely impacts industry practice. This isn’t because people in industry don’t care about the ethical implications of their work. It’s because people in industry are engaged full-time in a different discourse. This is the discourse of industry practitioners.

My industrial background is in software development and data science. Obviously there are other kinds of industrial work–hardware, biotech, etc. But it’s fair to say that a great deal of the production of “technology” in the 21st century is, specifically, software development. And my point here is that software development has its own field of discourse that is rich and vivid and a full-time job to keep up with. Here’s some examples of what I’m getting at:

  • There is always-already a huge world of communication between engineers about what technologies are interesting, how to use them effectively, how they compare with prior technologies, the implications of these trends for technical careers, and so on. Browse Hacker News. Look at industry software conferences.
  • There’s also a huge world of industrial discussion about the social practices of software development. A lot of my knowledge of this is a bit dated. But as I come back to industry, I find myself looking back to now-classic sources on how to work effectively on software. I’m linking to articles from Joel Spolsky’s blog. I’m ordering a copy of Fred Brooks’s classic The Mythical Man-Month.
  • I’m reading documentation, endlessly, about how to configure and use the various SaaS, IaaS, PaaS, etc. tools that are now necessary parts of full-stack development. When the documentation is limited, I’m engaging with customer service people of technical products, who have their own advice, practices, etc.

This is a complex world of literature and practice. Part of what makes it complex is that it is always-already densely documented and self-referential, enacted by smart and literate people, most of whom are quite socially skilled. It’s people working full-time jobs in a field that is now over 40 years old.

I’ve argued in other posts that if we want to solve the ‘technology ethics’ problem, we should see it as an economic problem. At a high level, I still believe that’s true. I want to qualify that point, though, and say: now that I’m back in a more engaged position with respect to the field of technical production, I believe there are institutional/organizational ways to address broader social concerns through interventions on engineering practice.

What is missing, in my view, is a sincere engagement with the nitty-gritty of engineering practice itself. I know there are anthropologists who think they do this. I haven’t read anybody who really does it in their writing, and I believe the reason is this: anthropologists writing for other academic anthropologists are not going to write what would actually be useful here, which is a guide for product and project management. Such a guide would likely recapitulate a lot of conventional (but too often ignored) wisdom about software engineering “best practices”: documentation, testing, articulation of use cases, and so on. These are the kinds of things that improve technical quality in a real way.

Now that I write this, I recall that the big ethics research teams at, say, Google, do stuff like this. It’s great.


I was going to say something about the ontology of software.

Recall: I have a position on the ontology of data, which I’ve called Situated Information Flow Theory (SIFT). I worked hard on it. According to SIFT, an information flow is a causal flow situated in a network of other causal relations. The meaning of the information depends on that causally defined situation.
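
To make that definition a little more concrete, here is a minimal sketch in Python. It is not SIFT’s actual formalism; it is just an illustration, with invented names and relations, of the idea that an information flow is one causal edge and that its “situation” is the set of other causal relations surrounding it.

```python
# Minimal sketch (not SIFT's formalism): an information flow is modeled as
# one edge in a causal graph, and its "situation" is the set of other causal
# relations touching it. All names and relations here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class CausalGraph:
    edges: set = field(default_factory=set)  # set of (cause, effect) pairs

    def add_cause(self, cause: str, effect: str) -> None:
        self.edges.add((cause, effect))

    def situation_of(self, cause: str, effect: str) -> set:
        """Every other causal relation that shares a node with the flow.
        On this sketch, the flow's 'meaning' depends on this set."""
        return {
            (a, b) for (a, b) in self.edges
            if (a, b) != (cause, effect) and {a, b} & {cause, effect}
        }


g = CausalGraph()
g.add_cause("user_location", "ad_served")      # the flow we care about
g.add_cause("user_location", "traffic_alert")  # same data, different relation
g.add_cause("ad_served", "purchase")

# The same flow embedded in a different graph would be a different situation.
print(g.situation_of("user_location", "ad_served"))
```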

What then is software?

“Software” refers to sets of instructions written by people in a specialized “programming” language as text data, which is then interpreted or compiled by a machine. In paradigmatic industrial practice (I’m simplifying, bear with me), these instructions will ultimately be used to control the behavior of a machine that interfaces with the world in a real-time, consequential way. This latter machine is referred to, internally, as being “in production”.

When you’re programming a technical product, first you write software “in development”. You are writing drafts of code. You get your colleagues to review it. You link up the code you wrote to the code the other team wrote and you see if it works together. There is a long and laborious process of building tests for new requirements and fixing the code so that it meets those requirements. There are designs, and redesigns, of internal- and external-facing features. The complexity of the total task is divided up into modules; the boundaries of those modules shift over time. The social structure of the team adapts as new modules become necessary.
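
To make the “tests for new requirements” step concrete, here is a minimal, hypothetical sketch of a requirement captured as tests that the draft code is then fixed to satisfy. The function and the requirement are invented for illustration; in practice this would run under a test framework such as pytest.

```python
# Hypothetical example: a new requirement ("a discount must never make a
# price negative") is expressed as tests; the draft implementation is then
# revised until the tests pass and the change can go up for review.

def apply_discount(price: float, discount: float) -> float:
    """Draft implementation under review."""
    return max(price - discount, 0.0)


def test_discount_reduces_price():
    assert apply_discount(10.0, 3.0) == 7.0


def test_discount_never_makes_price_negative():
    assert apply_discount(10.0, 15.0) == 0.0
```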

There is an isomorphism, a well documented phenomenon in organizational social theory, between the technology being created and the social structure that creates it. The team structure mirrors the software architecture.

When the pieces are in place adequately enough (and when investors and management have grown impatient enough), the software is finally “deployed to production”. It “goes live”. What was an internal exercise is now a process with reputational consequences for the business, as well as possibly real consequences for the users of the technology.

Inevitably, the version of the product “in production” is not complete. There are errors. There are new features requested. So the technology firm now organizes itself around several “cycles” running at different frequencies in parallel. There’s a “development cycle” of writing new software code. There’s a “release cycle” of packaging new improvements into bundles that are documented and tested for quality. The releases are deployed to production on a schedule. Different components may have different development and release cycles. The impedance match or mismatch between these cycles becomes its own source of robustness or risk. (I’ve done some empirical research work on this.)
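
As an illustration only, and not a report of my empirical findings, here is a toy model of what an impedance mismatch between cycles might look like: two components release on different cadences, and we count the days on which one component’s production version lags behind a dependency that has already shipped. The periods and the lag measure are invented for the sketch.

```python
# Toy model of parallel release cycles. Component A releases weekly and
# component B monthly; on days where A has released more recently than B,
# A's production code may assume changes in B that are not yet live.
# Every number here is made up for illustration.

def release_days(period: int, horizon: int) -> list:
    """Days on which a component deploys to production."""
    return list(range(period, horizon + 1, period))


def lag_days(period_a: int, period_b: int, horizon: int) -> int:
    """Count days where A's latest release is newer than B's."""
    a = release_days(period_a, horizon)
    b = release_days(period_b, horizon)
    count = 0
    for day in range(1, horizon + 1):
        last_a = max((d for d in a if d <= day), default=0)
        last_b = max((d for d in b if d <= day), default=0)
        if last_a > last_b:
            count += 1
    return count


# Weekly vs. monthly cadence over a quarter: most days fall in the mismatch window.
print(lag_days(period_a=7, period_b=30, horizon=90))
```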

What does this mean for the ontology of software?

The first thing it means is that the notion that software is a static artifact, something like either a physical object (like a bicycle) or a publication (like a book), is mostly irrelevant to what’s happening. The software production process depends on the fluidity of source code. When software is deployed “as a service”, it’s dubious for it to qualify as a “creative work”, subject to copyright law, except by virtue of legal inertia. Something totally different is going on.

The second thing it means is that the live technical product is an ongoing institutional accomplishment. It’s absurd to ever say that humans are not “in the loop”. This is one of the big insights of the critical/anthro reaction to “Big Tech” in the past five years or so. But it has also been common knowledge within the industry for fifteen years or so.

The third thing it means is that software is the structuring of a system of causal relations. Software, when it’s deployed, determines what causes what. See above for a definition of the nature of information: it’s a causal flow situated in other causal relations. The link between software and information is then quite clear and direct. Software (as far as it goes) is a definition of a causal situation.
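
A trivial, hypothetical illustration of the point: even a few lines of deployed code are a declaration of which events cause which effects, and changing the code changes the causal situation through which information flows. The event types and handlers below are invented.

```python
# Hypothetical event wiring: this dictionary is, in effect, a small causal
# map. Once deployed, it determines that a failed login causes an alert and
# a successful login causes a session; rewiring it changes what causes what.

def send_alert(user: str) -> None:
    print(f"alert: repeated login failures for {user}")


def create_session(user: str) -> None:
    print(f"session created for {user}")


handlers = {
    "login_failed": send_alert,
    "login_succeeded": create_session,
}


def dispatch(event: dict) -> None:
    handlers[event["type"]](event["user"])


dispatch({"type": "login_failed", "user": "alice"})
```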

The fourth thing it means is that software products are the result of agreement between people. Software only makes it into production if it has gotten there through agreed-upon processes by the team that deploys it. The strength of software is in the collective input that went into it. In a sense, software is much more like a contract, in legal terms, than it is like a creative work. In the extended network of human and machine actors, software is the result of, the expression of, self-regulation first. Only secondarily does it, in Lessig’s terms, become a regulatory force more broadly.

What is software? Software is a form of social structure.

Existentialism in Design: Motivation

There has been a lot of recent work on the ethics of digital technology. This is a broad area of inquiry, but it includes such topics as:

  • The ethics of Internet research, including the Facebook emotional contagion study and the Encore anti-censorship study.
  • Fairness, accountability, and transparency in machine learning.
  • Algorithmic price-gouging.
  • Autonomous car trolley problems.
  • Ethical (Friendly?) AI research? This last one is maybe on the fringe…

If you’ve been reading this blog, you know I’m quite passionate about the intersection of philosophy and technology. I’m especially interested in how ethics can inform the design of digital technology, and how it can’t. My dissertation is exploring this problem in the privacy engineering literature.

I have some dissatisfactions with this field that I don’t expect to make it into my dissertation. One is that the privacy engineering literature, and academic “ethics of digital technology” more broadly, tends to be heavily informed by the law, in the sense of courts, legislatures, and states. This is motivated by the important consideration that technology, and especially technologists, should in a lot of cases be compliant with the law. As a practical matter, it certainly spares technologists the trouble of getting sued.

However, being compliant with the law is not precisely the same thing as being ethical. There’s a long ethical tradition of civil disobedience (certain non-violent protest activities, for example) which is not strictly speaking legal, though it has certainly had an impact on what is considered legal later on. Meanwhile, the point has been made, though maybe not often enough, that legal language often looks like ethical language but really shouldn’t be interpreted that way. This is a point made by Oliver Wendell Holmes, Jr. in his notable essay, “The Path of the Law”.

When the ethics of technology are not being framed in terms of legal requirements, they are often framed in terms of one of two prominent ethical frameworks. One framework is consequentialism: ethics is a matter of maximizing the beneficial consequences and minimizing the harmful consequences of one’s actions. One variation of consequentialist ethics is utilitarianism, which attempts to solve ethical questions by reducing them to a calculus over “utility”, or benefit as it is experienced or accrued by individuals. A lot of economics takes this ethical stance. Another, less quantitative variation of consequentialist ethics is present in the research ethics principle that research should maximize benefits and minimize harms to participants.

The other major ethical framework used in discussions of ethics and technology is deontological ethics. These are ethics that are about rights, duties, and obligations. Justifying deontological ethics can be a little trickier than justifying consequentialist ethics. Frequently this is done by invoking social norms, as in the case of Nissenbaum’s contextual integrity theory. Another variation of a deontological theory of ethics is Habermas’s theory of transcendental pragmatics and legitimate norms developed through communicative action. In the ideal case, these norms become encoded into law, though it is rarely true that laws are ideal.

Consequentialist considerations probably make the world a better place in some aggregate sense. Deontological considerations probably make the world a fairer, or at least more socially agreeable, place, as in their modern formulations they tend to result from social truces or compromises. I’m quite glad that these frameworks are taken seriously by academic ethicists and by the law.

However, as I’ve said I find these discussions dissatisfying. This is because I find both consequentialist and deontological ethics to be missing something. They both rely on some foundational assumptions that I believe should be questioned in the spirit of true philosophical inquiry. A more thorough questioning of these assumptions, and tentative answers to them, can be found in existentialist philosophy. Existentialism, I would argue, has not had its due impact on contemporary discourse on ethics and technology, and especially on the questions surrounding ethical technical design. This is a situation I intend to one day remedy. Though Zach Weinersmith has already made a fantastic start:

“Self Driving Car Ethics”, by Zach Weinersmith (SMBC comic on autonomous vehicle ethics)

What kinds of issues would be raised by existentialism in design? Let me try out a few examples of points made in contemporary ethics of technology discourse and a preliminary existentialist response to them.

Ethical charge: A superintelligent artificial intelligence could, if improperly designed, result in the destruction or impairment of all human life. This catastrophic risk must be avoided. (Bostrom, 2014)

Existentialist response: We are all going to die anyway. There is no catastrophic risk; there is only catastrophic certainty. We cannot make an artificial intelligence that prevents this outcome. We must instead design artificial intelligence that makes life meaningful despite its finitude.

Ethical charge: Internet experiments must not direct the browsers of unwitting people to test the URLs of politically sensitive websites. Doing this may lead to those people being harmed for being accidentally associated with the sensitive material. Researchers should not harm people with their experiments. (Narayanan and Zevenbergen, 2015)

Existentialist response: To be held responsible by a state’s criminal justice system for the actions taken by one’s browser, controlled remotely from America, is absurd. This absurdity, which pervades all life, is the real problem, not the suffering potentially caused by the experiment (because suffering in some form is inevitable, whether it is from painful circumstance or from ennui). What’s most important is the exposure of this absurdity and the potential liberation from false moralistic dogmas that limit human potential.

Ethical charge: Use of Big Data to sort individual people, for example in the case of algorithms used to choose among applicants for a job, may result in discrimination against historically disadvantaged and vulnerable groups. Care must be taken to tailor machine learning algorithms to adjust for the political protection of certain classes of people. (Barocas and Selbst, 2016)

Existentialist response: The egalitarian tendency in ethics, which demands that the greatest should invest themselves in the well-being of the weakest, is a kind of herd morality, motivated mainly by ressentiment of the disadvantaged, who blame the powerful for their frustrations. This form of ethics, which is based on base emotions like pity and envy, is life-negating because it denies the most essential impulse of life: to overcome resistance and to become great. Rather than restrict Big Data’s ability to identify and augment greatness, it should be encouraged. The weak must be supported out of a spirit of generosity from the powerful, not from a curtailment of power.

As a first cut at existentialism’s response to ethical concerns about technology, it may appear that existentialism is more permissive about the use and design of technology than consequentialism and deontology. It is possible that this conclusion will be robust to further investigation. There is a sense in which existentialism may be the most natural philosophical stance for the technologist, because a major theme in existentialist thought is the freedom to choose one’s values and the importance of overcoming the limitations on one’s power and freedom. I’ve argued before that Simone de Beauvoir, who is perhaps the most clear-minded of the existentialists, has the greatest philosophy of science because it respects this purpose of scientific research. There is a vivacity to existentialism that does not sweat the small stuff and thinks big, while at the same time acknowledging that suffering and death are inevitable facts of life.

On the other hand, existentialism is a morally demanding line of inquiry precisely because it does not use either easy metaethical heuristics (such as consequentialism or deontology) or the bald realities of the human condition as a stopgap. It demands that we tackle all the hard questions, sometimes acknowledging that they are unanswerable or answerable only in the negative, and muddle on despite the hardest truths. Its aim is to provide a truer, better morality than the alternatives.

Perhaps this is best illustrated by some questions implied by my earlier “existentialist responses” that address the currently nonexistent field of existentialism in design. These are questions I haven’t yet heard asked by scholars at the intersection of ethics and technology.

  • How could we design an artificial intelligence (or, to make it simpler, a recommendation system) that makes the most meaningful choices for its users?
  • What sort of Internet intervention would be most liberatory for the people affected by it?
  • What technology can best promote generosity from the world’s greatest people as a celebration of power and life?

These are different questions from any that you read about in the news or in the ethical scholarship. I believe they are nevertheless important ones, maybe more important than the ethical questions that are more typically asked. The theoretical frameworks employed by most ethicists make assumptions that obscure what everybody already knows about the distribution of power and its abuses, the inevitability of suffering and death, life’s absurdity, and especially the absurdity of moralizing sentiment in the face of the cruelty of reality, and so on. At best, these ethical discussions inform the interpretation and creation of law, but law is not the same as morality, and to confuse the two robs morality of what is perhaps its most essential component, which is that it is grounded meaningfully in the experience of the subject.

In future posts (and, ideally, eventually in a paper derived from those posts), I hope to flesh out more concretely what existentialism in design might look like.

References

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. OUP Oxford.

Narayanan, A., & Zevenbergen, B. (2015). No Encore for Encore? Ethical questions for web-based censorship measurement.

Weinersmith, Z. “Self Driving Car Ethics”. Saturday Morning Breakfast Cereal.

reflexive control

A theory I wish I had more time to study in depth these days is the Soviet field of reflexive control (see for example this paper by Timothy Thomas on the subject).

Reflexive control is defined as a means of conveying to a partner or an opponent specially prepared information to incline him to voluntarily make the predetermined decision desired by the initiator of the action. Even though the theory was developed long ago in Russia, it is still undergoing further refinement. Recent proof of this is the development in February 2001, of a new Russian journal known as Reflexive Processes and Control. The journal is not simply the product of a group of scientists but, as the editorial council suggests, the product of some of Russia’s leading national security institutes, and boasts a few foreign members as well.

While the paper describes the theory in broad strokes, I’m interested in how one would formalize and operationalize reflexive control. My intuitions thus far are like this: traditional control theory assumes that the controlled system is inanimate or at least not autonomous. The controlled system is steered, often dynamically, to some optimal state. But in reflexive control, the assumption is that the controlled system is autonomous and has a decision-making process or intelligence. Therefore reflexive control is a theory of influence, perhaps deception. Going beyond mere propaganda, it seems like reflexive control can be highly reactive, taking into account the reaction time of other agents in the field.
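
To make that intuition slightly more concrete, here is a toy formalization in Python. It is my own sketch, not Lefebvre’s theory or anything from Thomas’s paper: the opponent is modeled as an agent with a known decision rule over its beliefs, and the initiator searches for a message (information or disinformation) whose effect on those beliefs leads the opponent to voluntarily choose the action the initiator wants. The payoffs and belief shifts are invented.

```python
# Toy sketch of reflexive control (my own formalization, for illustration).
# The initiator knows (or assumes) the opponent's decision rule and how each
# possible message would shift the opponent's belief, then picks the message
# that induces the desired decision. All numbers are invented.

def opponent_decision(belief_enemy_strong: float) -> str:
    """The opponent's assumed decision rule: pick the higher expected payoff."""
    expected = {
        "attack": 0.8 * (1.0 - belief_enemy_strong) - 0.5 * belief_enemy_strong,
        "hold": 0.1,
    }
    return max(expected, key=expected.get)


def updated_belief(prior: float, message: str) -> float:
    """Assumed effect of each message on the opponent's belief."""
    shift = {"show_strength": 0.4, "show_weakness": -0.3, "silence": 0.0}
    return min(1.0, max(0.0, prior + shift[message]))


def reflexive_control(prior: float, desired_action: str):
    """Find a message that leads the opponent to choose the desired action."""
    for message in ("show_strength", "show_weakness", "silence"):
        if opponent_decision(updated_belief(prior, message)) == desired_action:
            return message
    return None  # no available message induces the desired decision


print(reflexive_control(prior=0.5, desired_action="hold"))  # -> "show_strength"
```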

There are many examples, from a Russian perspective, of the use of reflexive control theory during conflicts. One of the most recent and memorable was the bombing of the market square in Sarajevo in 1995. Within minutes of the bombing, CNN and other news outlets were reporting that a Serbian mortar attack had killed many innocent people in the square. Later, crater analysis of the shells that impacted in the square, along with other supporting evidence, indicated that the incident did not happen as originally reported. This evidence also threw into doubt the identities of the perpetrators of the attack. One individual close to the investigation, Russian Colonel Andrei Demurenko, Chief of Staff of Sector Sarajevo at the time, stated, “I am not saying the Serbs didn’t commit this atrocity. I am saying that it didn’t happen the way it was originally reported.” A US and Canadian officer soon backed this position. Demurenko believed that the incident was an excellent example of reflexive control, in that the incident was made to look like it had happened in a certain way to confuse decision-makers.

Thomas’s article points out that the notable expert in reflexive control in the United States is V. A. Lefebvre, a Soviet ex-pat and mathematical psychologist at UC Irvine. He is listed on a faculty listing but doesn’t seem to have a personal home page. His Wikipedia page says that reflexive theory is like the Soviet alternative to game theory. That makes sense. Reflexive theory has been used by Lefebvre to articulate a mathematical ethics, which is surely relevant to questions of machine ethics today.

Beyond its fascinating relevance to many open research questions in my field, it is interesting to see in Thomas’s article how “reflexive control” seems to capture so much of what is considered “cybersecurity” today.

One of the most complex ways to influence a state’s information resources is by use of reflexive control measures against the state’s decision-making processes. This aim is best accomplished by formulating certain information or disinformation designed to affect a specific information resource best. In this context an information resource is defined as:

  • information and transmitters of information, to include the method or technology of obtaining, conveying, gathering, accumulating, processing, storing, and exploiting that information;
  • infrastructure, including information centers, means for automating information processes, switchboard communications, and data transfer networks;
  • programming and mathematical means for managing information;
  • administrative and organizational bodies that manage information processes, scientific personnel, creators of data bases and knowledge, as well as personnel who service the means of informatizatsiya [informatization].

Unlike many people, I don’t think “cybersecurity” is very hard to define at all. The prefix “cyber-” clearly refers to the information-based control structures of a system, and “security” is just the assurance of something against threats. So we might consider “reflexive control” to be essentially equivalent to “cybersecurity”, except with an emphasis on the offensive rather than defensive aspects of cybernetic control.

I have yet to find something describing the mathematical specifics of the theory. I’d love to find something and see how it compares to other research in similar fields. It would be fascinating to see where Soviet and Anglophone research on these topics is convergent, and where it diverges.

frustrations with machine ethics

It’s perhaps because of the contemporary two cultures problem of tech and the humanities that machine ethics is in such a frustrating state.

Today I read danah boyd’s piece in The Message about technology as an arbiter of fairness. It’s another baffling conflation of data science with neoliberalism. This time, the assertion was that the ideology of the tech industry is neoliberalism, hence their idea of ‘fairness’ is individualist and set against the social fabric. It’s not clear what backs up these kinds of assertions. They are more or less refuted by the fact that industrial data science is obsessed with our network of ties for marketing reasons. If anybody understands the failure of the myth of the atomistic individual, it’s “tech folks,” a category boyd uses to capture, I guess, everyone from marketing people at Google to venture capitalists to startup engineers to IBM researchers. You know, the homogeneous category that is “tech folks.”

This kind of criticism makes the mistake of thinking that the historical past is the right lens for understanding a rapidly changing present that is often more technically sophisticated than its critics appreciate. But critical academics have fallen into the trap of critiquing neoliberalism over and over again. One problem is that tech folks don’t spend a ton of time articulating their ideology in ways that are convenient for pop-culture critique. Often their business models require rather sophisticated understandings of the market, and so on, that don’t fit readily into that kind of mold.

What’s needed is substantive progress in computational ethics. OK, so algorithms are ethically and politically important. What politics would you like to see enacted, and how do you go about implementing that? How do you do it in a way that attracts new users and is competitively funded so that it can keep up with the changing technology we use to access the web? These are the real questions. There is so little effort spent trying to answer them. Instead there’s just an endless series of op-eds bemoaning the way things continue to be bad, because that is easier than having agency about making things better.