Digifesto


About ethics and families

Most of the great historical philosophers did not have children.

I can understand why. For much of my life, I’ve been propelled by a desire to understand certain theoretical fundamentals of knowledge, ethics, and the universe. No doubt this has led me to become the scientist I am today. Since becoming a father, I have less time for these questions. I find myself involved in more mundane details of life, and find myself beginning to envy those in what I had previously considered the most banal professions. Fatherhood involves a practical responsibility that comes front-and-center, displacing youthful ideals and speculations.

I’m quite proud to now be working on what are, for me, rather applied problems. But these problems have deep philosophical roots, and I enjoy the thought that one day, as a much older man, I will be able to write a mature philosophy. For now, I would like to jot down a few notes about how my philosophy has changed.

I write this now because my work is now intersecting with other research done by folks I know are profoundly ethically motivated people. My work on what is prosaically called “technology policy” is crossing into theoretical territory currently occupied by AI Safety researchers of the rationalist or Effective Altruist vein. I’ve encountered these folks before and respect their philosophical rigor, though I’ve never quite found myself in agreement with them. I continue to work on problems in legal theory as well, which always involves straddling the gap between consequentialism and deontological ethics. My more critical colleagues may be skeptical of my move towards quantitative economic methods, as the latter are associated with a politics that has been accused of lacking integrity. In short, I have several reasons to want to explain, to myself at least, why I’m working on the problems I’ve chosen, at least as a matter of my own philosophical trajectory.

So first, a point about logic. The principle of non-contradiction imposes a certain consistency and rigor on thought and encourages a form of universalism of theory and ethics. The internal consistency of the Kantian transcendental subject is the first foundation for deontological ethics. However, owing essentially to the limitations of bounded rationality, this gives way in later theory to Habermasian discourse ethics. The internal consistency of the mind is replaced with the condition that to be involved in communicative action is to strive for agreement. Norms form from disinterested communications that collect and transcend the perspectival limits of the deliberators. In theory.

In practice, disinterested communication is all but impossible, and communicative competence is hard to find. At the time of this writing, my son does not yet know how to talk. But he communicates, and we do settle on norms, however transitory. The other day we established that he is not allowed to remove dirt from the big pot with the Ficus elastica and deposit it in other rooms of the house. This is a small accomplishment, but it highlights how unequal rationality, competence, and authority are not a secondary social aberration. They are a primary condition of life.

So much for deontology. Consequentialist ethics does not fare much better. Utility has always been a weakly theorized construct. In modern theory, it has been mathematized into something substantively meaningless. It serves mainly to describe behavior, rather than to explain it; it provides little except a just-so story for a consumerist society which is, sure enough, best at consuming itself. Attempts to link utility to something like psychological pleasure, as was done in the olden days, lead to bizarre conclusions. Parents, studies say, are not as happy as those without children. So why bother?

Nietzsche was a fierce critic of both Kantian deontological ethics and facile British utilitarianism. He argued that in the face of the absurdity of both systems, the philosopher had to derive new values from the one principle that they could not, logically, deny: life itself. He believed that a new ethics could be derived from the conditions of life, which for him was a process of overcoming resistance in pursuit of other (perhaps arbitrary) goals. Suffering, for Nietzsche, was not a blemish on life; rather, life is sacred enough to justify monstrous amounts of suffering.

Nietzsche went insane and died before he could finish his moral project. He didn’t have kids. If he had, maybe he would have come to some new conclusions about the basis for ethics.

In my humble opinion and limited experience thus far, fatherhood is largely about working to maintain the conditions of life for one’s family. Any attempt at universalism that does not extend to one’s own offspring is a practical contradiction when one considers how one was once a child. The biological chain of being is direct, immediate, and resource intensive in a way too little acknowledged in philosophical theory.

In lieu of individual utility, the reality of family highlights the priority of viability, or the capacity of a complex, living system to maintain itself and its autonomy over time. The theory of viability was developed in the 20th century through the field of cybernetics — for example, by Stafford Beer — though it was never quite successfully formulated or integrated into the now hegemonic STEM disciplines. Nevertheless, viability provides a scientific criterion by which to evaluate social meaning and ethics. I believe that there is still tremendous potential in cybernetics as an answer to longstanding philosophical quandaries, though to truly capture this value certain mathematical claims need to be fleshed out.

However, an admission of the biological connection between human beings cannot eclipse economic realities that, like it or not, have structured human life for thousands of years. And indeed, in these early days of child-rearing, I find myself ill-equipped to address all of my son’s biological needs relative to my wife and instead have a comparative advantage in the economic aspects of his, our, lives. And so my current work, which involves computational macroeconomics and the governance of technology, is in fact profoundly personal and of essential ethical importance. Economics has a reputation today for being a technical and politically compromised discipline. We forget that it was originally, and maybe still is, a branch of moral philosophy deeply engaged with questions of justice precisely because it addresses the conditions of life. This ethical imperative persists despite, or indeed because of, its technical complexity. It may be where STEM can address questions of ethics directly. If only it had the right tools.

In summary, I see promise in the possibility of computational economics, if inspired by some currently marginalized ideas from cybernetics, satisfactorily addressing some perplexing philosophical questions. My thirsting curiosity, at the very least, is slaked by daily progress along this path. I find in it the mathematical rigor I require. At the same time, there is space in this work for grappling with the troublingly political, including the politics of gender and race, which are both of course inextricably tangled with the reality of families. What does it mean, for the politics of knowledge, if the central philosophical unit and subject of knowledge is not the individual, or the state, or the market, but the family? I have not encountered even the beginning of an answer in all my years of study.

Existentialism in Design: Motivation

There has been a lot of recent work on the ethics of digital technology. This is a broad area of inquiry, but it includes such topics as:

  • The ethics of Internet research, including the Facebook emotional contagion study and the Encore anti-censorship study.
  • Fairness, accountability, and transparency in machine learning.
  • Algorithmic price-gouging.
  • Autonomous car trolley problems.
  • Ethical (Friendly?) AI research? This last one is maybe on the fringe…

If you’ve been reading this blog, you know I’m quite passionate about the intersection of philosophy and technology. I’m especially interested in how ethics can inform the design of digital technology, and how it can’t. My dissertation is exploring this problem in the privacy engineering literature.

I have some dissatisfactions with this field which I don’t expect to make it into my dissertation. One is that the privacy engineering literature, and academic “ethics of digital technology” more broadly, tends to be heavily informed by the law, in the sense of courts, legislatures, and states. This is motivated by the important consideration that technology, and especially technologists, should in many cases comply with the law. As a practical matter, it certainly spares technologists the trouble of getting sued.

However, being compliant with the law is not precisely the same thing as being ethical. There’s a long ethical tradition of civil disobedience (certain non-violent protest activities, for example) which is not strictly speaking legal, though it has certainly influenced what is considered legal later on. Meanwhile, legal language often looks like ethical language but really shouldn’t be interpreted that way, a point made (though perhaps not often enough) by Oliver Wendell Holmes, Jr. in his notable essay, “The Path of the Law.”

When the ethics of technology are not being framed in terms of legal requirements, they are often framed in terms of one of two prominent ethical frameworks. One framework is consequentialism: ethics is a matter of maximizing the beneficial consequences and minimizing the harmful consequences of one’s actions. One variation of consequentialist ethics is utilitarianism, which attempts to solve ethical questions by reducing them to a calculus over “utility”, or benefit as it is experienced or accrued by individuals. A lot of economics takes this ethical stance. Another, less quantitative variation of consequentialist ethics is present in the research ethics principle that research should maximize benefits and minimize harms to participants.

The other major ethical framework used in discussions of ethics and technology is deontological ethics. These are ethics that are about rights, duties, and obligations. Justifying deontological ethics can be a little trickier than justifying consequentialist ethics. Frequently this is done by invoking social norms, as in the case of Nissenbaum’s contextual integrity theory. Another variation of a deontological theory of ethics is Habermas’s theory of transcendental pragmatics and legitimate norms developed through communicative action. In the ideal case, these norms become encoded into law, though it is rarely true that laws are ideal.

Consequentialist considerations probably make the world a better place in some aggregate sense. Deontological considerations probably make the world a fairer, or at least more socially agreeable, place, as in their modern formulations they tend to result from social truces or compromises. I’m quite glad that these frameworks are taken seriously by academic ethicists and by the law.

However, as I’ve said I find these discussions dissatisfying. This is because I find both consequentialist and deontological ethics to be missing something. They both rely on some foundational assumptions that I believe should be questioned in the spirit of true philosophical inquiry. A more thorough questioning of these assumptions, and tentative answers to them, can be found in existentialist philosophy. Existentialism, I would argue, has not had its due impact on contemporary discourse on ethics and technology, and especially on the questions surrounding ethical technical design. This is a situation I intend to one day remedy. Though Zach Weinersmith has already made a fantastic start:

“Self Driving Car Ethics”, by Weinersmith


What kinds of issues would be raised by existentialism in design? Let me try out a few examples of points made in contemporary ethics of technology discourse and a preliminary existentialist response to them.

Ethical charge: A superintelligent artificial intelligence could, if improperly designed, result in the destruction or impairment of all human life. This catastrophic risk must be avoided. (Bostrom, 2014)

Existentialist response: We are all going to die anyway. There is no catastrophic risk; there is only catastrophic certainty. We cannot make an artificial intelligence that prevents this outcome. We must instead design artificial intelligence that makes life meaningful despite its finitude.

Ethical charge: Internet experiments must not direct the browsers of unwitting people to test the URLs of politically sensitive websites. Doing this may lead to those people being harmed for being accidentally associated with the sensitive material. Researchers should not harm people with their experiments. (Narayanan and Zevenbergen, 2015)

Existentialist response: To be held responsible by a state’s criminal justice system for the actions taken by one’s browser, controlled remotely from America, is absurd. This absurdity, which pervades all life, is the real problem, not the suffering potentially caused by the experiment (because suffering in some form is inevitable, whether it comes from painful circumstance or from ennui). What’s most important is the exposure of this absurdity and the potential liberation from false moralistic dogmas that limit human potential.

Ethical charge: Use of Big Data to sort individual people, for example in the case of algorithms used to choose among applicants for a job, may result in discrimination against historically disadvantaged and vulnerable groups. Care must be taken to tailor machine learning algorithms to adjust for the political protection of certain classes of people. (Barocas and Selbst, 2016)

Existentialist response: The egalitarian tendency in ethics, which demands that the greatest invest themselves in the well-being of the weakest, is a kind of herd morality, motivated mainly by the ressentiment of the disadvantaged, who blame the powerful for their frustrations. This form of ethics, based on base emotions like pity and envy, is life-negating because it denies the most essential impulse of life: to overcome resistance and to become great. Rather than restrict Big Data’s ability to identify and augment greatness, we should encourage it. The weak must be supported out of a spirit of generosity from the powerful, not from a curtailment of power.

As a first cut at existentialism’s response to ethical concerns about technology, it may appear that existentialism is more permissive about the use and design of technology than consequentialism and deontology. It is possible that this conclusion will be robust to further investigation. There is a sense in which existentialism may be the most natural philosophical stance for the technologist, because a major theme in existentialist thought is the freedom to choose one’s values and the importance of overcoming the limitations on one’s power and freedom. I’ve argued before that Simone de Beauvoir, who is perhaps the most clear-minded of the existentialists, has the greatest philosophy of science because it respects this purpose of scientific research. There is a vivacity to existentialism that does not sweat the small stuff and thinks big, while at the same time acknowledging that suffering and death are inevitable facts of life.

On the other hand, existentialism is a morally demanding line of inquiry precisely because it does not use either easy metaethical heuristics (such as consequentialism or deontology) or the bald realities of the human condition as a stopgap. It demands that we tackle all the hard questions, sometimes acknowledging that they are unanswerable or answerable only in the negative, and muddle on despite the hardest truths. Its aim is to provide a truer, better morality than the alternatives.

Perhaps this is best illustrated by some questions implied by my earlier “existentialist responses” that address the currently nonexistent field of existentialism in design. These are questions I haven’t yet heard asked by scholars at the intersection of ethics and technology.

  • How could we design an artificial intelligence (or, to make it simpler, a recommendation system) that makes the most meaningful choices for its users?
  • What sort of Internet intervention would be most liberatory for the people affected by it?
  • What technology can best promote generosity from the world’s greatest people as a celebration of power and life?

These are different questions from any that you read about in the news or in the ethical scholarship. I believe they are nevertheless important ones, maybe more important than the ethical questions that are more typically asked. The theoretical frameworks employed by most ethicists make assumptions that obscure what everybody already knows about the distribution of power and its abuses, the inevitability of suffering and death, life’s absurdity, and especially the absurdity of moralizing sentiment in the face of the cruelty of reality. At best, these ethical discussions inform the interpretation and creation of law, but law is not the same as morality, and to confuse the two robs morality of what is perhaps its most essential component: that it is grounded meaningfully in the experience of the subject.

In future posts (and, ideally, eventually in a paper derived from those posts), I hope to flesh out more concretely what existentialism in design might look like.

References

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104, 671.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. OUP Oxford.

Narayanan, A., & Zevenbergen, B. (2015). No Encore for Encore? Ethical questions for web-based censorship measurement.

Weinersmith, Z. “Self Driving Car Ethics”. Saturday Morning Breakfast Cereal.