
A short introduction to existentialism

I’ve been hinting that a different moral philosophical orientation towards technical design, one inspired by existentialism, would open up new research problems and technical possibilities.

I am trying to distinguish this philosophical approach from consequentialist approaches, which aim for some purportedly beneficial change in objective circumstances, and from deontological approaches, which codify the rights and duties of people towards each other. Instead of these, I’m interested in a philosophy that prioritizes meaningful individual subjective experience. It is possible that this reduces to a form of consequentialism; but because of its shift of focus from objective consequences to individual situations in the phenomenological sense, I will bracket that issue for now and return to it once the specifics of this alternative approach have been fleshed out.

I have yet to define existentialism, and indeed it’s not something that’s easy to pin down. Others have done it better than I ever will; I recommend, for example, the Stanford Encyclopedia of Philosophy article on the subject. But here is what I am getting at by use of the term, in a nutshell:

In the mid-19th century there was (according to Badiou) a dearth of good philosophy, due to the new prestige of positivism on the one hand and the high quality of poetry on the other. After the death of Hegel, who claimed to have solved all philosophical problems through his phenomenology of Spirit and its corollary, the science of Logic, the arts and sciences became independent of each other. And as happens during such periods, the people (of Europe, we’re talking about now) became disillusioned. The sciences undermined the Christian metanarratives that had previously given life its meaning through the promise of a heavenly afterlife to those who lived according to the moral order. The result was what subsequent scholars have called a “nihilism crisis”.

Friedrich Nietzsche began writing and shaking things up by proposing a radical new form of individualism that placed self-enhancement over social harmony. An important line of his argumentation showed that the moral assumptions of the conventional philosophy of his day contained contradictions and false promises that would lead the believer either to total disorientation or to life-negating despair. What was needed was an alternative, and Nietzsche began working on one. It took the radical step of grounding morality not in the abolition of suffering (which he believed was a necessary part of life) but in life itself. In his conception, what was most characteristic of life was the will to power, which has been characterized (by Bernard Reginster, I believe) as a second-order desire to overcome resistance in the pursuit of other, first-order desires. In other words, Nietzsche’s morality is based on the principle that the greatest good in life is to overcome adversity.

Nietzsche is considered one of the fathers of existentialist thought (though he is also considered many other things, as he is a writer known for his inconsistency). Another of these foundational thinkers is Søren Kierkegaard. Now that I look him up, I see that his life falls within what Badiou characterizes as the “age of poets” and/or the dark age of 19th century philosophy, and I wonder if Badiou would consider him an exception. A difficult thing about Kierkegaard, in terms of his relevance to today’s secular academic debates, is that he was explicitly and emphatically working within a Christian framework. Without going too far into it, it’s worth noting a couple of things about his work. In The Sickness Unto Death (1849), Kierkegaard also deals with the subject of despair and its relationship to one’s capabilities. For Kierkegaard, a person is caught between their finite (which means “limited” in this context) existence, with all of its necessary limitations, and their desire to transcend those limitations and attain the impossible, the infinite. In his terminology, he discusses the finite self and the infinite self, because his theology allows for the idea that there is an infinite self, which is God, and that the important philosophical crisis is about establishing one’s relationship to God despite the limitations of one’s situation. Whereas Nietzsche proposes a project of individual self-enhancement to approach the impossible, Kierkegaard’s solution is a Christian one: to accept Jesus and God’s love as the bridge between infinite potential and one’s finite existence. This is not a universally persuasive solution, though I feel it sets up the problem rather well.

The next great existentialist thinker, and indeed the one who promoted the term “existentialism” as a philosophical brand, is Jean-Paul Sartre. However, I find Sartre uninspiring and will ignore his work for now.

On the other hand, Simone de Beauvoir, who was closely associated with Sartre, wrote one of the best books on ethics and the human condition I’ve ever read: the highly readable The Ethics of Ambiguity (1949), which the Marxists have kindly put online for your reading pleasure. This work lays out the ethical agenda of existentialism in phenomenological terms that resonate well with more contemporary theory. The subject finds itself in a situation (cf. the theories of situated learning now common in HCI): in a place and time, in a particular body with certain capacities. What is within the boundaries of their conscious awareness and capacity for action is their existence, and they are aware that beyond the boundaries of their awareness is Being, which is everything else. What the subject strives for is to expand their existence in Being, subsuming it. One can see how this synthesizes the positions of Nietzsche and Kierkegaard. Where de Beauvoir goes farther is in demonstrating how one can start from this characterization of the human condition and derive from it a substantive ethics about how subjects should treat each other. It is true that the subject can never achieve the impossible, the infinite… alone. However, by investing themselves in their “projects”, subjects can extend themselves. And when these projects involve the empowerment of others, a finite subject can extend themselves through a larger and less egoistic system of life.

De Beauvoirian ethics are really nice because they are only gently prescriptive, are grounded very closely in the individual’s subjective experience of their situation, and have social justice implications that appeal to many contemporary liberal intellectuals without grounding those justice claims in resentment or zero-sum demands for reparation or redistribution. Rather, their orientation is the positive-sum, win-win relationship between the one who empowers another and the one being empowered. This is the relationship, not of master and slave, but of master and apprentice.

When I write about existentialism in design, I am talking about taking an ethical framework similar to de Beauvoir’s totally underrated existentialist ethics and using it as a set of principles for technical design.

References

Brown, John Seely, Allan Collins, and Paul Duguid. “Situated cognition and the culture of learning.” Educational Researcher 18.1 (1989): 32–42.

De Beauvoir, Simone. The Ethics of Ambiguity. Trans. Bernard Frechtman. Citadel Press, 1948.

Lave, Jean, and Etienne Wenger. Situated Learning: Legitimate Peripheral Participation. Cambridge University Press, 1991.


Subjectivity in design

One of the reasons French intellectuals have developed their own strange way of talking is that they have implicitly embraced a post-Heideggerian phenomenological stance, which deals seriously with the categories of experience of the individual subject. Americans don’t take this sort of thing so seriously because our institutions have been more post-positivist and now, increasingly, computationalist. If post-positivism makes the subject of science the powerful bureaucratic institution able to leverage statistically sound and methodologically responsible survey methodology, computationalism makes the subject of science the data analyst operating a cloud computing platform with data sourced from wherever. These movements are, probably, increasingly alienating to “regular people”, including humanists, who are attracted to phenomenology precisely because they already have all the tools for it.

To the extent that humanists are best informed about what it really means to live in the world, their position must be respected. It is really out of deference to the humble (or, sometimes, splendidly arrogant) representatives of the human subject as such that I have written about existentialism in design, which is really an attempt to ground technical design in what is philosophically “known” about the human condition.

This approach differs from “human centered design” in an important way. Human centered design considers design to be an empirically rigorous task that demands sensitivity to the particular needs of situated users. This is wise and perfectly fine except for one problem: it doesn’t scale. And as we all know, the great and animal impulse of technological progress, especially today, is to develop the one technology that revolutionizes everything for everyone, becoming the new essential infrastructure that reveals a new era of mankind. Human centered designers have everything right about design except its maniacal ambition, without which design will never achieve technology’s paramount calling. So we will put it to one side and take a different approach.

The problem is that computationalist infrastructure projects, and by this I’m referring to the Googles, the Facebooks, the Amazons, the Tencents, the Alibabas, etc., are essentially about designing efficient machines, and so they ultimately become about objective resource allocation in one sense or another. The needs of the individual subject are not as relevant to the designers of these machines as are the behavioral responses of their users to their user interfaces. What will result in more clicks, more “conversions”? Asking users what they really want, on a scale where it would affect actual design, is secondary and frivolous when A/B testing can optimize practical outcomes as efficiently as it does.
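To make that last point concrete, here is a minimal sketch of the kind of optimization loop I mean (the variant names and counts are hypothetical placeholders): a standard two-proportion z-test that picks whichever interface variant “wins” on conversions, with no reference at all to what users subjectively experience.

    from math import sqrt

    def ab_winner(conv_a, n_a, conv_b, n_b):
        """Two-proportion z-test: which UI variant 'wins' on conversions?"""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # |z| > 1.96 is significant at the 5% level; ship whichever converts more.
        return "B" if z > 1.96 else ("A" if z < -1.96 else "keep testing")

    # Hypothetical numbers; nothing here asks what the user found meaningful.
    print(ab_winner(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000))  # -> "B"

The design decision falls out of click counts alone; the subject never appears.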

I do not mean to cast aspersions on these Big Tech companies by describing their operations so baldly. I do not share the critical perspective of many of my colleagues, who write as if they have discovered, for the first time, that corporate marketing is hypocritical and that businesses are mercenary. This is just the way things are; what’s more, the engineering accomplishments involved are absolutely impressive and worth celebrating, as is the business management.

What I would like to do is propose that a technology of similar scale can be developed according to general principles that nevertheless make more adept use of what is known about the human condition. Rather than being devoted to cheap proxies of human satisfaction that address the user’s objective condition, I’m proposing a service that delivers something tailored to the subjectivity of the user.

Alain Badiou and artificial intelligence

Last week I saw Alain Badiou speak at NYU on “Philosophy between Mathematics and Poetry”, followed by a comment by Alexander Galloway, and then questions fielded from the audience.

It was wonderful to see Badiou speak. Ever since I became acquainted with his work (rather recently, in the summer of 2016), I have seen it as a very hopeful direction for philosophy. As the title of his talk implies, Badiou takes mathematics very seriously, perhaps more seriously than most mathematicians, and this distinguishes him from the many other philosophers for whom mathematics is somewhat of an embarrassment. There are few fields more intellectually rarefied than mathematics, philosophy, and poetry, and yet somehow Badiou treats each fairly, in a way that suggests how broader disciplinary and cultural divisions between the humanities and technical fields might be reconciled. (This connects to some of my work on the Philosophy of Computational Social Science.)

I have written a bit recently about existentialism in design, only to falter at the actual definition of existentialism. While I’m sure it would be incorrect to describe Badiou as an existentialist, there’s no doubt that he represents the great so-called Continental philosophical tradition, is familiar with Heidegger and Nietzsche, and so on. I see certain substantive resonances between Badiou and the existentialist writers, though I think to make the comparison now would be putting the cart before the horse.

Badiou’s position, in a nutshell, is like this:

Mathematics is a purely demonstrative form of writing and thinking. It communicates by proof, and it has a special kind of audience. It is a science. In particular, it is the science of all the possible forms of multiplicity, which is the same thing as saying that it is the science of all being, or ontology.

Poetry, on the other hand, is not about being but rather about becoming. “Becoming” for Badiou is subjective: the conscious subject encounters something new, experiences a change, sees an unrealized potential. These are events, and perhaps the greatest contribution of Badiou is his formulation of and emphasis on the event as a category. In terms of earlier works, the event might be the moment when, through Hegelian dialectic, a category is sublated. It could also perhaps correspond to when existence overcomes being in de Beauvoir’s ethics (hence the connection to existentialism I’m proposing). Good poetry, in Badiou’s thought, shows how the things we experience can break out of the structures that objectify them, turning the (subjectively perceived) impossible into a new reality.

Poetry is also “seductive”, perhaps because it is connected to realizing the impossible, or perhaps just because it’s nice to listen to (I’m unclear on Badiou’s position on this point); it encourages psychological connections to the speaker (such as transference) whether or not it is “true”. Classically, poetry meant epic poems and tragic theater. Today it could be cinema.

Philosophy has the problem that it has historically tried to be both demonstrative, like mathematics, and seductive, like poetry. It’s this impurity or tension that defines it. Philosophers need to know mathematics because it is ontology, but have to go beyond mathematics because their mission is to create events in subjectively experienced reality, which is historically situated, and therefore not merely a matter of mathematical abstraction. Philosophers are in the business of creating new forms of subjectivity, which is not the same as creating a new form of being.

I’m fine with all this.

Galloway made some comments I’m somewhat skeptical of, though I may not have understood them, since he seems to build mostly on Deleuze and Lacan, two intellectual sources I’ve never gotten into. Galloway’s idea is to connect the “digital”, with all of its associations with computing technology, algorithms, the Internet, etc., to Badiou’s understanding of the mathematical, and to connect the “analog”, which is not discretized like the digital, to poetry. He suggested that Badiou’s sense of mathematics was arithmetic and excluded the geometric.

I take this interpretation of Galloway’s to be clever, but incorrect and uncharitable. It’s clever because it co-opts a great thinker’s work into the sociopolitical agenda of bolstering the cultural capital of the humanities against erosion by algorithmic curation and diminution relative to the fortunes of the technology industries. This has been the agenda of professional humanists for a long time, and it is annoying (to me), but I suppose it is necessary for the maintenance of the humanities, which are important.

However, I believe the interpretation is incorrect and uncharitable to Badiou because, though Badiou’s paradigmatic example of mathematics is set theory, he seems to have a solid enough grasp of Kurt Gödel’s main points to understand that mathematics includes the great variety of axiomatic systems, and these absolutely, indisputably include geometry and real analysis and all the rest. The fact that logical proof is a discrete process which can be reduced to and from Boolean logic and automated in an electric circuit is, of course, the foundational science of computation that we owe to Turing, Church, von Neumann, and others. It’s for these reasons that the potential of computation is so impressive and imposing: it potentially represents all possible forms of being. There are no limits to AI, at least none based on these mathematical foundations.

There were a number of good questions from the audience, which led Badiou to clarify his position. The Real is relational; it is for a subject. This distinguishes it from Being, which is never relational (though of course there are mathematical theories of relations, and this would seem to be a contradiction in Badiou’s thought?). He acknowledges that a difficult question is the part of Being in the Real.

Meanwhile, the Subject is always the result of an event.

Physics is a science of the existing form of the real, as opposed to the possible forms. Mathematics describes the possible forms of what exists. So empirical science can discover which mathematical form is the one that exists for us.

Another member of the audience asked about the impossibility of communism, which was on point because Badiou has at times defended communism or argued that the purpose of philosophy is to bring about communism. He made the point that one could not mathematically disprove the possibility of communism.

The real question, if I may be so bold as to comment afterwards, is whether communism can exist in our reality. Suppose that economics is like physics in that it is a science of the real as it exists for us. What if economics shows that communism is impossible in our reality?

Though it wasn’t quite made explicit, here is the subtle point of departure Badiou makes from what is otherwise conventionally unobjectionable. He would argue, I believe, that the purpose of philosophy is to create a new subjective reality where the impossible is made real, and he doesn’t see this process as necessarily bounded by, say, physics in its current manifestation. There is the possibility of a new event, and of seizing that event, through, for example, poetry. This is the article of faith in philosophy, and in poets, that has established them as the last bastion against dehumanization, objectification, reification, and the dangers of technique and technology since at least Heidegger’s “The Question Concerning Technology”.

Which circles us back to the productive question: how would we design a technology that furthers this objective of creating new subjective realities, new events? This is what I’m after.

Existentialism in Design: Comparison with “Friendly AI” research

Turing Test [xkcd]

I made a few references to Friendly AI research in my last post on Existentialism in Design. I positioned existentialism as an ethical perspective that contrasts with the perspective taken by the Friendly AI research community, among others. This prompted a response from a pseudonymous commenter (in a sadly condescending way, I must say) who linked me to the post “Complexity of Value” on what I suppose you might call the elite rationalist forum Arbital. I’ll take this as an invitation to elaborate on how I think existentialism offers an alternative to the Friendly AI perspective on ethics in technology, and particularly on the ethics of artificial intelligence.

The first and most significant point of departure between my work on this subject and Friendly AI research is that I emphatically don’t believe the most productive way to approach the problem of ethics in AI is to consider the problem of how to program a benign Superintelligence. This is for reasons I’ve written up in “Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument”, which sums up arguments made in several blog posts about Nick Bostrom’s book on the subject. This post goes beyond the argument in the paper to address further objections I’ve heard from Friendly AI and X-risk enthusiasts.

What superintelligence gives researchers is a simplified problem. Rather than deal with many of the inconvenient contingencies of humanity’s technically mediated existence, superintelligence makes these irrelevant in comparison to the limiting case where technology not only mediates, but dominates. The question asked by Friendly AI researchers is how an omnipotent computer should be programmed so that it creates a utopia and not a dystopia. It is precisely because the computer is omnipotent that it is capable of producing a utopia and is in danger of creating a dystopia.

If you don’t think superintelligences are likely (perhaps because you think there are limits to the ability of algorithms to improve themselves autonomously), then you get a world that looks a lot more like the one we have now. In our world, artificial intelligence has been incrementally advancing for maybe a century now, starting with the foundations of computing in mathematical logic and electrical engineering. It proceeds through theoretical and engineering advances in fits and starts, often through the application of technology to solve particular problems, such as natural language processing, robotic control, and recommendation systems. This is the world of “weak AI”, as opposed to “strong AI”.

It is also a world where AI is not the great source of human bounty or human disaster. Rather, it is a form of economic capital with disparate effects throughout the total population of humanity. It can be a source of inspiring serendipity, banal frustration, and humor.

Let me be more specific, using the post that I was linked to. In it, Eliezer Yudkowsky posits that a (presumably superintelligent) AI will be directed to achieve something, which he calls “value”. The post outlines a “Complexity of Value” thesis. Roughly, this means that the things we want AI to do cannot be easily compressed into a brief description. For an AI not to be very bad, it will need either to contain a lot of information about what people really want (more than can be easily described) or to collect that information as it runs.

That sounds reasonable to me. There’s plenty of good reasons to think that even a single person’s valuations are complex, hard to articulate, and contingent on their circumstances. The values appropriate for a world dominating supercomputer could well be at least as complex.

But so what? Yudkowsky argues that this thesis, if true, has implications for other theoretical issues in superintelligence theory. But does it address any practical questions of artificial intelligence problem solving or design? That it is difficult to mathematically specify all of value or normativity, and that to attempt to do so one would need a lot of data about humanity in its particularity, is a point that has been apparent to ethical philosophy for a long time. It’s a surprise, or perhaps a disappointment, only to those who must mathematize everything. Articulating this point in terms of Kolmogorov complexity does not particularly add to the insight so much as translate it into an idiom used by particular researchers.
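For what it’s worth, the thesis can be written down compactly in that idiom. A rough formalization (my paraphrase; this is not Yudkowsky’s notation):

    \[
    K(V) \;=\; \min \{\, |p| \;:\; U(p) = V \,\}
    \]

Here \(U\) is a universal machine, \(p\) ranges over programs, \(|p|\) is a program’s length, and \(V\) is a complete specification of what we want the AI to do. The Complexity of Value thesis is then the claim that \(K(V)\) is large: there is no short program that computes \(V\), so an adequate AI must either ship with, or acquire at runtime, something on the order of \(K(V)\) bits of information about human values.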

Where am I departing from this with “Existentialism in Design”?

Rather than treat “value” as a wholly abstract metasyntactic variable representing the goals of a superintelligent, omniscient machine, I’m approaching the problem more practically. First, I’m limiting myself to big sociotechnical complexes wherein a large number of people have some portion of their interactions mediated by digital networks and data centers and, why not, smartphones and even the imminent dystopia of IoT devices. This may be setting my work up for obsolescence, but it also grounds the work in potential action. Since these practical problems rely on much of the same mathematical apparatus as the more far-reaching problems, there is a chance that a fundamental theorem may arise from even this applied work.

That restriction on hardware may seem banal, but it reflects the particular philosophical question I am interested in. The motivation for considering existentialist ethics in particular is that it suggests new kinds of problems that are relevant to ethics but that have not been considered carefully or solved.

As I outlined in a previous post, many ethical positions are framed either in terms of consequentialism, which evaluates the utility of a variety of outcomes, or deontology, which is concerned with the consistency of behavior with more or less objectively construed duties. Consequentialism is attractive to superintelligence theorists because they imagine their AIs to have the ability to cause any consequence. The critical question is how to give such an AI a specification that leads to the best, or at least adequate, consequences for humanity. This is a hard problem, under their assumptions.

Deontology is, as far as I can tell, less interesting to superintelligence theorists. This may be because deontology tends to be an ethics of human behavior, and for superintelligence theorists human behavior is rendered virtually insignificant by superintelligent agency. But deontology is attractive as an ethics precisely because it is relevant to people’s actions. It is intended as a way of prescribing duties to a person like you and me.

With Existentialism in Design (a term I may go back and change in all these posts at some point; I’m not sure I love the phrase), I am trying to do something different.

I am trying to propose an agenda for creating a more specific goal function for a limited but still broad-reaching AI, assigning something to its ‘value’ variable, if you will. Because the power of the AI to bring about consequences is limited, its potential for success and failure is also more limited. Catastrophic and utopian outcomes are not particularly relevant; performance can be evaluated in a much more pedestrian way.

Moreover, the valuations internalized by the AI are not to be handled in a directly consequentialist way. I have suggested that an AI could be programmed to maximize the meaningfulness of its choices for its users. This introduces a new variable, one that is more semantically loaded than “value”, though perhaps just as complex and amorphous.

What is particular to this variable, “meaningfulness”, is that it is a feature of the subjective experience of the user, the human interacting with the system. It is only secondarily or derivatively an objective state of the world that can be evaluated for utility. To unpack it into a technical specification, we will require a model (perhaps a provisional one) of the human condition and of what makes life meaningful. This may very well include such things as autonomy, the ability to make one’s own choices.
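As a gesture at what such a specification might look like, here is a minimal sketch in code. Everything in it is an assumption for the sake of illustration: the features (autonomy, fit with the user’s projects, growth through resistance), the weights, and the scores are hypothetical placeholders, not a worked-out model of the human condition.

    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        predicted_clicks: float  # the usual engagement proxy
        autonomy: float          # does it expand the user's capacity to choose?
        project_fit: float       # does it advance the user's own projects?
        growth: float            # does it offer resistance worth overcoming?

    def meaningfulness(opt: Option) -> float:
        # Provisional, hypothetical model: meaningfulness as a weighted blend
        # of existentialist features; predicted_clicks is deliberately ignored.
        return 0.4 * opt.autonomy + 0.4 * opt.project_fit + 0.2 * opt.growth

    def recommend(options: list) -> Option:
        # Choose the option with the highest meaningfulness for the user,
        # rather than the one that maximizes engagement.
        return max(options, key=meaningfulness)

    options = [
        Option("autoplay another video", 0.9, autonomy=0.1, project_fit=0.2, growth=0.1),
        Option("tutorial for the user's own project", 0.3, autonomy=0.7, project_fit=0.9, growth=0.8),
    ]
    print(recommend(options).name)  # -> the tutorial, despite fewer predicted clicks

The point of the sketch is only that the objective has changed registers: the quantity being maximized is indexed to the user’s situation, and the hard, open problem lies entirely in how a function like meaningfulness could be responsibly specified.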

I can anticipate some objections along the lines that what I am proposing still looks like a special case of more general AI ethics research. Is what I’m proposing really fundamentally any different than a consequentialist approach?

I will punt on this for now. I’m not sure of the answer, to be honest. I could see it going one of two different ways.

The first is that yes, what I’m proposing can be thought of as a narrow special case of a more broadly consequentialist approach to AI design. However, I would argue that the specificity matters because of the potency of existentialist moral theory. The project of specifying the latter as a kind of utility function suitable for programming into an AI is in itself a difficult and interesting problem, without it necessarily overturning the foundations of AI theory itself. It is worth pursuing at the very least as an exercise, and beyond that as an ethical intervention.

The second case is that there may be something particular about existentialism that makes encoding it different from encoding a consequentialist utility function. I suspect, but leave to be shown, that this is the case. Why? Because existentialism (which I haven’t yet gone into much detail describing) is largely a philosophy about how we (individually, as beings thrown into existence) come to have values in the first place and what we do when those values or the absurdity of circumstances lead us to despair. Existentialism is really a kind of phenomenological metaethics in its own right, one that is quite fluid and resists encapsulation in a utility calculus. Most existentialists would argue that at the point where one externalizes one’s values as a utility function as opposed to living as them and through them, one has lost something precious. The kinds of things that existentialism derives ethical imperatives from, such as the relationship between one’s facticity and transcendence, or one’s will to grow in one’s potential and the inevitability of death, are not the kinds of things a (limited, realistic) AI can have much effect on. They are part of what has been perhaps quaintly called the human condition.

To even try to describe this research problem, one has to shift linguistic registers. The existentialist and AI research traditions developed in very divergent contexts. This is one reason to believe that their ideas are new to each other, and that a synthesis may be productive. In order to accomplish this, one needs a charitably considered, working understanding of existentialism. I will try to provide one in my next post in this series.

Existentialism in Design: Motivation

There has been a lot of recent work on the ethics of digital technology. This is a broad area of inquiry, but it includes such topics as:

  • The ethics of Internet research, including the Facebook emotional contagion study and the Encore anti-censorship study.
  • Fairness, accountability, and transparency in machine learning.
  • Algorithmic price-gouging.
  • Autonomous car trolley problems.
  • Ethical (Friendly?) AI research? This last one is maybe on the fringe…

If you’ve been reading this blog, you know I’m quite passionate about the intersection of philosophy and technology. I’m especially interested in how ethics can inform the design of digital technology, and how it can’t. My dissertation is exploring this problem in the privacy engineering literature.

I have some dissatisfactions with this field that I don’t expect to make it into my dissertation. One is that the privacy engineering literature, and academic “ethics of digital technology” more broadly, tends to be heavily informed by the law, in the sense of courts, legislatures, and states. This is motivated by the important consideration that technology, and especially technologists, should in a lot of cases be compliant with the law. As a practical matter, it certainly spares technologists the trouble of getting sued.

However, being compliant with the law is not precisely the same thing as being ethical. There’s a long ethical tradition of civil disobedience (certain non-violent protest activities, for example) which is not strictly speaking legal, though it has certainly had an impact on what is considered legal later on. Meanwhile, the point has been made, though maybe not often enough, that legal language often looks like ethical language but really shouldn’t be interpreted that way. This is a point made by Oliver Wendell Holmes, Jr. in his notable essay “The Path of the Law”.

When the ethics of technology are not being framed in terms of legal requirements, they are often framed in terms of one of two prominent ethical frameworks. One framework is consequentialism: ethics is a matter of maximizing the beneficial consequences and minimizing the harmful consequences of one’s actions. One variation of consequentialist ethics is utilitarianism, which attempts to solve ethical questions by reducing them to a calculus over “utility”, or benefit as it is experienced or accrued by individuals. A lot of economics takes this ethical stance. Another, less quantitative variation of consequentialist ethics is present in the research ethics principle that research should maximize benefits and minimize harms to participants.
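Schematically, the utilitarian calculus picks the action that maximizes summed individual utility:

    \[
    a^{*} \;=\; \arg\max_{a \in A} \; \sum_{i=1}^{n} u_i(a)
    \]

where \(A\) is the set of available actions and \(u_i(a)\) is the benefit of action \(a\) as experienced or accrued by individual \(i\). Nearly all of the philosophical work, of course, is hidden in how \(u_i\) gets defined and measured.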

The other major ethical framework used in discussions of ethics and technology is deontological ethics. These are ethics that are about rights, duties, and obligations. Justifying deontological ethics can be a little trickier than justifying consequentialist ethics. Frequently this is done by invoking social norms, as in the case of Nissenbaum’s contextual integrity theory. Another variation of a deontological theory of ethics is Habermas’s theory of transcendental pragmatics and legitimate norms developed through communicative action. In the ideal case, these norms become encoded into law, though it is rarely true that laws are ideal.

Consequentialist considerations probably make the world a better place in some aggregate sense. Deontological considerations probably make the world a fairer, or at least more socially agreeable, place, as in their modern formulations they tend to result from social truces or compromises. I’m quite glad that these frameworks are taken seriously by academic ethicists and by the law.

However, as I’ve said I find these discussions dissatisfying. This is because I find both consequentialist and deontological ethics to be missing something. They both rely on some foundational assumptions that I believe should be questioned in the spirit of true philosophical inquiry. A more thorough questioning of these assumptions, and tentative answers to them, can be found in existentialist philosophy. Existentialism, I would argue, has not had its due impact on contemporary discourse on ethics and technology, and especially on the questions surrounding ethical technical design. This is a situation I intend to one day remedy. Though Zach Weinersmith has already made a fantastic start:

[SMBC comic: “Self Driving Car Ethics”, by Zach Weinersmith, on autonomous vehicle ethics]

What kinds of issues would be raised by existentialism in design? Let me try out a few examples of points made in contemporary ethics of technology discourse and a preliminary existentialist response to them.

Ethical charge: A superintelligent artificial intelligence could, if improperly designed, result in the destruction or impairment of all human life. This catastrophic risk must be avoided. (Bostrom, 2014)

Existentialist response: We are all going to die anyway. There is no catastrophic risk; there is only catastrophic certainty. We cannot make an artificial intelligence that prevents this outcome. We must instead design artificial intelligence that makes life meaningful despite its finitude.

Ethical charge: Internet experiments must not direct the browsers of unwitting people to test the URLs of politically sensitive websites. Doing this may lead to those people being harmed for being accidentally associated with the sensitive material. Researchers should not harm people with their experiments. (Narayanan and Zevenbergen, 2015)

Existentialist response: To be held responsible by a state’s criminal justice system for the actions taken by one’s browser, controlled remotely from America, is absurd. This absurdity, which pervades all life, is the real problem, not the suffering potentially caused by the experiment (because suffering in some form is inevitable, whether from painful circumstance or from ennui). What’s most important is the exposure of this absurdity and the potential liberation from false moralistic dogmas that limit human potential.

Ethical charge: Use of Big Data to sort individual people, for example in the case of algorithms used to choose among applicants for a job, may result in discrimination against historically disadvantaged and vulnerable groups. Care must be taken to tailor machine learning algorithms to adjust for the political protection of certain classes of people. (Barocas and Selbst, 2016)

Existentialist response: The egalitarian tendency in ethics, which demands that the greatest invest themselves in the well-being of the weakest, is a kind of herd morality, motivated mainly by the ressentiment of the disadvantaged, who blame the powerful for their frustrations. This form of ethics, based on base emotions like pity and envy, is life-negating because it denies the most essential impulse of life: to overcome resistance and become great. Rather than restrict Big Data’s ability to identify and augment greatness, we should encourage it. The weak must be supported out of a spirit of generosity from the powerful, not from a curtailment of power.

As a first cut at existentialism’s response to ethical concerns about technology, it may appear that existentialism is more permissive about the use and design of technology than consequentialism and deontology. It is possible that this conclusion will be robust to further investigation. There is a sense in which existentialism may be the most natural philosophical stance for the technologist, because a major theme in existentialist thought is the freedom to choose one’s values and the importance of overcoming the limitations on one’s power and freedom. I’ve argued before that Simone de Beauvoir, who is perhaps the most clear-minded of the existentialists, has the greatest philosophy of science because it respects this purpose of scientific research. There is a vivacity to existentialism that does not sweat the small stuff and thinks big, while at the same time acknowledging that suffering and death are inevitable facts of life.

On the other hand, existentialism is a morally demanding line of inquiry precisely because it does not use either easy metaethical heuristics (such as consequentialism or deontology) or the bald realities of the human condition as a stopgap. It demands that we tackle all the hard questions, sometimes acknowledging that they are unanswerable or answerable only in the negative, and muddle on despite the hardest truths. Its aim is to provide a truer, better morality than the alternatives.

Perhaps this is best illustrated by some questions implied by my earlier “existentialist responses” that address the currently nonexistent field of existentialism in design. These are questions I haven’t yet heard asked by scholars at the intersection of ethics and technology.

  • How could we design an artificial intelligence (or, to make it simpler, a recommendation system) that makes the most meaningful choices for its users?
  • What sort of Internet intervention would be most liberatory for the people affected by it?
  • What technology can best promote generosity from the world’s greatest people as a celebration of power and life?

These are different questions from any that you read about in the news or in the ethics scholarship. I believe they are nevertheless important ones, maybe more important than the ethical questions that are more typically asked. The theoretical frameworks employed by most ethicists make assumptions that obscure what everybody already knows about the distribution of power and its abuses, the inevitability of suffering and death, life’s absurdity, and especially the absurdity of moralizing sentiment in the face of the cruelty of reality, and so on. At best, these ethical discussions inform the interpretation and creation of law, but law is not the same as morality, and to confuse the two robs morality of what is perhaps its most essential component: that it is grounded meaningfully in the experience of the subject.

In future posts (and, ideally, eventually in a paper derived from those posts), I hope to flesh out more concretely what existentialism in design might look like.

References

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104, 671.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Narayanan, A., & Zevenbergen, B. (2015). No Encore for Encore? Ethical questions for web-based censorship measurement.

Weinersmith, Z. “Self Driving Car Ethics”. Saturday Morning Breakfast Cereal.