
Existentialism in Design: Comparison with “Friendly AI” research

Turing Test [xkcd]

I made a few references to Friendly AI research in my last post on Existentialism in Design. I positioned existentialism as an ethical perspective that contrasts with the perspective taken by the Friendly AI research community, among others. This prompted a response by a pseudonymous commenter (in a sadly condescending way, I must say) who linked me to a post, “Complexity of Value”, on what I suppose you might call the elite rationalist forum Arbital. I’ll take this as an invitation to elaborate on how I think existentialism offers an alternative to the Friendly AI perspective on ethics in technology, and particularly the ethics of artificial intelligence.

The first and most significant point of departure between my work on this subject and Friendly AI research is that I emphatically don’t believe the most productive way to approach the problem of ethics in AI is to consider the problem of how to program a benign Superintelligence. This is for reasons I’ve written up in “Don’t Fear the Reaper: Refuting Bostrom’s Superintelligence Argument”, which sums up arguments made in several blog posts about Nick Bostrom’s book on the subject. This post goes beyond the argument in the paper to address further objections I’ve heard from Friendly AI and X-risk enthusiasts.

What superintelligence gives researchers is a simplified problem. Rather than deal with many of the inconvenient contingencies of humanity’s technically mediated existence, superintelligence makes these irrelevant in comparison to the limiting case where technology not only mediates, but dominates. The question asked by Friendly AI researchers is how an omnipotent computer should be programmed so that it creates a utopia and not a dystopia. It is precisely because the computer is omnipotent that it is capable of producing a utopia and is in danger of creating a dystopia.

If you don’t think superintelligences are likely (perhaps because you think there are limits to the ability of algorithms to improve themselves autonomously), then you get a world that looks a lot more like the one we have now. In our world, artificial intelligence has been incrementally advancing for maybe a century now, starting with the foundations of computing in mathematical logic and electrical engineering. It proceeds through theoretical and engineering advances in fits and starts, often through the application of technology to solve particular problems, such as natural language processing, robotic control, and recommendation systems. This is the world of “weak AI”, as opposed to “strong AI”.

It is also a world where AI is not the great source of human bounty or human disaster. Rather, it is a form of economic capital with disparate effects throughout the total population of humanity. It can be a source of inspiring serendipity, banal frustration, and humor.

Let me be more specific, using the post that I was linked to. In it, Eliezer Yudkowsky posits that a (presumably superintelligent) AI will be directed to achieve something, which he calls “value”. The post outlines a “Complexity of Value” thesis. Roughly, this means that the things that we want AI to do cannot be easily compressed into a brief description. For an AI to not be very bad, it will need to either contain a lot of information about what people really want (more than can be easily described) or collect that information as it runs.
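To gloss the thesis in the post’s own algorithmic-information idiom (this paraphrase and the notation below are mine, not Yudkowsky’s):

```latex
% My paraphrase of the Complexity of Value thesis in Kolmogorov terms.
% Let V be a full specification of what we want the AI to do, and let
% U be a fixed universal machine. The Kolmogorov complexity of V is the
% length of the shortest program that outputs V:
\[
  K(V) \;=\; \min_{\,p \,:\, U(p) = V} \lvert p \rvert .
\]
% The thesis asserts that K(V) is large: no brief description recovers V,
% so an acceptable AI must either carry on the order of K(V) bits of
% information about human values up front, or gather them as it runs.
```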

That sounds reasonable to me. There’s plenty of good reasons to think that even a single person’s valuations are complex, hard to articulate, and contingent on their circumstances. The values appropriate for a world dominating supercomputer could well be at least as complex.

But so what? Yudkowsky argues that this thesis, if true, has implications for other theoretical issues in superintelligence theory. But does it address any practical questions of artificial intelligence problem solving or design? That it is difficult to mathematically specify the whole of value or normativity, and that to attempt to do so one would need a lot of data about humanity in its particularity, is a point that has been apparent to ethical philosophy for a long time. It’s a surprise or perhaps disappointment only to those who must mathematize everything. Articulating this point in terms of Kolmogorov complexity does not particularly add to the insight so much as translate it into an idiom used by particular researchers.

Where am I departing from this with “Existentialism in Design”?

Rather than treat “value” as a wholly abstract metasyntactic variable representing the goals of a superintelligent, omniscient machine, I’m approaching the problem more practically. First, I’m limiting myself to big sociotechnical complexes wherein a large number of people have some portion of their interactions mediated by digital networks and data centers and, why not, smartphones and even the imminent dystopia of IoT devices. This may be setting my work up for obsolescence, but it also grounds the work in potential action. Since these practical problems rely on much of the same mathematical apparatus as the more far-reaching problems, there is a chance that a fundamental theorem may arise from even this applied work.

That restriction on hardware may seem banal, but it reflects the particular philosophical question I am interested in. The motivation for considering existentialist ethics in particular is that it suggests new kinds of problems that are relevant to ethics but which have not been considered carefully or solved.

As I outlined in a previous post, many ethical positions are framed either in terms of consequentialism, evaluating the utility of a variety of outcomes, or deontology, concerned with the consistency of behavior with more or less objectively construed duties. Consequentialism is attractive to superintelligence theorists because they imagine their AIs to have the ability to cause any consequence. The critical question is how to give it a specification that leads to the best or adequate consequences for humanity. This is a hard problem, under their assumptions.

Deontology is, as far as I can tell, less interesting to superintelligence theorists. This may be because deontology tends to be an ethics of human behavior, and for superintelligence theorists human behavior is rendered virtually insignificant by superintelligent agency. But deontology is attractive as an ethics precisely because it is relevant to people’s actions. It is intended as a way of prescribing duties to a person like you and me.

With Existentialism in Design (a term I may go back and change in all these posts at some point; I’m not sure I love the phrase), I am trying to do something different.

I am trying to propose an agenda for creating a more specific goal function for a limited but still broad-reaching AI, assigning something to its ‘value’ variable, if you will. Because the power of the AI to bring about consequences is limited, its potential for success and failure is also more limited. Catastrophic and utopian outcomes are not particularly relevant; performance can be evaluated in a much more pedestrian way.

Moreover, the valuations internalized by the AI are not to be done in a directly consequentialist way. I have suggested that an AI could be programmed to maximize the meaningfulness of its choices for its users. This is introducing a new variable, one that is more semantically loaded than “value”, though perhaps just as complex and amorphous.

Particular to this variable, “meaningfulness”, is that it is a feature of the subjective experience of the user, or the human interacting with the system. It is only secondarily or derivatively an objective state of the world that can be evaluated for utility. To unpack it into a technical specification, we will require a model (perhaps a provisional one) of the human condition and what makes life meaningful. This may well include such things as autonomy, the ability to make one’s own choices.
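As a deliberately toy illustration of the shape of objective I have in mind, here is a sketch in Python. Everything in it is my own illustrative assumption (the `Action` type, the `meaningfulness_score` function, the crude autonomy bonus); it is a sketch of the problem, not a proposed implementation.

```python
# Hypothetical sketch: score candidate actions not by a generic "value"
# estimate alone, but by an estimated contribution to the user's sense
# of meaningfulness. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    label: str
    predicted_utility: float    # conventional consequentialist estimate
    expands_user_choices: bool  # crude proxy for preserving autonomy

def meaningfulness_score(action: Action, autonomy_weight: float = 0.5) -> float:
    """Provisional model: meaningfulness blends expected utility with a
    bonus for actions that leave the user's own choices open."""
    autonomy_bonus = autonomy_weight if action.expands_user_choices else 0.0
    return action.predicted_utility + autonomy_bonus

def choose(actions: list[Action]) -> Action:
    # Select the action with the highest estimated meaningfulness,
    # rather than the highest raw predicted utility.
    return max(actions, key=meaningfulness_score)

options = [
    Action("auto-complete the decision for the user", 0.9, False),
    Action("surface options and let the user decide", 0.7, True),
]
print(choose(options).label)  # the autonomy bonus flips the ranking
```

The point of the toy is only that the objective is semantically loaded: the autonomy term stands in for a whole provisional model of the human condition that a real specification would have to articulate.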

I can anticipate some objections along the lines that what I am proposing still looks like a special case of more general AI ethics research. Is what I’m proposing really fundamentally any different than a consequentialist approach?

I will punt on this for now. I’m not sure of the answer, to be honest. I could see it going one of two different ways.

The first is that yes, what I’m proposing can be thought of as a narrow special case of a more broadly consequentialist approach to AI design. However, I would argue that the specificity matters because of the potency of existentialist moral theory. The project of specifying the latter as a kind of utility function suitable for programming into an AI is in itself a difficult and interesting problem, without it necessarily overturning the foundations of AI theory itself. It is worth pursuing at the very least as an exercise and beyond that as an ethical intervention.

The second case is that there may be something particular about existentialism that makes encoding it different from encoding a consequentialist utility function. I suspect, but leave to be shown, that this is the case. Why? Because existentialism (which I haven’t yet gone into much detail describing) is largely a philosophy about how we (individually, as beings thrown into existence) come to have values in the first place and what we do when those values or the absurdity of circumstances lead us to despair. Existentialism is really a kind of phenomenological metaethics in its own right, one that is quite fluid and resists encapsulation in a utility calculus. Most existentialists would argue that at the point where one externalizes one’s values as a utility function as opposed to living as them and through them, one has lost something precious. The kinds of things that existentialism derives ethical imperatives from, such as the relationship between one’s facticity and transcendence, or one’s will to grow in one’s potential and the inevitability of death, are not the kinds of things a (limited, realistic) AI can have much effect on. They are part of what has been perhaps quaintly called the human condition.

To even try to describe this research problem, one has to shift linguistic registers. The existentialist and AI research traditions developed in very divergent contexts. This is one reason to believe that their ideas are new to each other, and that a synthesis may be productive. In order to accomplish this, one needs a charitably considered, working understanding of existentialism. I will try to provide one in my next post in this series.

Reflecting on “Technoscience and Expressionism” by @FractalOntology

I’ve come across Joseph Weissman’s (@FractalOntology) “Technoscience and Expressionism” and am grateful for it, as it’s filled me in on a philosophical position that I missed the first time around, accelerationism. I’m not a Deleuzian and prefer my analytic texts to plod, so I can’t say I understood all of the essay. On the other hand, I gather the angle of this kind of philosophizing is intentionally psychotherapeutic and hence serves an artistic/literary function rather than one that explicitly guides praxis.

I am curious about the essay because I would like to see a thorough analysis of the political possibilities for the 21st century that gets past 20th century tropes. The passions of journalistic and intellectual debate have an atavistic tendency due to a lack of imagination that I would like to avoid in my own life and work.

Accelerationism looks new. It was pronounced in a manifesto, which is a good start.

Here is a quote from it:

Democracy cannot be defined simply by its means — not via voting, discussion, or general assemblies. Real democracy must be defined by its goal — collective self-​mastery. This is a project which must align politics with the legacy of the Enlightenment, to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves. We need to posit a collectively controlled legitimate vertical authority in addition to distributed horizontal forms of sociality, to avoid becoming the slaves of either a tyrannical totalitarian centralism or a capricious emergent order beyond our control. The command of The Plan must be married to the improvised order of The Network.

Hell yeah, the Enlightenment! Sign me up!

The manifesto calls for an end to the left’s emphasis on local action, transparency, and direct democracy. Rather, it calls for a muscular hegemonic left that fully employs and deploys “technoscience”.

It is good to be able to name this political tendency and distinguish it from other left tendencies. It is also good to distinguish it from “right accelerationism”, which Weissman identifies with billionaires who want to create exurb communities.

A left-accelerationist impulse is today playing out dramatically against a right-accelerationist one. And the right-accelerationists are about as dangerous as you may imagine. With Silicon Valley VCs, and libertarian technologists more generally, reading Nick Land on geopolitical fragmentation, the reception or at least receptivity to hard-right accelerants seems problematically open (and the recent $2M campaign proposing the segmentation of California into six microstates seems to provide some evidence for this). Billionaires consuming hard-right accelerationist materials arguing for hyper-secessionism undoubtedly amounts to a critically dangerous situation. I suspect that the right-accelerationist materials, perspectives, affect, and energy express a similar shadow, if they are not partly what is catalyzing the resurgence of micro-fascisms elsewhere (and macro ones as well; perhaps most significant to my mind here is the overlap of right-acceleration with white nationalism, and more generally what is deplorably and disingenuously called “race realism”, which is of course simply racism; consider Marine Le Pen’s fascist front, which recently won a quarter of France’s seats in the European Parliament, or UKIP’s resurgence in Great Britain. While we may not hear accelerationist allegiances and watchwords explicitly, the political implications and continuity are at the very least somewhat unsettling…)

There is an unfortunate conflation of several different points of view here. It is too easy to associate racism, wealth, and libertarianism, as these are the nightmares of the left’s political imagination. If ideological writing is therapeutic, a way of articulating one’s dreams, then this is entirely appropriate, with a caveat: every nightmare is a creation of one’s own psychology more so than a reflection of the real world.

The same elisions are made by Sam Frank in his recent article thematizing Silicon Valley libertarianism, friendly artificial intelligence research, and contemporary rationalism as a self-help technique. There are interesting organizational ties between these institutions that are genuinely worth investigating, but it would be lazy to collapse vast swathes of the intellectual spectrum into binaries.

In March 2013 I wrote about the Bay Area Rationalists:

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.

I would like to say “I called it”–Sam Frank has recently written just such a sensationalist, exploitative piece in Harper’s Magazine. It is thoroughly enjoyable and I wouldn’t say it’s inaccurate. But I don’t think this is the best way to get to know these people. A better one is to attend a CFAR workshop. It used to be that you could avoid the fee with a promise to volunteer, but there was a money-back guarantee which extended to one’s promise to volunteer. If that’s still the case, then one can essentially attend for free.

Another way to engage this community intellectually, which I would encourage the left accelerationists to do because it’s interesting, is to start participating on LessWrong. For some reason this community is not subject to ideological raids like so many other community platforms. I think it could stand an influx of Deleuze.

Ultimately the left/right divide comes down to a question of distribution of resources and/or surplus. Left accelerationist tactics appear from here to be a more viable way of seizing resources than direct democracy. However, the question is whether accelerationist tactics inevitably result in inequalities that create control structures of the kind originally objected to. In other words, this may simply be politics as usual and nothing radical at all.

So there’s an intersection between these considerations (accelerationist vs. … decelerationism? Capital accumulation vs. capital redistribution?) and the question of decentralization of decision-making process (is that the managerialism vs. multistakeholderism divide?) whose logic is unclear to me. I want to know which affinities are necessary and which are merely contingent.

Bay Area Rationalists

There is an interesting thing happening. Let me just try to lay down some facts.

There are a number of organizations in the Bay Area right now up to related things.

  • Machine Intelligence Research Institute (MIRI). Researches the implications of machine intelligence for the world, especially the possibility of super-human general intelligences. Recently changed their name from the Singularity Institute due to the meaninglessness of the term Singularity. I interviewed their Executive Director (CEO?), Luke Muehlhauser, a while back. (I followed up on some of the reasoning there with him here.)
  • Center for Applied Rationality (CFAR). Runs workshops training people in rationality, applying cognitive science to life choices. Trying to transition from appearing to pitch a “world-view” to teaching a “martial art” (I’ve sat in on a couple of their meetings). They aim to grow out a large network of people practicing these skills, because they think it will make the world a better place.
  • Leverage Research. A think-tank with an elaborate plan to save the world. Their research puts a lot of emphasis on how to design and market ideologies. I’ve been told that they recently moved to the Bay Area to be closer to CFAR.

Some things seem to connect these groups. First, socially, they all seem to know each other (I just went to a party where a lot of members of each group were represented.) Second, the organizations seem to get the majority of their funding from roughly the same people–Peter Thiel, Luke Nosek, and Jaan Tallinn, all successful tech entrepreneurs turned investors with interest in stuff like transhumanism, the Singularity, and advancing rationality in society. They seem to be employing a considerable number of people to perform research on topics normally ignored in academia and spread an ideology and/or set of epistemic practices. Third, there seems to be a general social affiliation with LessWrong.com; I gather a lot of the members of this community originally networked on that site.

There’s a lot that’s interesting about what’s going on here. A network of startups, research institutions, and training/networking organizations is forming around a cluster of ideas: the psychological and technical advancement of humanity, being smarter, making machines smarter, being rational or making machines to be rational for us. It is as far as I can tell largely off the radar of “mainstream” academic thinking. As a network, it seems concerned with growing, gathering into itself effective and connected people. But it’s not drawing from many established bases of effective and connected people (the academic establishment, the government establishment, the finance establishment, “old boys networks” per se, etc.) but rather is growing its own base of enthusiasts.

I’ve had a lot of conversations with people in this community now. Some, but not all, would compare what they are doing to the starting of a religion. I think that’s pretty accurate based on what I’ve seen so far. Where I’m from, we’ve always talked about Singularitarianism as “eschatology for nerds”. But here we have all these ideas–the Singularity, “catastrophic risk”, the intellectual and ethical demands of “science”, the potential of immortality through transhumanist medicine, etc.–really motivating people to get together, form a community, advance certain practices and investigations, and proselytize.

I guess what I’m saying is: I don’t think it’s just a joke any more. There is actually a religion starting up around this. Granted, I’m in California now and as far as I can tell there are like sixty religions out here I’ve never heard of (I chalk it up to the lack of population density and suburban sprawl). But this one has some monetary and intellectual oomph behind it.

Personally, I find this whole gestalt both attractive and concerning. As you might imagine, diversity is not this group’s strong suit. And its intellectual milieu reflects its isolation from the academic mainstream in that it lacks the kind of checks and balances afforded by multidisciplinary politics. Rather, it appears to have more or less declared the superiority of its methodological and ideological assumptions to its satisfaction and convinced itself that it’s ahead of the game. Maybe that’s true, but in my own experience, that’s not how it really works. (I used to share most of the tenets of this rationalist ideology, but have deliberately exposed myself to a lot of other perspectives since then [I think that taking the Bayesian perspective seriously necessitates taking the search for new information very seriously]. Turns out I used to be wrong about a lot of things.)

So if I were to make a prediction, it would go like this. One of these things is going to happen:

  • This group is going to grow to become a powerful but insulated elite with an expanded network and increasingly esoteric practices. An orthodox cabal seizes power where they are able, and isolates itself into certain functional roles within society with a very high standard of living.
  • In order to remain consistent with its own extraordinarily high epistemic standards, this network starts to assimilate other perspectives and points of view in an inclusive way. In the process, it discovers humility, starts to adapt proactively and in a decentralized way, losing its coherence but perhaps becomes a general influence on the preexisting societal institutions rather than a new one.
  • Hybrid models. Priesthood/lay practitioners. Or denominational schism.

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.