Digifesto

Tag: philosophy

naturalized ethics and natural law

One thing that’s become clear to me lately is that I now believe that ethics can be naturalized. I also believe that there is in fact a form of ‘natural law’. By this I mean that there are rights and values that are inherent to human nature. Real legal systems can either live up to natural law, or not.

This is not the only position that it’s possible to take on these topics.

One position that I do not hold is that ethics depends on the supernatural. I bring this up because religion is once again very politically salient in the United States. Abrahamic religions ground ethics and morality in a covenant between humans and a supernatural God. Divine power authorizes the ethical code. In some cases this is explicitly stated law; in others it is a set of principles. Beyond divine articulation, this position maintains that ethics are supernaturally enforced through reward and punishment. I don’t think this is how things work.

Another position I don’t hold is that ethics are opinion or cultural construction, full stop. Certainly there’s a wide diversity of opinions on ethics and cultural attitudes. Legal systems vary from place to place. This diversity is sometimes used as evidence that there aren’t truths about ethics or law to be had. But that is, taken alone, a silly argument. Lots of people and legal systems are simply wrong. Moreover, moral and ethical truths can take contingency and variety into account, and they probably should. It can be true that laws should be well-adapted to some otherwise arbitrary social expectations or material conditions. And so on.

There has historically been hemming and hawing about the fact/value dichotomy. If there’s no supernatural guarantor of ethics, is the natural world sufficient to produce values beyond our animal passions? This increasingly feels like an argument from a previous century. Adequate solutions to this problem have been offered by philosophers over time. They tend to involve some form of rational or reflective process, and aggregation over the needs and opinions of people in heterogeneous circumstances. Habermas comes to mind as one of the synthesizers of a new definition of naturalized law and ethics.

For some reason, I’ve encountered a great deal of resistance to this form of ethical or moral realism over the years. But looking back on it, I can’t recall a convincing argument for that resistance. I can recall many claims that the idea of ethical and moral truth is somehow politically dangerous, but that is not the same thing.

There is something teleological about most viable definitions of naturalized ethics and natural law. They are what would hypothetically be decided on by interlocutors in an idealized but not yet realized circumstance. A corollary to my position is that ethical and moral facts exist, but many have not yet been discovered. A scientific process is needed to find them. This process is necessarily a social scientific process, since ethical and moral truths are truths about social systems and how they work.

It would be very fortunate, I think, if some academic department, discipline, or research institution were to take up my position. At present, we seem to have a few different political positions available to us in the United States:

  • A conservative rejection of the university as insufficiently moral because of its abandonment of God
  • A postmodern rejection of ethical and moral truths that relativizes everything
  • A positivist rejection of normativity as the object of social science because of the fact/value dichotomy
  • Politicized disciplines that presume a political agenda and then perform research aligned with that agenda
  • Explicitly normative disciplines that are discursive and humanistic but not inclined towards rigorous analysis of the salient natural facts

None of these is conducive to a scientific study of what ethics and morals should be. There are exceptions, of course, and many brilliant people in many corners who make great contributions towards this goal. But they seem scattered at the margins of the various disciplines, rather than consolidated into a thriving body of intellect. At a moment where we see profound improvements (yes, improvements!) in our capacity for reasoning and scientific exploration, why hasn’t something like this emerged? It would be an improvement over the status quo.

Political theories and AI

Through a few new emerging projects and opportunities, I’ve had reason to circle back to the topic of Artificial Intelligence and ethics. I wanted to jot down a few notes as some recent reading and conversations have been clarifying some ideas here.

In my work with Jake Goldenfein on this topic (published 2021), we framed the ethical problem of AI in terms of its challenge to liberalism, which we characterize in terms of individual rights (namely, property and privacy rights), a theory of why the free public market makes the guarantees of these rights sufficient for many social goods, and a more recent progressive or egalitarian tendency. We then discuss how AI technologies challenge liberalism and require us to think about post-liberal configurations of society and computation.

A natural reaction to this paper, especially given the political climate in the United States, is “aren’t the alternatives to liberalism even worse?” It’s true that we do not, in that paper, outline an alternative to liberalism to which a world with AI might aspire.

John Mearsheimer’s The Great Delusion: Liberal Dreams and International Realities (2018) is a clearly written treatise on political theory. Mearsheimer rose to infamy in 2022 after the Russian invasion of Ukraine because of widely circulated videos of a 2015 lecture in which he argued that Russia’s 2014 invasion of Crimea was the fault of U.S. foreign policy. It is because of that infamy that I’ve decided to read The Great Delusion, which was a Financial Times Best Book of 2018. The Financial Times editorials have since turned on Mearsheimer. We’ll see what they say about him in another four years. However politically unpopular he may be, I found his points interesting and have decided to look at his more scholarly work. I have not been disappointed; I find that he articulates political philosophy clearly, and I will use his articulations. I won’t analyze his international relations theory here.

Putting Mearsheimer’s international relations theories entirely aside for now, I’ve been pleased to find The Great Delusion to be a thorough treatise on political theory, and it goes to lengths in Chapter 3 to describe liberalism as a political theory (which will be its target). Mearsheimer distinguishes between four different political ideologies, citing many of their key intellectual proponents.

  • Modus vivendi liberalism. (Locke, Smith, Hayek) A theory committed to individual negative rights, such as private property and privacy, against impositions by the state. The state should be minimal, a “night watchman”. This can involve skepticism about the ability of reason to achieve consensus about the nature of the good life; political toleration of differences is implied by the guarantee of negative rights.
  • Progressive liberalism. (Rawls) A theory committed to individual rights, including both negative rights and positive rights, which can be in tension. An example positive right is equal opportunity, which requires state intervention to guarantee. So the state must play a stronger role. Progressive liberalism involves more faith in reason to achieve consensus about the good life, as progressivism is a positive moral view imposed on others.
  • Utilitarianism. (Bentham, Mill) A theory committed to the greatest happiness for the greatest number. Not committed to individual rights, and therefore not a liberalism per se. Utilitarian analysis can argue for tradeoffs of rights to achieve greater happiness, and is collectivist, not individualist, in the sense that it is concerned with utility in aggregate.
  • Liberal idealism. (Hobson, Dewey) A theory committed to the realization of an ideal society as an organic unity of functioning subsystems. Not committed to individual rights primarily, so not a liberalism, though individual rights can be justified on ideal grounds. Influenced by Hegelian views about the unity of the state. Sometimes connected to a positive view of nationalism.

This is a highly useful breakdown of ideas, which we can bring back to discussions of AI ethics.

Jake Goldenfein and I wrote about ‘liberalism’ in a way that, I’m glad to say, is consistent with Mearsheimer. We too identify right- and left-wing strands of liberalism. I believe our argument about AI’s challenge to liberal assumptions still holds water.

Utilitarianism is the foundation of one of the most prominent versions of AI ethics today: Effective Altruism. Much has been written about Effective Altruism and its relationship to AI Safety research. I have expressed some thoughts. Suffice it to say here that there is a utilitarian argument that ‘ethics’ should be about prioritizing the prevention of existential risk to humanity, because existential catastrophe would prevent the high-utility outcome of humanity-as-joyous-galaxy-colonizers. AI is seen, for various reasons, to be a potential source of catastrophic risk, and so AI ethics is about preventing these outcomes. Not everybody agrees with this view.

For now, it’s worth mentioning that there is a connection between liberalism and utilitarianism through theories of economics. While some liberals are committed to individual rights for their own sake, or because of negative views about the possibility of rational agreement about more positive political claims, others have argued that negative rights and lack of government intervention lead to better collective outcomes. Neoclassical economics has produced theories and ‘proofs’ to this effect, which rely on mathematical utility theory, which is a successor to philosophical utilitarianism in some respects.

It is also the case that a great deal of AI technology and technical practice is oriented around the vaguely utilitarian goals of ‘utility maximization’, though this is more about the mathematical operationalization of instrumental reason and less about a social commitment to utility as a political goal. AI practice and neoclassical economics are quite aligned in this way. If I were to put the point precisely, I’d say that the reality of AI, by exposing bounded rationality and its role in society, shows that arguments that negative rights are sufficient for utility-maximizing outcomes are naive, and so are a disappointment for liberals.
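
To make the point concrete, here is a minimal sketch, in Python, of what ‘utility maximization’ amounts to operationally, next to a crudely bounded agent that can only evaluate part of its action space. The actions, outcome probabilities, and utilities are invented for illustration; no particular AI system is being described.

```python
# Illustrative sketch only: 'utility maximization' as it is operationalized in
# AI practice, plus a crude bounded-rationality variant. All names here
# (actions, outcome probabilities, utilities) are hypothetical.

import random

ACTIONS = ["a", "b", "c"]

# Hypothetical model: P(outcome | action) and a utility for each outcome.
OUTCOME_PROBS = {
    "a": {"good": 0.6, "bad": 0.4},
    "b": {"good": 0.5, "bad": 0.5},
    "c": {"good": 0.8, "bad": 0.2},
}
UTILITY = {"good": 1.0, "bad": -1.0}


def expected_utility(action):
    """Sum over outcomes of P(outcome | action) * U(outcome)."""
    return sum(p * UTILITY[o] for o, p in OUTCOME_PROBS[action].items())


def ideal_agent():
    """The textbook maximizer: evaluates every available action."""
    return max(ACTIONS, key=expected_utility)


def bounded_agent(budget=2, seed=0):
    """A boundedly rational agent: evaluates only a random subset of actions."""
    rng = random.Random(seed)
    considered = rng.sample(ACTIONS, k=min(budget, len(ACTIONS)))
    return max(considered, key=expected_utility)


if __name__ == "__main__":
    print("ideal choice:", ideal_agent())      # always "c" in this toy model
    print("bounded choice:", bounded_agent())  # may miss "c" entirely
```

The bounded agent can miss the best action entirely, which is a cartoon of the point above: once bounded rationality is taken seriously, the neat story in which individual maximizers reliably add up to good collective outcomes loses its force.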

I was pleased that Mearsheimer brought up what he calls ‘liberal idealism’ in his book, despite it being perhaps a digression from his broader points. I have wondered how to place my own work, which draws heavily on Helen Nissenbaum’s theory of Contextual Integrity (CI), which is heavily influenced by the work of Michael Walzer. CI is based on a view of a society composed of separable spheres, with distinct functions and internally meaningful social goods, which should not be directly exchanged or compared. Walzer has been called a communitarian. I suggest that CI might be best seen as a variation of liberal idealism, in that it orients ethics towards a view of society as an idealized organic unity.

If the present reality of AI is so disappointing, then we must try to imagine a better ideal, and work our way towards it. I’ve found myself reading more and more work, such as that of Felix Adler and Alain Badiou, that advocates for the need for an ideal model of society. What we are currently missing is a good computational model of such a society, one that could do for idealism what neoclassical economics did for liberalism: namely, create a blueprint for a policy and science of its realization. If we were to apply AI to the problem of ethics, it would be good to use it this way.

metaphysics and politics

In almost any contemporary discussion of politics, today’s experts will tell you that metaphysics is irrelevant.

This is because we are discouraged today from taking a truly totalizing perspective–meaning, a perspective that attempts to comprehend the totality of what’s going on.

Academic work on politics is specialized. It focuses on a specific phenomenon, or issue, or site. This is partly due to the limits of what it is possible to work on responsibly. It is also partly due to the limitations of agency. A grander view of politics isn’t useful for any particular agent; they need only the perspective that best serves them. Blind spots are necessary for agency.

But universalist metaphysics is important for politics precisely because if there is a telos to politics, it is peace, and peace is a condition of the totality.

And while a situated agent may have no need for metaphysics because they are content with the ontology that suits them, situated agents cannot alone make any guarantees of peace.

In order for an agent to act effectively in the interest of total societal conditions, they require an ontology which is not confined by their situation, which will encode those habits of thought necessary for maintaining their situation as such.

What motivates the study of metaphysics, then? One motivation is that it provides freedom from one’s situation.

This freedom is a political accomplishment, and it also has political effects.

Ethnography, philosophy, and data anonymization

The other day at BIDS I was working at my laptop when a rather wizardly looking man in a bicycle helmet asked me when The Hacker Within would be meeting. I recognized him from a chance conversation in an elevator after Anca Dragan’s ICBS talk the previous week. We had in that brief moment connected over the fact that none of the bearded men in the elevator had remembered to press the button for the ground floor. We had all been staring off into space before a young programmer with a thin mustache pointed out our error.

Engaging this amicable fellow, whom I will leave anonymous, I found the conversation turning naturally towards principles for life. I forget how we got onto the topic, but what I took away from the conversation was his advice: “Don’t turn your passion into your job. That’s like turning your lover into a wh***.”

Scholars in the School of Information are sometimes disparaging of the Data-Information-Knowledge-Wisdom hierarchy. Scholars, I’ve discovered, are frequently disparaging of ideas that are useful, intuitive, and pertinent to action. One cannot continue to play the Glass Bead Game if it has already been won any more than one can continue to be entertained by Tic Tac Toe once one has grasped its ineluctable logic.

We might wonder, as did Horkheimer, when the search and love of wisdom ceased to be the purpose of education. It may have come during the turn when philosophy was determined to be irrelevant, speculative or ungrounded. This perhaps coincided, in the United States, with McCarthyism. This is a question for the historians.

What is clear now is that philosophy per se is no longer considered relevant to scientific inquiry.

An ethnographer I know (who I will leave anonymous) told me the other day that the goal of Science and Technology Studies is to answer questions from philosophy of science with empirical observation. An admirable motivation for this is that philosophy of science should be grounded in the true practice of science, not in idle speculation about it. The ethnographic methods, through which observational social data is collected and then compellingly articulated, provide a kind of persuasiveness that for many far surpasses the persuasiveness of a priori logical argument, let alone authority.

And yet the authority of ethnographic writing depends always on the socially constructed role of the ethnographer, much like the authority of the physicist depends on their socially constructed role as a physicist. I’d even argue that the dependence of ethnographic authority on social construction is greater than that of other kinds of scientific authority, as ethnography is so quintessentially an embedded social practice. A physicist or chemist or biologist at least in principle has nature to push back on their claims; a renegade natural scientist can as a last resort claim their authority through provision of a bomb or a cure. The mathematician or software engineer can test and verify their work through procedure. The ethnographer does not have these opportunities. Their writing will never be enough to convey the entirety of their experience. It is always partial evidence, a gesture at the unwritten.

This is not an accidental part of the ethnographic method. The practice of data anonymization, necessitated by the IRB and ethics, puts limitations on what can be said. These limitations are essential for building and maintaining the relationships of trust on which ethnographic data collection depends. The experiences of the ethnographer must always go far beyond what has been regulated as valid procedure. The information they have collected illicitly will, if they are skilled and wise, inform their judgment of what to write and what to leave out. The ethnographic text contains many layers of subtext that will be unknown to most readers. This is by design.

The philosophical text, in contrast, contains even less observational data. The text is abstracted from context. Only the logic is explicit. A naive reader will assume, then, that philosophy is a practice of logic chopping.

This is incorrect. My friend the ethnographer was correct: that ethnography is a way of answering philosophical questions empirically, through experience. However, what he missed is that philosophy is also a way of answering philosophical questions through experience. Just as in ethnographic writing, experience necessarily shapes the philosophical text. What is included, what is left out, what constellation in the cosmos of ideas is traced by the logic of the argument–these will be informed by experience, even if that experience is absent from the text itself.

One wonders: thus unhinged from empirical argument, how does a philosophical text become authoritative?

I’d offer the answer: it doesn’t. A philosophical text does not claim authority. That has been its method since Socrates.

Arendt on social science

Despite my first (perhaps kneejerk) reaction to Arendt’s The Human Condition, as I read further I am finding it one of the most profoundly insightful books I’ve ever read.

It is difficult to summarize: not because it is written badly, but because it is written well. I feel every paragraph has real substance to it.

Here’s an example: Arendt’s take on the modern social sciences:

To gauge the extent of society’s victory in the modern age, its early substitution of behavior for action and its eventual substitution of bureaucracy, the rule of nobody, for personal rulership, it may be well to recall that its initial science of economics, which substitutes patterns of behavior only in this rather limited field of human activity, was finally followed by the all-comprehensive pretension of the social sciences which, as “behavioral sciences,” aim to reduce man as a whole, in all his activities, to the level of a conditioned and behaving animal. If economics is the science of society in its early stages, when it could impose its rules of behavior only on sections of the population and on parts of their activities, the rise of the “behavioral sciences” indicates clearly the final stage of this development, when mass society has devoured all strata of the nation and “social behavior” has become the standard for all regions of life.

To understand this paragraph, one has to know what Arendt means by society. She introduces the idea of society in contrast to the Ancient Greek polis, which is the sphere of life in Antiquity where the head of a household could meet with other heads of households to discuss public matters. Importantly for Arendt, all concerns relating to the basic maintenance and furthering of life–food, shelter, reproduction, etc.–were part of the private domain, not the polis. Participation in public affairs was for those who were otherwise self-sufficient. In their freedom, they would compete to outdo each other in acts and words that would resonate beyond their lifetime, deeds, through which they could aspire to immortality.

Society, in contrast, is what happens when the mass of people begin to organize themselves as if they were part of one household. The conditions of maintaining life are public. In modern society, people are defined by their job; even being the ruler is just another job. Deviations from one’s role in society in an attempt to make a lasting change–deeds–are considered disruptive, and so are rejected by the norms of society.

From here, we get Arendt’s critique of the social sciences, which is essentially this: it is only possible to have a social science that finds regularities in people’s behavior when their behavior has been regularized by society. So the social sciences are not discovering a truth about people en masse that was not known before. The social sciences aren’t discovering things about people. They are rather reflecting the society as it is. The more that the masses are effectively ‘socialized’, the more pervasive a generalizing social science can be, because only under those conditions are there regularities there to be captured as knowledge and taught.

formalizing the cultural observer

I’m taking a brief break from Horkheimer because he is so depressing and because I believe the second half of Eclipse of Reason may include new ideas that will take energy to internalize.

In the meantime, I’ve rediscovered Soren Brier’s Cybersemiotics: Why Information Is Not Enough! (2008), which has remained faithfully on my desk for months.

Brier is concerned with the possibility of meaning generally, and attempts to synthesize the positions of Peirce (recall: philosophically disliked by Horkheimer as a pragmatist), Wittgenstein (who first was an advocate of the formalization of reason and language in his Tractatus, then turned dramatically against it in his Philosophical Investigations), second-order cyberneticists like Varela and Maturana, and the social theorist Niklas Luhmann.

Brier does not make any concessions to simplicity. Rather, his approach is to begin with the simplest theories of communication (Shannon) and show where each fails to account for a more complex form of interaction between more completely defined organisms. In this way, he reveals how each simpler form of communication is the core around which a more elaborate form of meaning-making is formed. He finally arrives at a picture of meaning-making that encompasses all of reality, including that which can be scientifically understood, but one that is necessarily incomplete and an open system. Meaning is all-pervading but never all-encompassing.
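
For concreteness, here is a minimal sketch of the Shannon baseline Brier starts from; it is my own illustration, not Brier’s, and the channel and flip probability are invented. The formalism quantifies the reduction of uncertainty and nothing else; the observers, self-reference, and meaning that Brier layers on top are simply absent from it.

```python
# Minimal sketch of the Shannon baseline: information as reduction of
# uncertainty over a noisy channel, with no notion of meaning. The binary
# symmetric channel and its flip probability are an invented example.

from math import log2


def entropy(probs):
    """Shannon entropy H(p) in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)


def binary_symmetric_mutual_info(flip_prob, p_one=0.5):
    """I(X;Y) = H(Y) - H(Y|X) for a binary symmetric channel."""
    p_y1 = p_one * (1 - flip_prob) + (1 - p_one) * flip_prob
    h_y = entropy([p_y1, 1 - p_y1])
    h_y_given_x = entropy([flip_prob, 1 - flip_prob])  # same for either input
    return h_y - h_y_given_x


if __name__ == "__main__":
    print(binary_symmetric_mutual_info(0.1))  # about 0.531 bits per symbol
```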

One element that makes meaning more complex than simple Shannon-esque communication is the role of the observer, who is maintained semiotically through an accomplishment of self-reference through time. This observer is a product of her own contingency. The language she uses is the result of nature, AND history, AND her own lived life. There is a specificity to her words and meanings that radiates outward as she communicates, meanings that interact in cybernetic exchange with the specific meanings of other speakers/observers. Language evolves in an ecology of meaning that can only poorly be reflected back upon the speaker.

What then can be said of the cultural observer, who carefully gathers meanings, distills them, and expresses new ones conclusively? She is a cybernetic captain, steering the world in one way or another, but only the world she perceives and conceives. Perhaps this is Haraway’s cyborg, existing in time and space through a self-referential loop, reinforced by stories told again and again: “I am this, I am this, I am this.” It is by clinging to this identity that the cyborg achieves the partiality glorified by Haraway. It is also this identity that positions her as an antagonist as she must daily fight the forces of entropy that would dissolve her personality.

Built on cybernetic foundations, does anything in principle prevent the formalization and implementation of Brier’s semiotic logic? What would a cultural observer be like that stands betwixt all cultures, looming like a spider on the webs of communication that wrap the earth at inconceivable scale? Without the constraints of partiality that bind a single human observer belonging to one culture, what could such a robot scientist see? What meaning would they make for themselves or intend?

This is not simply an issue of the interpretability of the algorithms used by such a machine. More deeply, it is the problem that these machines do not speak for themselves. They have no self-reference or identity, and so do not participate in meaning-making except instrumentally as infrastructure. This cultural observer that is in the position to observe culture in the making without the limits of human partiality for now only serves to amplify signal or dampen noise. The design is incomplete.

Horkheimer, pragmatism, and cognitive ecology

In Eclipse of Reason, Horkheimer rips into the American pragmatists Peirce, James, and Dewey like nobody I’ve ever read. Normally seen as reasonable and benign, Horkheimer paints these figures as ignorant and undermining of the whole social order.

The reason is that he believes they reduce epistemology to a kind of instrumentalism. But that’s selling their position a bit short. Dewey’s moral epistemology is pragmatist in that it is driven by particular, situated interests and concerns, but these are ingredients of moral inquiry and not conclusions in themselves.

So to the extent that Horkheimer is looking to dialectical reason as the grounds for uncovering objective truths, Dewey’s emphasis on establishing institutions that allow for meaningful moral inquiry seems consistent with Horkheimer’s view. The difference is in whether the dialectics are transcendental (as for Kant) or immanent (as for Hegel?).

The tension around objectivity in epistemology that comes up in the present academic environment is that all claims to objectivity are necessarily situated, and this situatedness is raised as a challenge to their objective status. If the claims or their justification depend on conditions that exclude some subjects (as they no doubt do; whether dialectical reason is transcendental or immanent, it requires opportunities for reflection that are rare and privileged), can these conclusions be said to be true for all subjects?

The Friendly AI research program more or less assumes that yes, this is the case. Yudkowsky’s notion of Coherent Extrapolated Volition, the position arrived at by simulated, idealized reasoners, is a 21st-century remake of Peirce’s limiting consensus of the rational. And yet the cry from standpoint theorists and certain anthropologically inspired disciplines is a recognition of the validity of partial perspectives. Haraway, for example, calls for an alliance of partial perspectives. Critical and adversarial design folks appear to have picked up this baton. Their vision is of a future of constantly vying (“agonistic”) partiality, with no perspective presuming to be settled, objective or complete.

If we make cognitivist assumptions about the computationality of all epistemic agents, then we are forced to acknowledge the finiteness of all actually existing reasoning. Finite capacity and situatedness become two sides of the same coin. Partiality, then, becomes a function both of one’s place in the network (eccentricity vs. centrality) and of the capacity to integrate information from the periphery. Those locations in the network most able to valuably integrate information, whether they be Google’s data centers or the conversational hubs of research universities, are more impartial, more objective. But they can never be the complete system. Because of their finite capacity, their representations can at best be lossy compressions of the whole.

A Hegelian might dream of an objective truth obtainable by a single subject through transcendental dialectic. Perhaps this is unattainable. But if there’s any hope at all in this direction, it seems to me it must come from one of two possibilities:

  • The fortuitously fractal structure of the sociotechnical world such that an adequate representation of it can be maintained in its epistemic hubs through quining, or
  • A generative grammar or modeling language of cognitive ecology such that we can get insights into the larger interactive system from toy models, and apply these simplified models pragmatically in specific cases. For this to work and not suffer the same failures as theoretical economics, these models need to have empirical content. Something like Wolpert, Lee, and Bono’s Predictive Game Theory (for which I just discovered they’ve released a Python package…cool!) may be critical here; a toy sketch in this spirit follows this list.
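
As a gesture at what such toy models might look like, here is a deliberately simple sketch of my own; it is not Predictive Game Theory and does not use that package. Hypothetical agents sit on a small network, each holding a local signal and able to integrate only a bounded number of neighbors’ signals, so that partiality falls out of both network position and integration capacity.

```python
# Toy model, invented for illustration: agents on a small network with finite
# capacity for integrating neighbors' observations. "Partiality" is crudely
# scored as the error of an agent's estimate against the global average.

from statistics import mean

# Hypothetical network: node -> neighbors
GRAPH = {
    "hub": ["a", "b", "c", "d"],
    "a": ["hub", "b"],
    "b": ["hub", "a"],
    "c": ["hub", "d"],
    "d": ["hub", "c"],
}

# Each agent's local, situated observation of some quantity.
SIGNALS = {"hub": 5.0, "a": 1.0, "b": 2.0, "c": 8.0, "d": 9.0}


def estimate(node, capacity):
    """Average of the agent's own signal plus at most `capacity` neighbors' signals."""
    neighbors = GRAPH[node][:capacity]  # finite integration capacity
    return mean([SIGNALS[node]] + [SIGNALS[n] for n in neighbors])


def partiality(node, capacity):
    """Absolute error against the global mean: a crude index of partiality."""
    return abs(estimate(node, capacity) - mean(SIGNALS.values()))


if __name__ == "__main__":
    for cap in (2, 4):
        print("capacity", cap)
        for node in GRAPH:
            print(" ", node, round(partiality(node, cap), 3))
```

In this toy, a hub with small capacity is no less partial than the periphery; only when its capacity matches its centrality does its estimate approach the global mean, which is one way of picturing how finite capacity and situatedness are two sides of the same coin.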

Reflecting on “Technoscience and Expressionism” by @FractalOntology

I’ve come across Joseph Weissman’s (@FractalOntology) “Technoscience and Expressionism” and am grateful for it, as it’s filled me in on a philosophical position that I missed the first time around: accelerationism. I’m not a Deleuzian and prefer my analytic texts to plod, so I can’t say I understood all of the essay. On the other hand, I gather the angle of this kind of philosophizing is intentionally psychotherapeutic and hence serves an artistic/literary function rather than one that explicitly guides praxis.

I am curious about the essay because I would like to see a thorough analysis of the political possibilities for the 21st century that gets past 20th century tropes. The passions of journalistic and intellectual debate have an atavistic tendency due to a lack of imagination that I would like to avoid in my own life and work.

Accelerationism looks new. It was pronounced in a manifesto, which is a good start.

Here is a quote from it:

Democracy cannot be defined simply by its means — not via voting, discussion, or general assemblies. Real democracy must be defined by its goal — collective self-mastery. This is a project which must align politics with the legacy of the Enlightenment, to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves. We need to posit a collectively controlled legitimate vertical authority in addition to distributed horizontal forms of sociality, to avoid becoming the slaves of either a tyrannical totalitarian centralism or a capricious emergent order beyond our control. The command of The Plan must be married to the improvised order of The Network.

Hell yeah, the Enlightenment! Sign me up!

The manifesto calls for an end to the left’s emphasis on local action, transparency, and direct democracy. Rather, it calls for a muscular hegemonic left that fully employs and deploys “technoscience”.

It is good to be able to name this political tendency and distinguish it from other left tendencies. It is also good to distinguish it from “right accelerationism”, which Weissman identifies with billionaires who want to create exurb communities.

A left-accelerationist impulse is today playing out dramatically against a right-accelerationist one. And the right-accelerationists are about as dangerous as you may imagine. With silicon valley VCs, and libertarian technologists more generally reading Nick Land on geopolitical fragmentation, the reception or at least receptivity to hard-right accelerants seems problematically open (and the recent $2M campaign proposing the segmentation of California into six microstates seems to provide some evidence for this.) Billionaires consuming hard-right accelerationist materials arguing for hyper-secessionism undoubtedly amounts to a critically dangerous situation. I suspect that the right-accelerationist materials, perspectives, affect, energy expresses a similar shadow, if it is not partly what is catalyzing the resurgence of micro-fascisms elsewhere (and macro ones as well — perhaps most significant to my mind here is the overlap of right-acceleration with white nationalism, and more generally what is deplorably and disingenuously called “race realism” — and is of course simply racism; consider Marine le Pen’s fascist front, which recently won 25% of the seats in the French parliament, UKIP’s resurgence in Great Britain; while we may not hear accelerationist allegiances and watchwords explicitly, the political implications and continuity is at the very least somewhat unsettling…)

There is an unfortunate conflation of several different points of view here. It is too easy to associate racism, wealth, and libertarianism, as these are the nightmares of the left’s political imagination. If ideological writing is therapeutic, a way of articulating one’s dreams, then this is entirely appropriate, with a caveat: every nightmare is a creation of one’s own psychology more so than a reflection of the real world.

The same elisions are made by Sam Frank in his recent article thematizing Silicon Valley libertarianism, friendly artificial intelligence research, and contemporary rationalism as a self-help technique. There are interesting organizational ties between these institutions that are genuinely worth investigating, but it would be lazy to collapse vast swathes of the intellectual spectrum into binaries.

In March 2013 I wrote about the Bay Area Rationalists:

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.

I would like to say “I called it”–Sam Frank has recently written just such a sensationalist, exploitative piece in Harper’s Magazine. It is thoroughly enjoyable and I wouldn’t say it’s inaccurate. But I don’t think this is the best way to get to know these people. A better one is to attend a CFAR workshop. It used to be that you could avoid the fee with a promise to volunteer, and there was a money-back guarantee which extended to one’s promise to volunteer. If that’s still the case, then one can essentially attend for free.

Another way to engage this community intellectually, which I would encourage the left accelerationists to do because it’s interesting, is to start participating on LessWrong. For some reason this community is not subject to ideological raids like so many other community platforms. I think it could stand for an influx of Deleuze.

Ultimately the left/right divide comes down to a question of distribution of resources and/or surplus. Left accelerationist tactics appear from here to be a more viable way of seizing resources than direct democracy. However, the question is whether accelerationist tactics inevitably result in inequalities that create control structures of the kind originally objected to. In other words, this may simply be politics as usual and nothing radical at all.

So there’s an intersection between these considerations (accelerationist vs. … decelerationism? Capital accumulation vs. capital redistribution?) and the question of decentralization of decision-making process (is that the managerialism vs. multistakeholderism divide?) whose logic is unclear to me. I want to know which affinities are necessary and which are merely contingent.