Digifesto

Tag: eclipse of reason

Instrumentality run amok: Bostrom and Instrumentality

Narrowing our focus onto the crux of Bostrom’s argument, we can see how tightly it is bound to a much older philosophical notion of instrumental reason. This comes to the forefront in his discussion of the orthogonality thesis (p.107):

The orthogonality thesis
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

Bostrom goes on to clarify:

Note that the orthogonality thesis speaks not of rationality or reason, but of intelligence. By “intelligence” we here mean something like skill at prediction, planning, and means-ends reasoning in general. This sense of instrumental cognitive efficaciousness is most relevant when we are seeking to understand what the causal impact of a machine superintelligence might be.

Bostrom maintains that the generality of instrumental intelligence, which I would argue is evinced by the generality of computing, gives us a way to predict how intelligent systems will act. In particular, he says that an intelligent system (and specifically a superintelligence) might be predictable because of its design, because of its inheritance of goals from a less intelligent system, or because of convergent instrumental reasons. (p.108)

Let us return to the core logic of Bostrom’s argument. The existential threat posed by superintelligence is simply that the instrumental intelligence of an intelligent system will invest in itself and overwhelm any ability of ours (its well-intentioned creators) to control its behavior through design or inheritance. Bostrom thinks this is likely because instrumental intelligence (“skill at prediction, planning, and means-ends reasoning in general”) is a kind of resource or capacity that can be accumulated and put to a wide range of other uses. You can use instrumental intelligence to get more instrumental intelligence; why wouldn’t you? The doomsday prophecy of a fast-takeoff superintelligence achieving a decisive strategic advantage and becoming a universe-dominating singleton depends on this internal cycle: instrumental intelligence investing in itself and expanding exponentially, assuming low recalcitrance.

This analysis brings us to a critical point. The missing formula in Bostrom’s argument is, specifically, the recalcitrance function of instrumental intelligence. This is not the same as recalcitrance with respect to “general” intelligence or even “super” intelligence. Rather, what matters is how much a process dedicated to “prediction, planning, and means-ends reasoning in general” can autonomously improve its own capacities at those very things. The values of this recalcitrance function bound the speed of a superintelligence takeoff, and those bounds can in turn inform the optimal allocation of research funding toward anticipating future scenarios.
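A minimal numerical sketch may make the point concrete. The rate equation below loosely follows Bostrom’s informal treatment of takeoff kinetics (intelligence grows at a rate of optimization power divided by recalcitrance); the simulate helper and the specific recalcitrance functions are my own hypothetical illustrations, not anything from the book.

    # Sketch: how an assumed recalcitrance function bounds takeoff speed.
    # dI/dt = O(I) / R(I), integrated with a simple Euler step.
    # The particular functions below are illustrative assumptions only.

    def simulate(recalcitrance, optimization_power=lambda I: I,
                 I0=1.0, dt=0.01, steps=1000):
        """Integrate dI/dt = optimization_power(I) / recalcitrance(I)."""
        I = I0
        trajectory = [I]
        for _ in range(steps):
            I += dt * optimization_power(I) / recalcitrance(I)
            trajectory.append(I)
        return trajectory

    # Constant (low) recalcitrance: the system's capability feeds back into
    # its own optimization power, so growth is exponential -- a "fast takeoff".
    fast = simulate(recalcitrance=lambda I: 1.0)

    # Recalcitrance that rises with capability: growth is damped to roughly
    # linear -- a "slow takeoff".
    slow = simulate(recalcitrance=lambda I: I)

    print(f"fast takeoff after 10 time units: I = {fast[-1]:.1f}")
    print(f"slow takeoff after 10 time units: I = {slow[-1]:.1f}")

The only point of the sketch is that the shape of the recalcitrance function, which Bostrom leaves unspecified for instrumental intelligence itself, is what separates exponential self-amplification from pedestrian growth.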


In what I hope won’t distract from the logical analysis of Bostrom’s argument, I’d like to put it in a broader context.

Take a minute to think about the power of general-purpose computing and the impact it has had on the past hundred years of human history. Since the earliest digital computers were informed by notions of artificial intelligence (cf. Alan Turing), we can accurately say that the very machine I use to write this text, and the machine you use to read it, are the result of refined, formalized, and materialized instrumental reason. Every programming language is a level of abstraction over a machine that has no ends in itself, but which serves the ends of its programmer (when it’s working). There is a sense in which Bostrom’s argument is not about a near-future scenario but is rather just a description of how things already are.

Our very concepts of “technology” and “instrument” are so closely related that it can be hard to see any distinction at all (cf. Heidegger, “The Question Concerning Technology”). Bostrom’s equating of instrumentality with intelligence is a move that makes more sense as computing becomes a ubiquitous part of our experience of technology. However, if any instrumental mechanism can be seen as a form of intelligence, that lends credence to panpsychist views that identify cognition with life (cf. the Santiago theory).

Meanwhile, arguably the genius of the market is that it connects ends (through consumption or “demand”) with means (through manufacture and services, or “supply”) efficiently, bringing about the fruition of human desire. If you replace “instrumental intelligence” with “capital” or “money”, you get a familiar critique of capitalism as a system driven by capital accumulation at the expense of humanity. The analogy with capital accumulation is worthwhile here. Much as in Bostrom’s “takeoff” scenarios, we can see how capital (in the modern era, wealth) is reinvested in itself and grows at an exponential rate. Variable rates of return on investment lead to great disparities in wealth. We today have a “multipolar scenario” as far as the distribution of capital is concerned. At times people have advocated for an economic “singleton” through a planned economy.
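To see the force of the compounding analogy with toy numbers (entirely illustrative, not empirical data), consider how small differences in the rate of return, reinvested period after period, diverge into the disparities just described:

    # Toy illustration of the capital-accumulation analogy: two holdings that
    # reinvest their returns at slightly different (assumed) rates diverge
    # dramatically over fifty periods of compounding.
    principal = 100.0
    for rate in (0.03, 0.07):
        wealth = principal
        for _ in range(50):           # fifty periods of compounding
            wealth *= 1 + rate        # returns are reinvested, as in a "takeoff"
        print(f"rate {rate:.0%}: {wealth:,.0f}")
    # 3% compounds to roughly 438; 7% to roughly 2,946 -- a "multipolar" spread.

The mechanism is the same self-reinvestment loop as in the takeoff sketch above, with the rate of return playing the role of recalcitrance’s inverse.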

It is striking that the contemporary analytic philosopher and futurist Nick Bostrom contemplates the same malevolent force in his apocalyptic scenario as Max Horkheimer does in his 1947 treatise “Eclipse of Reason”: instrumentality run amok. Whereas Bostrom concerns himself primarily with what is literally a machine dominating the world, Horkheimer sees the mechanism of self-reinforcing instrumentality as pervasive throughout the economic and social system. For example, he sees engineers as loci of active instrumentalism. Bostrom never cites Horkheimer, let alone Heidegger. That there is a convergence of different philosophical sub-disciplines on the same problem suggests that there are convergent ultimate reasons which may triumph over convergent instrumental reasons in the end. The question of what these convergent ultimate reasons are, and what their relationship to instrumental reasons is, is a mystery.

Horkheimer and “The Revolt of Nature”

The third chapter of Horkheimer’s Eclipse of Reason (which by the way is apparently available here as a PDF) is titled “The Revolt of Nature”.

It opens with a reiteration of the Frankfurt School story: as reason gets formalized, society gets rationalized. “Rationalized” here is in the sense that goes back at least to Lukacs’s “Reification and the Consciousness of the Proletariat” in 1923. It refers to the process of being rendered predictable, and being treated as such. This formalized reason, a technique of prediction and predictability that is nevertheless unable to furnish an objective ethics, is the main subject of Horkheimer’s critique.

In “The Revolt of Nature”, Horkheimer claims that as more and more of society is rationalized, the more humanity needs to conform to the rationalizing system. This happens through the labor market. Predictable technology and working conditions such as the factory make workers more interchangeable in their jobs. Thus they are more “free” in a formal sense, but at the same time have less job security and so have to conform to economic forces that make them into means and not ends in themselves.

Recall that this is written in 1947, and Lukacs wrote in 1923. In recent years we’ve read a lot about the Sharing Economy and how it leads to less job security. This is an argument that is almost a century old.

As society, and humanity within it, conforms more and more to rational, pragmatic demands, the irrational element of man, which is to say nature, is not eliminated. Horkheimer is implicitly Freudian here. You don’t eradicate the natural impulses; you repress them. And what is repressed must revolt.

This view runs counter to some of the ideology of the American academic system that became more popular in the late 20th century. Many ideologues reject the idea of human nature altogether, arguing that all human behavior can be attributed to socialization. This view is favored especially by certain extreme progressives, who have a post-Christian ideal of eradicating sin through media criticism and scientific intervention. Steven Pinker’s The Blank Slate is an interesting elaboration and rebuttal of this view. Pinker is hated by a lot of academics because (a) he writes very popular books and (b) he makes a persuasive case against the total mutability of human nature, which is something of a sacred cow to a lot of social scientists for some reason.

I’d argue that Horkheimer would agree with Pinker that there is such a thing as human nature, since he explicitly argues that repressed human nature will revolt against dominating rationalizing technology. But because rationalization is so powerful, the revolt of nature becomes part of the overall system. It helps sustain it. Horkheimer mentions “engineered” race riots. Today we might point to the provocation of bestial, villainous hate speech and its relationship to the gossip press. Or we might point to ISIS and the justification it provides for the military-industrial complex.

I don’t want to imply that I endorse this framing 100%. It is just the continuation of Frankfurt School ideas to the present day. How they match up against reality is an empirical question. But it’s worth pointing out where many of these important tropes originated.

“Conflicting panaceas”; decapitation and dogmatism in cultural studies counterpublics

I’m still reading through Horkheimer’s Eclipse of Reason. It is dense writing and slow going. I’m in the middle of the second chapter, “Conflicting Panaceas”.

This chapter surveys and then critiques a variety of intellectual stances held by his contemporaries. Whereas in the first chapter Horkheimer takes aim at pragmatism, in this one he concerns himself with neo-Thomism and positivism.

Neo-Thomism? Yes, that’s right. Apparently in 1947 one of the major intellectual contenders was a school of thought based on adapting the metaphysics of Saint Thomas Aquinas to modern times. This school of thought was apparently notable enough that while Horkheimer is generally happy to call out the proponents of pragmatism and positivism by name and call them business-interest lapdogs, he chooses instead to address the neo-Thomists anonymously in a conciliatory footnote:

This important metaphysical school includes some of the most responsible historians and writers of our day. The critical remarks here bear exclusively on the trend by which independent philosophical thought is being superseded by dogmatism.

In a nutshell, Horkheimer’s criticism of neo-Thomism is that, since it tries and fails to repurpose old ontologies for the new world, it can’t fulfill its own ambitions as an intellectual system through rigor without losing the theological ambitions that motivate it: the identification of goodness, power, and eternal law. Since it can’t intellectually culminate, it becomes a “dogmatism” that can be coopted disingenuously by social forces.

This is, as I understand it, the essence of Horkheimer’s criticism of everything: for any intellectual trend or project, unless the philosophical project within it is allowed to continue to completion, it will have its brains slurped out and become zombified by an instrumentalist capitalism that threatens to devolve into devastating world war. Hence, just as neo-Thomism becomes a dogmatism because it would refute itself if it allowed its logic to proceed to completion, so too does positivism become a dogmatism when it identifies truth with disciplinarily enforced scientific methods. Since, as Horkheimer points out in 1947, these scientific methods are social processes, this dogmatic positivism is another zombie, prone to fads and politics that do not track truth.

I’ve been struggling over the past year or so with similar anxieties about what, from my vantage point, are the prevailing intellectual trends of 2014. Perversely, in my experience the new intellectual identities that emerged in the 20th century to expose scientific procedures as social processes (STS) and to establish rhetorics of resistance (cultural studies) have been similarly decapitated, recuperated, and rendered dogmatic. [see 1 2 3]

Are these the hauntings of straw men? Possibly. Perhaps the intellectual currents I’ve witnessed are informal expressions, not serious intellectual work. But I think there is a deeper undercurrent, which has turned up as I’ve worked on a paper resulting from this conversation about publics. It hinges on the interpretation of an influential article by Nancy Fraser in which she contests Habermas’s notion of the public sphere.

In my reading, Fraser more or less maintains the ideal of the public sphere as a place of legitimacy and reconciliation. For her it is notably inequitable, it is plural not singular, the boundaries of what is public and private are in constant negotiation, etc. But its function is roughly the same as it is for Habermas.

My growing suspicion is that this is not how Fraser is used by cultural studies today. This suspicion began when Fraser was introduced to me; upon reading her work I did not find the objection implicit in the reference to her. It continued as I worked with the comments of a reviewer on a paper. It was recently confirmed while reading Chris Wisniewski’s “Digital Deliberation?” in Critical Review, vol. 25, no. 2, 2013. He writes well:

The cultural-studies scholars and critical theorists interested in diversifying participation through the Internet have made a turn away from this deliberative ideal. In an essay first published in 1990, the critical theorist Nancy Fraser (1999, 521) rejects the idealized model of bourgeois public sphere as defined by Habermas on the grounds that it is exclusionary by design. Because the bourgeois public sphere brackets hierarchies of gender, race, ethnicity, class, etc., Fraser argues, it benefits the interests of dominant groups by default through its elision of socially significant inequalities. Lacking the ability to participate in the dominant discourse, disadvantaged groups establish alternative “subaltern counterpublics”.

Since the ideal speech situation does not acknowledge the socially significant inequalities that generate these counterpublics, Fraser argues for a different goal: a model of participatory democracy in which intercultural communications across socially stratified groups occur in forums that do not elide differences but instead allow diverse multiple publics the opportunity to determine the concerns or good of the public as a whole through “discursive contestations.” Fraser approaches these subgroups as identity publics and argues that culture and political debate are essentially power struggles among self-interested subgroups. Fraser’s ideas are similar to those prevalent in cultural studies (see Wisniewski 2007 and 2010), a relatively young discipline in which her work has been influential.

Fraser’s theoretical model is inconsistent with studies of democratic voting behavior, which indicate that people tend to vote sociotropically, according to a perceived collective interest, and not in favor of their own perceived self-interest (e.g., Kinder and Kiewiet 1981). The argument that so-called “mass” culture excludes the interests of dominated groups in favor of the interests of the elites loses some of its valence if culture is not a site through which self-interested groups vie for their objective interests, but is rather a forum in which democratic citizens debate what constitutes, and the best way to achieve, the collective good. Diversification of discourse ceases to be an end in itself.

I think Wisniewski hits the nail on the head here, a nail I’d like to drive in farther. If culture is conceived of as consisting of the contests of self-interested identity groups, as this version of cultural studies conceives it, then cultural studies will necessarily see itself as one of many self-interested identities. Cultural studies becomes, by its own logic, a counterpublic that exists primarily to advance its own interests.

But just like neo-Thomism, this positioning decapitates cultural studies by preventing it from intellectually confronting its own limitations. No identity can survive rigorous intellectual interrogation, because all identities are based on contingency, finitude, and trauma. Cultural studies adopts and repurposes historical rhetorics of liberation much as the neo-Thomists adopted and repurposed the historical metaphysics of Christianity. The obsolescence of these rhetorics, like the obsolescence of Thomistic metaphysics, is what makes them dangerous. A rhetoric that maintains its own subordination as a condition of its own identity can never truly liberate; it can only antagonize. Unable to intellectually realize its own purpose, it becomes purposeless and hence coopted and recuperated like other dogmatisms. In particular, it feeds into “the politicization of absolutely everything”, in the language of Ezra Klein’s spot-on analysis of GamerGate. Cultural studies is a powerful ideology because it turns culture into a field of perpetual rivalry with all the distracting drama of reality television. In so doing, it undermines deeper intellectual penetration into the structural conditions of society.

If cultural studies is the neo-Thomism of today, a dogmatic religious revival of the profound theology of the civil rights movement, then perhaps the theocratic invocation of ‘algorithms’ is the new scientism. I would have more to say about it if it weren’t so similar to the old scientism.