System 2 hegemony and its discontents

Recent conversations have brought me back to the third rail of different modalities of knowledge and their implications for academic disciplines. God help me. The chain leading up to this: a reminder of how frustrating it was trying to work with social scientists who methodologically reject the explanatory power of statistics; an intellectual encounter with a 20th century “complex systems” theorist who also didn’t seem to understand statistics; and the slow realization, bubbling up for me over the years, that I probably need to write an article or book about the phenomenology of probability, because I can’t find anything satisfying written about it.

The hypothesis I am now entertaining is that probabilistic or statistical reasoning is the intellectual crux, disciplinarily. The fields we now call “STEM” are all happy to embrace statistics as their main mode of empirical verification. This includes the use of mathematical proof for “exact” or a priori verification of methods. Sometimes the use of statistics is delayed or implicit; there is qualitative research that is totally consistent with statistical methods. But the key to this whole approach is that the fields, in combination, are striving for consistency.

But not everybody is on board with statistics! Why is that?

One reason may be that statistics is difficult to learn and execute. Doing probabilistic reasoning correctly is at times counter-intuitive. Quite literally, it can make your head hurt to think about it.

There is a lot of very famous empirical cognitive psychology that has explored this topic in depth. The heuristics and biases research program of Kahneman and Tversky was critical for showing that human behavior rarely accords with decision-theoretic models of mathematical, probabilistic rationality. An intuitive, “fast”, prereflective form of thinking (“System 1”) is capable of making snap judgments but is prone to biases such as the availability heuristic and the representativeness heuristic.
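The classic base-rate problems make this counter-intuitiveness concrete. Here is a minimal worked example in Python; the screening-test numbers are the standard textbook illustration, my assumptions rather than figures from the studies above:

```python
# Base-rate neglect: a rare condition, a fairly accurate test.
# All numbers are illustrative assumptions.
prevalence = 0.01        # P(condition) in the population
sensitivity = 0.90       # P(positive | condition)
false_positive = 0.09    # P(positive | no condition)

# Bayes' rule: P(condition | positive test)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(condition | positive) = {posterior:.3f}")  # ~0.092
```

The snap System 1 answer is something near 90%; the correct posterior is under 10%, because false positives among the healthy majority swamp true positives among the rare cases.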

A couple of general comments can be made about System 1. (These are taken from Tetlock’s review of this material in Superforecasting.) First, a hallmark of System 1 is that it takes whatever evidence it is working with as given; it never second-guesses it or questions its validity. Second, System 1 is fantastic at providing verbal rationalizations and justifications of anything it encounters, even when these can be shown to be disconnected from reality. Many colorful studies of split-brain cases, but also many other lab experiments, show the willingness people have to make up stories to explain anything, and their unwillingness to say, “this could be due to one of a hundred different reasons, or a mix of them, and so I don’t know.”

The cognitive psychologists also describe a System 2 cognitive process that is more deliberate and reflective. Presumably, this is the system that is sometimes capable of statistical or otherwise logical reasoning. And a big part of statistical reasoning is questioning the source of your evidence. A robust application of System 2 reasoning is capable of overcoming System 1’s biases. At the level of institutional knowledge creation, the statistical sciences consist mainly of the formalized, shared results of System 2 reasoning.

Tetlock’s work, from Expert Political Judgment on, is remarkable for showing that deference to one or the other cognitive system is to some extent a robust personality trait. Famously, those of the “hedgehog” cognitive style, who apply System 1 and a simplistic theory of the world to interpret everything they experience, are especially bad at predicting the outcomes of political events (which are certainly the outputs of ‘complex systems’), whereas those of the “fox” cognitive style, who are more cautious about considering evidence and coming to judgments, outperform them. It seems that Tetlock’s analysis weighs in favor of System 2 as a way of navigating complex systems.
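The scoring rule behind these comparisons is worth seeing, since it is itself a piece of System 2 machinery: Tetlock grades forecasters with the Brier score, the mean squared error between probability forecasts and what actually happened. A minimal sketch; the function and the toy forecasts are mine, and I use the common binary simplification of the score (Tetlock’s own convention sums over both outcome categories, doubling the scale):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.
    0.0 is perfect; an uninformative flat 0.5 forecast scores 0.25;
    confident-and-wrong forecasting approaches 1.0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three events, of which only the first occurs (illustrative data).
# A hedgehog-style forecaster is confident about everything:
print(brier_score([0.95, 0.95, 0.95], [1, 0, 0]))  # ~0.603
# A fox-style forecaster hedges, and scores far better:
print(brier_score([0.60, 0.30, 0.40], [1, 0, 0]))  # ~0.137
```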

I would argue that there are academic disciplines, especially those grounded in Heideggerian phenomenology, that see the “dominance” of institutions (such as academic disciplines) that are based around accumulations of System 2 knowledge as a problem or threat.

This reaction has several different guises:

  • A simple rejection of cognitive psychology, which established the System 1/System 2 distinction, as “behaviorism”. (This obscures the way cognitive psychology was a major break away from behaviorism in the 1950s.)
  • A call for more “authentic experience”, couched in language suggesting ownership or the true subject of one’s experience, contrasting this with the more alienated forms of knowing that rely on scientific consensus.
  • An appeal to originality: System 2 tends to converge; my System 1 methods can come up with an exciting new idea!
  • The interpretivist methodological mandate for anthropological sensitivity to the “emic”, or directly “lived”, experience of research subjects. This mandate sometimes blurs several individually valid motivations, such as: when emic experience is the subject matter in its own right, but (crucially) with the caveat that the results are not generalizable; when emic sensitivity is identified, via the researcher’s reflexivity, as a condition for research access; or when the purpose of the work is to surface or represent otherwise underrepresented views.

There are ways to qualify or limit these kinds of methodologies or commitments that make them entirely above reproach. However, under these limits, their conclusions are always fragile. According to the hegemonic logic of System 2 institutions, a consensus of those thoroughly considering the statistical evidence can always supersede the “lived experience” of some group or individual. This is, at the methodological level, simply the idea that while we may make theory-laden observations, when those theories are disproved, those observations are invalidated as having been influenced by erroneous theory. Indeed, mainstream scientific institutions take this kind of procedural objectivity as their duty. There is no such thing as science unless a lot of people are often being proven wrong.

This provokes a great deal of grievance. “Who made scientists, an unrepresentative class of people and machines disconnected from authentic experience, the arbiter of the real? Who are they to tell me I am wrong, or my experiences invalid?” And this is where we start to find trouble.

Perhaps most troubling is how this plays out at the level of psychodynamic politics. To have one’s lived experiences rejected, especially lived experiences of trauma, and especially when those experiences are rejected wrongly, is deeply disturbing. One of the mightier political tendencies of recent years has been the idea that whole classes of people are systematically subject to this treatment. This is one reason, among others, for influential calls to recalibrate the weight given to the experiences of otherwise marginalized people. This is what Furedi calls the therapeutic ethos of the Left. It is slightly different from, though often conflated with, the idea that recalibration is necessary to admit relevant data that was otherwise being excluded from consideration. This latter consideration comes up in a more managerialist discussion of creating technology that satisfies diverse stakeholders (…customers) through “participatory” design methods. The ambiguity of the term “bias” (does it mean a statistical error, or any tendency of an inferential system at all?) is sometimes leveraged to accomplish this conflation.
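The statistical sense of the term is precise enough to state. For an estimator $\hat\theta$ of a quantity $\theta$, the bias is the systematic component of its error:

$$\operatorname{bias}(\hat\theta) = \mathbb{E}[\hat\theta] - \theta$$

On this definition a procedure is biased only relative to a true quantity it fails to track on average; the broader sense, covering any inductive tendency whatsoever, applies even to procedures that meet this criterion, which is what makes the equivocation rhetorically useful.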

It is in practice very difficult to disentangle the different psychological motivations here. This is partly because they are deeply personal and mixed even at the level of the individual. (This is why I have framed the issue in terms of the cognitive science literature.) It is also partly because these issues are highly political. Being proven right, or wrong, has material consequences, sometimes. I’d argue: perhaps not as often as it should. But sometimes. And so there’s always a political interest, especially among those disinclined towards System 2 thinking, in maintaining a right to be wrong.

So it is hypothesized (perhaps going back to Lyotard) that at an institutional level there’s a persistent heterodox movement that rejects the ideal of communal intellectual integrity. Rather, it maintains that the field of authoritative knowledge must contain contradictions and disturbances of statistical scientific consensus. In Lyotard’s formulation, this heterodoxy seeks “legitimation by paralogy”, which suggests that its telos is at best a kind of creative intellectual emancipation from restrictive logics, generative of new ideas, but perhaps at worst a heterodoxy for its own sake.

This tendency has an uneasy relationship with the sociopolitical motive of a more integrated and representative society, which is often associated with the goal of social justice. If I understand these arguments correctly, the idea is that, in practice, legitimized paralogy is a way of giving the underrepresented a platform. This has the benefit of visibly increasing representation. Here, paralogy is legitimized as a means of affirmative action, but not as a means of objectively improving system performance.

This is a source of persistent difficulty and unease, as the paralogical tendency is never capable of truly emancipating itself, but rather, in its recuperated form, is always-already embedded in a hierarchy that it must deny to its initiates. Authenticity is subsumed, via agonism, to a procedural objectivity that proves it wrong.

Instrumentality run amok: Bostrom and Instrumentality

Narrowing our focus onto the crux of Bostrom’s argument, we can see how tightly it is bound to a much older philosophical notion of instrumental reason. This comes to the forefront in his discussion of the orthogonality thesis (p.107):

The orthogonality thesis
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

Bostrom goes on to clarify:

Note that the orthogonality thesis speaks not of rationality or reason, but of intelligence. By “intelligence” we here mean something like skill at prediction, planning, and means-ends reasoning in general. This sense of instrumental cognitive efficaciousness is most relevant when we are seeking to understand what the causal impact of a machine superintelligence might be.

Bostrom maintains that the generality of instrumental intelligence, which I would argue is evinced by the generality of computing, gives us a way to predict how intelligent systems will act. Specifically, he says that an intelligent system (and specifically a superintelligence) might be predictable because of its design, because of its inheritance of goals from a less intelligent system, or because of convergent instrumental reasons. (p.108)

Return to the core logic of Bostrom’s argument. The existential threat posed by superintelligence is simply that the instrumental intelligence of an intelligent system will invest in itself and overwhelm any ability by us (its well-intentioned creators) to control its behavior through design or inheritance. Bostrom thinks this is likely because instrumental intelligence (“skill at prediction, planning, and means-ends reasoning in general”) is a kind of resource or capacity that can be accumulated and put to other uses more widely. You can use instrumental intelligence to get more instrumental intelligence; why wouldn’t you? The doomsday prophecy of a fast takeoff superintelligence achieving a decisive strategic advantage and becoming a universe-dominating singleton depends on this internal cycle: instrumental intelligence investing in itself and expanding exponentially, assuming low recalcitrance.

This analysis brings us to a significant focal point. The critical missing formula in Bostrom’s argument is (specifically) the recalcitrance function of instrumental intelligence. This is not the same as recalcitrance with respect to “general” intelligence or even “super” intelligence. Rather, what’s critical is how much a process dedicated to “prediction, planning, and means-ends reasoning in general” can improve its own capacities at those things autonomously. The values of this recalcitrance function will bound the speed of superintelligence takeoff. These bounds can then inform the optimal allocation of research funding towards anticipation of future scenarios.
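Bostrom’s own schematic equation (in his chapter on takeoff kinetics) is that the rate of change of intelligence equals optimization power divided by recalcitrance. The following is a minimal simulation sketch of that schema in Python; the specific functional forms for optimization power and recalcitrance are my assumptions for illustration, not Bostrom’s:

```python
# Bostrom's takeoff schema: dI/dt = O(I) / R(I), where O is optimization
# power applied to self-improvement and R is recalcitrance.

def simulate(recalcitrance, steps=50, dt=0.1):
    intelligence = 1.0
    for _ in range(steps):
        # Assumption: the system devotes capacity proportional to its
        # current intelligence to improving itself.
        optimization_power = intelligence
        intelligence += dt * optimization_power / recalcitrance(intelligence)
    return intelligence

# Constant recalcitrance: self-investment compounds into exponential takeoff.
print(simulate(lambda i: 1.0))  # ~117 (e^5 in the continuous limit)

# Recalcitrance that rises with capability: growth flattens to linear.
print(simulate(lambda i: i))    # 6.0
```

Everything hinges on the shape of the recalcitrance function: the very same self-investment loop yields either a fast takeoff or a plateau.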


In what I hope won’t distract from the logical analysis of Bostrom’s argument, I’d like to put it in a broader context.

Take a minute to think about the power of general purpose computing and the impact it has had on the past hundred years of human history. As the earliest digital computers were informed by notions of artificial intelligence (cf. Alan Turing), we can accurately say that the very machine I use to write this text, and the machine you use to read it, are the result of refined, formalized, and materialized instrumental reason. Every programming language is a level of abstraction over a machine that has no ends in itself, but which serves the ends of its programmer (when it’s working). There is a sense in which Bostrom’s argument is not about a near-future scenario but rather is just a description of how things already are.

Our very concepts of “technology” and “instrument” are so related that it can be hard to see any distinction at all. (cf. Heidegger, “The Question Concerning Technology”) Bostrom’s equating of instrumentality with intelligence is a move that makes more sense as computing becomes a ubiquitous part of our experience of technology. However, if any instrumental mechanism can be seen as a form of intelligence, that lends credence to panpsychist views of cognition as life. (cf. the Santiago theory)

Meanwhile, arguably the genius of the market is that it connects ends (through consumption or “demand”) with means (through manufacture and services, or “supply”) efficiently, bringing about the fruition of human desire. If you replace “instrumental intelligence” with “capital” or “money”, you get a familiar critique of capitalism as a system driven by capital accumulation at the expense of humanity. The analogy with capital accumulation is worthwhile here. Much as in Bostrom’s “takeoff” scenarios, we can see how capital (in the modern era, wealth) is reinvested in itself and grows at an exponential rate. Variable rates of return on investment lead to great disparities in wealth. We today have a “multipolar scenario” as far as the distribution of capital is concerned. At times people have advocated for an economic “singleton” through a planned economy.
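The formal parallel is exact, for what it’s worth: capital reinvested at rate $r$ has the same self-referential form as the takeoff equation sketched above,

$$\frac{dK}{dt} = rK \quad\Longrightarrow\quad K(t) = K_0\, e^{rt},$$

with variation in $r$ compounding over time into the disparities of the multipolar distribution.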

It is striking that the contemporary analytic philosopher and futurist Nick Bostrom contemplates the same malevolent force in his apocalyptic scenario as does Max Horkheimer in his 1947 treatise “Eclipse of Reason”: instrumentality run amok. Whereas Bostrom concerns himself primarily with what is literally a machine dominating the world, Horkheimer sees the mechanism of self-reinforcing instrumentality as pervasive throughout the economic and social system. For example, he sees engineers as loci of active instrumentalism. Bostrom never cites Horkheimer, let alone Heidegger. That there is a convergence of different philosophical sub-disciplines on the same problem suggests that there are convergent ultimate reasons which may triumph over convergent instrumental reasons in the end. What these convergent ultimate reasons are, and what their relationship to instrumental reasons is, remains a mystery.