## Tag: lyotard

### System 2 hegemony and its discontents

Recent conversations have brought me back to the third rail of different modalities of knowledge and their implications for academic disciplines. God help me. The chain leading up to this: a reminder of how frustrating it was trying to work with social scientists who methodologically reject the explanatory power of statistics; an intellectual encounter with a 20th century “complex systems” theorist who also didn’t seem to understand statistics; and the slow realization, bubbling up for me over the years, that I probably need to write an article or book about the phenomenology of probability, because I can’t find anything satisfying written on it.

The hypothesis I am now entertaining is that probabilistic or statistical reasoning is the intellectual crux, disciplinarily. The fields we now call “STEM” are all happy to embrace statistics as their main mode of empirical verification. This includes the use of mathematical proof for “exact” or a priori verification of methods. Sometimes the use of statistics is delayed or implicit; there is qualitative research that is totally consistent with statistical methods. But the key to this whole approach is that the fields, in combination, are striving for consistency.

But not everybody is on board with statistics! Why is that?

One reason may be that statistics is difficult to learn and execute. Doing probabilistic reasoning correctly is at times counter-intuitive. Quite literally, it can make your head hurt to think about it.

There is a lot of very famous empirical cognitive psychology that has explored this topic in depth. The heuristics and biases research program of Kahneman and Tversky was critical for showing that human behavior rarely accords with decision-theoretic models of mathematical, probabilistic rationality. An intuitive, “fast”, prereflective form of thinking (“System 1”) is capable of making snap judgments but is prone to biases such as the availability heuristic and the representativeness heuristic.
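The classic illustration of just how counter-intuitive this gets is base-rate neglect, the bias behind the representativeness heuristic. Here is a minimal sketch in Python, with made-up textbook numbers (not one of Kahneman and Tversky’s actual stimuli): a “99% accurate” test for a rare condition, where System 1’s snap judgment is wildly off.

```python
# Base-rate neglect: System 1 hears "the test is 99% accurate" and
# concludes a positive result means you're 99% likely to be sick.
# Bayes' theorem says otherwise when the condition is rare.

prevalence = 0.001          # 1 in 1,000 people have the condition
sensitivity = 0.99          # P(positive | sick)
false_positive_rate = 0.05  # P(positive | healthy)

# P(positive) by the law of total probability
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# P(sick | positive) by Bayes' theorem
p_sick_given_positive = sensitivity * prevalence / p_positive

print(f"P(sick | positive test) = {p_sick_given_positive:.3f}")
# ~0.019 -- under 2%, nowhere near the intuitive 99%
```

The snap judgment ignores the prior (the base rate), and that single omission moves the answer by a factor of fifty.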

A couple of general comments can be made about System 1. (These are taken from Tetlock’s review of this material in Superforecasting.) First, a hallmark of System 1 is that it takes whatever evidence it is working with as given; it never second-guesses it or questions its validity. Second, System 1 is fantastic at providing verbal rationalizations and justifications of anything it encounters, even when these can be shown to be disconnected from reality. Many colorful studies of split-brain cases, but also many other lab experiments, show the willingness people have to make up stories to explain anything, and their unwillingness to say, “this could be due to one of a hundred different reasons, or a mix of them, and so I don’t know.”

The cognitive psychologists also describe a System 2 cognitive process that is more deliberate and reflective. Presumably, this is the system that is sometimes capable of statistical or otherwise logical reasoning. And a big part of statistical reasoning is questioning the source of your evidence. A robust application of System 2 reasoning is capable of overcoming System 1’s biases. At the level of institutional knowledge creation, the statistical sciences are composed mainly of the formalized, shared results of System 2 reasoning.

Tetlock’s work, from Expert Political Judgment onward, is remarkable for showing that deference to one or the other cognitive system is to some extent a robust personality trait. Famously, those of the “hedgehog” cognitive style, who apply System 1 and a simplistic theory of the world to interpret everything they experience, are especially bad at predicting the outcomes of political events (which are certainly the results of ‘complex systems’), whereas those of the “fox” cognitive style, who are more cautious about considering evidence and coming to judgments, outperform them. Tetlock’s analysis seems to weigh in favor of System 2 as a way of navigating complex systems.

I would argue that there are academic disciplines, especially those grounded in Heideggerian phenomenology, that see the “dominance” of institutions (such as academic disciplines) that are based around accumulations of System 2 knowledge as a problem or threat.

This reaction has several different guises:

- A simple rejection of cognitive psychology, which exposed the System 1/System 2 distinction, as “behaviorism”. (This obscures the way cognitive psychology was a major break away from behaviorism in the 1950s.)
- A call for more “authentic experience”, couched in language suggesting ownership or the true subject of one’s experience, contrasting this with the more alienated forms of knowing that rely on scientific consensus.
- An appeal to originality: System 2 tends to converge; my System 1 methods can come up with an exciting new idea!
- The interpretivist methodological mandate for anthropological sensitivity to “emic”, or directly “lived”, experience of research subjects. This mandate sometimes blurs several individually valid motivations, such as: when emic experience is the subject matter in its own right, but (crucially) with the caveat that the results are not generalizable; when emic sensitivity is identified, via the researcher’s reflexivity, as a condition for research access; or when the purpose of the work is to surface or represent otherwise underrepresented views.

There are ways to qualify or limit these kinds of methodologies or commitments that make them entirely above reproach. Under these limits, however, their conclusions are always fragile. According to the hegemonic logic of System 2 institutions, a consensus of those thoroughly considering the statistical evidence can always supersede the “lived experience” of some group or individual. This is, at the methodological level, simply the idea that while we may make theory-laden observations, when those theories are disproved, those observations are invalidated as influenced by erroneous theory. Indeed, mainstream scientific institutions take this kind of procedural objectivity as their duty. There is no such thing as science unless a lot of people are often being proven wrong.

This provokes a great deal of grievance. “Who made scientists, an unrepresentative class of people and machines disconnected from authentic experience, the arbiter of the real? Who are they to tell me I am wrong, or my experiences invalid?” And this is where we start to find trouble.

Perhaps most troubling is how this plays out at the level of psychodynamic politics. To have one’s lived experiences rejected, especially lived experiences of trauma, and especially when those experiences are rejected wrongly, is deeply disturbing. One of the mightier political tendencies of recent years has been the idea that whole classes of people are systematically subject to this treatment. This is one reason, among others, for influential calls to recalibrate the weight given to the experiences of otherwise marginalized people. This is what Furedi calls the therapeutic ethos of the Left. It is slightly different from, though often conflated with, the idea that recalibration is necessary to allow in relevant data that was otherwise being excluded from consideration. This latter consideration comes up in the more managerialist discussion of creating technology that satisfies diverse stakeholders (…customers) through “participatory” design methods. The ambiguity of the term “bias” (does it mean a statistical error, or any tendency of an inferential system at all?) is sometimes leveraged to accomplish this conflation.
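For what it’s worth, the statistical sense has a precise definition (this gloss is mine, not drawn from the sources above): an estimator $\hat{\theta}$ of a parameter $\theta$ is biased when its expected value misses the target,

$$\mathrm{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta \neq 0.$$

Bias in this sense is error relative to a true value. The looser sense, “any tendency of an inferential system”, drops the requirement that there be a true value to miss, which is exactly where the conflation gets its traction.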

It is in practice very difficult to disentangle the different psychological motivations here. This is partly because they are deeply personal and mixed even at the level of the individual. (Highlighting this is why I have framed all this in terms of the cognitive science literature.) It is also partly because these issues are highly political. Being proven right, or wrong, has material consequences, sometimes. I’d argue: perhaps not as often as it should. But sometimes. And so there’s always a political interest, especially among those disinclined towards System 2 thinking, in maintaining a right to be wrong.

So it is hypothesized (perhaps going back to Lyotard) that at an institutional level there’s a persistent heterodox movement that rejects the ideal of communal intellectual integrity. Rather, it maintains that the field of authoritative knowledge must contain contradictions and disturbances of statistical scientific consensus. In Lyotard’s formulation, this heterodoxy seeks “legitimation by paralogy”, which suggests that its telos is at best a kind of creative intellectual emancipation from restrictive logics, generative of new ideas, but perhaps at worst a heterodoxy for its own sake.

This tendency has an uneasy relationship with the sociopolitical motive of a more integrated and representative society, which is often associated with the goal of social justice. If I understand these arguments correctly, the idea is that, in practice, legitimized paralogy is a way of giving the underrepresented a platform. This has the benefit of visibly increasing representation. Here, paralogy is legitimized as a means of affirmative action, but not as a means of improving system performance objectively.

This is a source of persistent difficulty and unease, as the paralogical tendency is never capable of truly emancipating itself, but rather, in its recuperated form, is always-already embedded in a hierarchy that it must deny to its initiates. Authenticity is subsumed, via agonism, to a procedural objectivity that proves it wrong.

A few weeks ago I went to a great talk by Victoria Stodden about how there’s a crisis of confidence in scientific research that depends on heavy computing. Long story short: because the data and code aren’t openly available, the results aren’t reproducible. That means there’s no check on prior research, and bad results can slip through and become the foundation for future work. This is bad.

Stodden’s solution was to push, within the scientific community and possibly in legislation (i.e., as a requirement on state-funded research), for open data and code in research. Right on!
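To give a sense of how low the bar is here, a minimal gesture toward this kind of reproducibility (my own sketch, to be clear, not Stodden’s actual recommendations) is just: publish the code, pin the randomness, and record the environment alongside the result.

```python
# A minimal reproducibility gesture: fix the seed and write out a
# manifest describing how the result was produced. (My sketch, not
# Stodden's recommendations.)
import json
import platform
import random
import sys

random.seed(42)  # pin the randomness so others get the same numbers

# stand-in for the actual analysis
result = sum(random.gauss(0, 1) for _ in range(10_000)) / 10_000

# record enough of the environment that someone can rerun this
manifest = {
    "result": result,
    "seed": 42,
    "python": sys.version,
    "platform": platform.platform(),
}
with open("run_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

Anyone with the script and the manifest can rerun the analysis and check the number. That is the whole check that goes missing when code and data stay closed.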

Then, something intriguing: somebody in the audience asked how this relates to open source development. Stodden, who just couldn’t stop saying amazing things that needed to be said that day, answered by saying that scientists have a lot to learn from the “open source world”, because they know how to build strong communities around their (open) work.

Looking around the room at this point, I saw several scientists toying with their laptops. I don’t think they were listening.

It’s a difficult thing coming from an open source background and entering academia, because the norms are close, but off.

The other day I posted to an informal departmental mailing list some criticism of, and questions about, a theorist with a lot of influence in the department: Bruno Latour. The reactions to that thread ranged pretty much all across the board, but one of the surprising ones was along the lines of “I’m not going to do your work for you by answering your question about Latour.” In other words, RTFM. Except, in this case, “the manual” was a book or two of dense academic literature in a field that I was just beginning to dip into.

I don’t want to make too much of this response, since there were a lot of extenuating circumstances, but it did strike me as an indication of one of the cultural divides between open source development and academic scholarship. In the former, you want as many people as possible to understand and use your cool new thing, because that enriches your community and makes you feel better about your contribution to the world. For some kinds of scholars, being the only one who understands a thing is a kind of distinction that brings pride and job opportunities, so you don’t really want other people to know as much about it as you do.

Similarly for computationally heavy sciences: if you think your job is to get grants to fund your research, you don’t really want anybody picking through it and telling you your methodology was busted. In an Internet Security course this semester, I’ve had the pleasure of reading John McHugh’s Testing Intrusion Detection Systems: A Critique of the 1998 and 1999 DARPA Off-line Intrusion Detection System Evaluation as Performed by Lincoln Laboratory. In this incredible paper, McHugh explains why a particular DARPA-funded Lincoln Labs Intrusion Detection research paper is BS, scientifically speaking.

In open source development, we would call McHugh’s paper a bug report. We would say, “McHugh is a great user of our research because he went through and tested for all these bugs, and even has recommendations about how to fix them. This is fantastic! The next release is going to be great.”
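If you squint, you can almost see the issue-tracker entry. (A hypothetical rendering, of course; McHugh wrote a journal article, not a bug report.)

```
Title: Evaluation results may not predict deployed performance

Component: 1998/1999 DARPA off-line IDS evaluation
Severity: major

Summary: The synthetic background traffic was never validated
against real network traffic, and the scoring procedure has
methodological problems, so the reported detection and
false-alarm rates may not generalize to real networks.

Suggested fix: validate the test data against real traffic and
rework the evaluation procedure before the next round.
```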

In the world of security research, Lincoln Labs complained to the publisher and got the article pulled.

OK, so security research is a young field with a lot of tough phenomena to deal with and not a ton of time to read up on 300 years of epistemology, philosophy of science, statistical learning theory, or each other’s methodological critiques. I’m not faulting the research community at all. However, it does show some of the trouble that happens in a field born out of industry and military funding concerns, without the pretensions to, or emphasis on, reproducible truth-discovery that you get in, say, physics.

All of this, it so happens, is what Lyotard describes in his monograph The Postmodern Condition (1979). Lyotard argues that because of cybernetics and information technologies, because of Wittgenstein, and because of the collapse of the “metanarratives” that once made anybody believe in anything silly like “truth”, there’s nothing left to legitimize knowledge except Winning.

You can win in two ways. You can research something that helps somebody beat somebody else up, or consume more, so that they give you funding. Or you can win by not losing: by pulling some wild theoretical stunt that puts you out of range of everybody else so that they can’t come after you. You become good at critiquing things in ways that sound smart, and you tell people who disagree with you that they haven’t read your canon. You hope that if they call your bluff and read it, they will be so converted by the experience that they will leave you alone.

Some, but certainly not all, of academia seems like this. You can still find people around who believe in epistemic standards: rational deduction, dialectical critique resolving to a consensus, sound statistical induction. Often people will see these as just a kind of meta-methodology in service to a purely pragmatic ideal of something that works well or looks pretty or makes you think in a new way, but that in itself isn’t so bad. Not everybody should be anal about methodology.

But these standards are in tension with the day-to-day of things, because almost nobody really believes they are after true ideas any more. It’s so easy to be cynical or territorial.

What seems to be missing is a sense of common purpose in academic work. Maybe it’s the publication incentive structure, maybe it’s because academia is an ideological proxy for class or sex warfare, maybe it’s because of a lot of big egos, maybe it’s the collapse of meta-narratives.

In FOSS development, there’s a secret ethic that’s not particularly well articulated by either the Free Software Movement or the Open Source Initiative, but which I believe is shared by a lot of developers. It goes something like this:

I’m going to try to build a totally great new thing. It’s going to be a lot of work, but it will be worth it because it’s going to be so useful and cool. Gosh, it would be helpful if other people worked on it with me, because this is a lonely pursuit and having others work with me will help me know I’m not chasing after a windmill. If somebody wants to work on it with me, I’m going to try hard to give them what they need to work on it. But hell, even if somebody tells me they used it and found six problems in it, that’s motivating; that gives me something to strive for. It means I have (or had) a user. Users are awesome; they make my heart swell with pride. Also, bonus, having lots of users means people want to pay me for services or hire me or let me give talks. But it’s not like I’m trying to keep others out of this game, because there is just so much that I wish we could build and not enough time! Come on! Let’s build the future together!

I think this is the sort of ethic that leads to the kind of community building that Stodden was talking about. It requires a leap of faith: that your generosity will pay off and that the world won’t run out of problems to be solved. It requires self-confidence, because you have to believe that you have something (even something small) to offer that will make you a respected part of an open community without walls to shelter you from criticism. But this ethic is the relentlessly spreading meme of the 21st century, and it’s probably going to be victorious by the start of the 22nd. So if we want our academic work to have staying power, we had better get on this wagon early, so we can benefit from the centrality effects in the growing openly collaborative academic network.

I heard David Weinberger give a talk last year on his new book Too Big to Know, in which he argued that “the next Darwin” is going to be actively involved in social media as a research methodology. Tracing their research notes will involve an examination of their inbox and Facebook feed to see what conversations were happening, because so much knowledge transfer is happening socially and digitally, and it’s faster and more contextual than somebody spending a weekend alone reading books in a library. He’s right, except maybe for one thing: this digital dialectic (or pluralectic) implies that “the next Darwin” isn’t just one dude, Darwin, with his own ‘-ism’ and pernicious Social Darwinist adherents. Rather, it means that the next great theory of the origin of species is going to be built by a massive collaborative effort in which lots of people take an active part. The historical record will show their contributions not just with the clumsy granularity of conference publications and citations, but with the minute granularity of thousands of traced conversations. The theory itself will probably be too complicated for any one person to understand, but that’s OK, because it will be well architected and there will be plenty of domain experts to go to if anyone has problems with any particular part of it. And it will be growing all the time, maybe competing with a few other theories. For a while people might have to dual-boot their brains, until somebody figures out how to virtualize Foucauldian Quantum Mechanics on an Organic Data Splicing ideological platform, but one day some crazy scholar-hacker will find a way.

“Cool!” they will say, throwing a few bucks towards the Kickstarter project for a musical instrument that plays to the tune of the uncollapsed probabilistic power dynamics playing out between our collated heartbeats.

Does that future sound good? Good. Because it’s already starting. It’s just an evolution of the way things have always been, and I’m pretty sure, based on what I’ve been hearing, that it’s a way of doing things that’s picking up steam. It’s just not “normal” yet. Generation gap, maybe. That’s cool. At the rate things are changing, it will be here before you know it.