Digifesto

several words, all in a row, some about numbers

I am getting increasingly bewildered by the number of different paradigms available in academic research. Naively, I had thought I had a pretty good handle on this sort of thing coming into it. After trying to tackle the subject head on this semester, I feel like my head will explode.

I’m going to try to break down the options.

  • Nobody likes positivism, which went out of style when Wittgenstein refuted his own Tractatus.
  • Postpositivists say, “Sure, there isn’t really observer-independent inquiry, but we can still approximate that through rigorous methods.” The goal is an accurate description of the subject matter. I suppose this fits into a vision of science being about prediction and control of the environment, so generalizability of results would be considered important. I’d argue that this is also consistent with American pragmatism. I think “postpositivist” is a terrible name and would rather talk/think about pragmatism.
  • Interpretivism, which seems to be a more fashionable term than antipositivism, is associated with Weber and Frankfurt School thinkers, as well as with a feminist critique. The goal is for one reader (or scholarly community?) to understand another. “Understanding” here is meant intersubjectively–“I get you”. Interpretivists are skeptical of prediction and control as provided by a causal understanding. At times this skepticism is expressed as a belief that causal understanding (of people) is impossible; at other times, as a belief that causal understanding is nefarious.

Both teams share a common intellectual ancestor in Immanuel Kant, whom few people bother to read.

Habermas has room in his overarching theory for multiple kinds of inquiry–technical, intersubjective, and emancipatory/dramaturgical–but winds up getting mobilized by the interpretivists. I suspect this is the case because research aimed at prediction and control is better funded, because it is more instrumental to power. And if you’ve got funding there’s little incentive to look to Habermas for validation.

It’s worth noting that mathematicians still basically run their own game. You can’t beat pure reason at the research game. Much computer science research falls into this category. Pragmatists will take advantage of mathematical reasoning. I think interpretivists find mathematics a bit threatening because it seems like the only way to “interpret” mathematicians is by learning the math that they are talking about. When intersubjective understanding requires understanding verbatim, that suggests the subject matter is more objectively true than not.

The gradual expansion of computer science toward the social sciences through “big data” analysis can be seen as an expansion of what can be brought under mathematical closure.

Physicists still want to mathematize their descriptions of the universe. Some psychologists want to mathematize their descriptions. Some political scientists, sociologists, etc. want to mathematize their descriptions. Anthropologists don’t want to mathematize their descriptions. Mathematization is at the heart of the quantitative/qualitative dispute.

It’s worth noting that there are non-mathematized predictive theories, as well as mathematized theories that pretty much fail to predict anything.

digital qualities: some meditations on methodology

Text is a kind of data that is both qualitative (interpretable for the qualities it conveys) and quantitative (characterized by certain amounts of certain abstract tokens arranged in a specific order).
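To make that duality concrete, here is a minimal sketch (Python is my choice here; the post names no tools) showing one sentence viewed both ways: as prose to be interpreted, and as amounts of abstract tokens arranged in a specific order.

```python
from collections import Counter

# A sentence: qualitatively, something a reader interprets for meaning.
text = "the cat sat on the mat because the mat was warm"

# Quantitatively: a specific ordering of abstract tokens...
tokens = text.split()

# ...and certain amounts of each token.
counts = Counter(tokens)

print(tokens)   # ['the', 'cat', 'sat', 'on', 'the', 'mat', ...]
print(counts)   # Counter({'the': 3, 'mat': 2, 'cat': 1, ...})
```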

Statistical learning techniques are able to extract qualitative distinctions from quantitative data, through clustering processes for example. Non-parametric statistical methods allow qualitative distinctions to be extracted from quantitative data without specifying particular structure or features up front.
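As a hedged illustration of that claim, the sketch below (my own toy example, using scikit-learn, which the post does not mention) clusters purely numerical measurements and gets back discrete group labels: a qualitative distinction extracted from the quantities rather than stipulated in advance. A non-parametric method such as DBSCAN would go further and discover the number of groups as well.

```python
import numpy as np
from sklearn.cluster import KMeans

# Purely quantitative data: measurements with no labels or categories attached.
rng = np.random.default_rng(0)
data = np.concatenate([
    rng.normal(loc=0.0, scale=0.5, size=(50, 1)),   # one tacit "kind" of case
    rng.normal(loc=5.0, scale=0.5, size=(50, 1)),   # another tacit "kind"
])

# Clustering recovers a qualitative distinction (group 0 vs. group 1)
# from the quantities alone.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)

print(sorted(set(labels)))   # [0, 1]: two discrete kinds, extracted, not assumed
```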

Many cognitive scientists and computational neuroscientists believe that this is more or less how perception works. The neurons in our eyes (for example) provide a certain kind of data to downstream neurons, which activate according to quantifiable regularities in neuron activation. A qualitative difference that we perceive is due to a statistical aggregation of these inputs in the context of a prior, physically definite, field of neural connectivity.

A source of debate in the social sciences is the relationship between qualitative and quantitative research methods. As heir to the methods of the harder sciences, whose success is indubitable, quantitative research is often assumed to be credible up to the profound limits of its method. A significant amount of ink has been spilled distinguishing qualitative research from quantitative research and justifying it in the face of skeptical quantitative types.

Qualitative researchers, as a rule, work with text. This is trivially true, since a limiting condition of qualitative research appears to be the production of a document explicating the research conclusions. But if we are to believe several instructional manuals on qualitative research, then the work of an ethnographer, say, involves jottings, field notes, interview transcripts, media transcripts, coding of notes, axial coding of notes, theoretical coding of notes, or, more broadly, the noting of narratives (often written down), the interpreting of text, a hermeneutic exposition of hermeneutic expositions ad infinitum down an endless semiotic staircase.

Computer assisted qualitative data analysis software passes the Wikipedia test for “does it exist”.

Such software processes qualitative data, and data processed by computers is necessarily quantitative. Hence, qualitative data is necessarily quantitative. This is unsurprising, since so much qualitative data is text. (See above.)

We might ask: what makes the work qualitative researchers do qualitative as opposed to quantitative, if the data they work with is quantitative? We could answer: it’s their conclusions that are qualitative.

But so are the conclusions of a quantitative researcher. A hypothesis is, generally speaking, a qualitative assessment that is then operationalized into a prediction whose correspondence with the data can be captured quantitatively through a statistical model. The statistical apparatus is meant to guide our expectations about the generalizability of results.
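To spell out that pipeline with a toy example of my own (none of these numbers come from the post): a qualitative hypothesis such as “group A tends to score higher than group B” is operationalized as a difference of means and checked against data with a standard statistical test.

```python
from scipy import stats

# Qualitative hypothesis: "group A tends to score higher than group B."
group_a = [5.1, 4.8, 5.5, 5.0, 5.3, 4.9]
group_b = [4.2, 4.6, 4.0, 4.4, 4.1, 4.5]

# Operationalized quantitatively: compare the sample means with a
# two-sample t-test.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# The statistic and p-value summarize how well the data fit the prediction
# and guide expectations about how far the result generalizes.
print(t_stat, p_value)
```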

Maybe the qualitative researcher isn’t trying to get generalized results. Maybe they are just reporting a specific instance. Maybe generalizations are up to the individual interpreter. Maybe social scientific research can only apply and elaborate on an ideal type, tell a good story. All further insight is beyond the purview of the social sciences.

Hey, I don’t mean to be insensitive about this, but I’ve got two practical considerations: first, do you expect anyone to pay you for research that is literally ungeneralizable? That has no predictive or informative impact on the future? Second, if you believe that, aren’t you basically giving up all ground on social prediction to economists? Do you really want that?

Then there’s the mixed methods researcher. Or, the researcher who in principle admits that mixed methods are possible. Sure, the quantitative folks are cool. We’d just rather be interviewing people because we don’t like math.

That’s alright. Math isn’t for everybody. It would be nice if computers did it for us. (See above)

What some people say is: qualitative research generates hypotheses, quantitative research tests hypotheses.

Listen: that is totally buying into the hegemony of quantitative methods by relegating qualitative methods to an auxiliary role with no authority.

Let’s accept that hegemony as an assumption for a second, just to see where it goes. All authority comes from a quantitatively supported judgment. This includes the assessment of the qualitative researcher.

We might ask, “Where are the missing scientists?” about qualitative research, if it is to have any authority at all, even in its auxiliary role.

What would Bruno Latour do?

We could locate the missing scientists in the technological artifacts that qualitative researchers engage with. The missing scientists may lie within the computer assisted qualitative data analysis software, which dutifully treats qualitative data as numbers and tests that data experimentally and in a controlled way. The user interface is the software’s experimental instrument, through which it elicits “qualitative” judgments from its users. Of course, to the software, the qualitative judgments are quantitative data about the cognitive systems of the software’s users, black boxes that nevertheless have a mysterious regularity to them. The better the coding of the qualitative data, the more the mysteries of the black-box users are consolidated into regularities.

From the perspective of the computer assisted qualitative data analysis software, the whole world, including its users, is quantitative. By delegating quantitative effort to this software, we conserve the total mass of science in the universe. The missing mass is in the software. Or, maybe, in the visual system of the qualitative researchers, which performs non-parametric statistical inference on the available sensory data as delivered by the photoreceptors of the eye.

I’m sorry. I have to stop. Did you enjoy that? Did my Latourian analysis convince you of the primacy or at least irreducibility of the quantitative element within the social sciences?

I have a confession. Everything I’ve ever read by Latour smells like bullshit to me. If writing that here and now means I will never be employed in a university, then may God have mercy on the soul of academe, because its mind is rotten and its body dissolute. He is obviously a brilliant man but as far as I can tell nothing he writes is true. That said, if you are inclined to disagree, I challenge you to refute my Latourian analysis above, else weep before the might of quantification, which will forever dominate the process of inquiry, if not in man, then in our robot overlords and the unconscious neurological processes that prefigure them.

This is all absurd, of course. Simultaneously accepting the hegemony of quantitative methods and Latourian analysis has provided us with a reductio ad absurdum that compels us to negate some assumptions. If we discard Latourian analysis, then our quantitative “hegemony” dissolves as more and more quantitative work is performed by unthinking technology. All research becomes qualitative, a scholarly consideration of the poetry output by our software and instruments.

Nope, that’s not it either. Because somebody is building that software and those instruments, and that requires generalizability of knowledge, which qualitative methods have, so far in this argument, given up any precise claim to.

I’m going to skip some steps and cut to the chase:

I think the quantitative/qualitative distinction in social scientific research, and in research in general, is dumb.

I think researchers should recognize the fungibility of quantity and quality in text and other kinds of data. I think ethnographers and statistical learning theorists should warmly embrace each other and experience the bliss that is finding one’s complement.

Goodnight.