Tag: interpretivism

How to tell the story about why stories don’t matter

I’m thinking of taking this seminar because I’m running into the problem it addresses: how do you pick a theoretical lens for academic writing?

This is related to a conversation I’ve found myself in repeatedly over the past weeks. A friend who studied Rhetoric insists that the narrative and framing of history is more important than the events and facts. A philosopher friend minimizes the historical impact of increased volumes of “raw footage”, because ultimately it’s the framing that will matter.

Yesterday I had the privilege of attending Techraking III, a conference put on by the Center for Investigative Reporting with the generous support and presence of Google. It was a conference about data journalism. The popular sentiment within the conference was that data doesn’t matter unless it’s told with a story, a framing.

I find this troubling because while I pay attention to this world and the way it frames itself, I also read the tech biz press carefully, and it tells a very different narrative. Data is worth billions of dollars. Even data exhaust, the data fumes that come from your information processing factory, can be recycled into valuable insights. Data is there to be mined for value. And if you are particularly ingenious at it, you can build an expert system that acts on the data without needing interpretation. You build an information processing machine that acts according to mechanical principles that approximate statistical laws, and these machines are powerful.

As social scientists realize they need to be data scientists, and journalists realize they need to be data journalists, there seems to be in practice a tacit admission of the data-driven counter-narrative. This tacit admission is contradicted by the explicit rhetoric that glorifies interpretation and narrative over data.

This is an interesting kind of contradiction, as it takes place as much in the psyche of the data scientist as anywhere else. It’s like the mouth doesn’t know what the hand is doing. This is entirely possible since our minds aren’t actually that coherent to start with. But it does make the process of collaboratively interacting with others in the data science field super complicated.

All this comes to a head when the data we are talking about isn’t something simple like sensor data about the weather but rather is something like text, which is both data and narrative simultaneously. We intuitively see the potential of treating narrative mechanically, statistically. We certainly see the effects of this in our daily lives. This is what the most powerful organizations in the world do all the time.
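The dual status of text can be made concrete with a toy example: the same sentence is narrative to a reader, but to a machine it reduces to a frequency distribution that can be acted on without interpretation. A minimal sketch in Python (the sentence and variable names are illustrative, not drawn from any particular system):

```python
from collections import Counter

# To a reader, this is a (tiny) narrative; to the machine below,
# it is just tokens to be counted.
sentence = "the quick brown fox jumps over the lazy dog"

# The "mechanical, statistical" treatment: a bag-of-words count,
# the crudest building block of statistical text processing.
counts = Counter(sentence.split())

print(counts.most_common(3))
```

Everything the machine "knows" here is in `counts`; the story the sentence tells has vanished, which is precisely the reduction at issue.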

The irony is that the interpretivists, who are so quick to deny technological determinism, are the ones most vulnerable to being blindsided by “what technology wants.” Humanities departments are being slowly phased out, their funding cut. Why? Do they have an explanation for this? If interpretation/framing were as efficacious as they claim, they would be philosopher kings. So their sociopolitical situation contradicts their own rhetoric and ideology. Meanwhile, journalists who would like to believe that it’s the story that matters are, for the sake of job security, being corralled into classes to learn CSS, the stylesheet language that determines, mechanically, the logic of formatting and presentation.

Sadly, neither mechanists nor interpretivists have much of an interest in engaging this contradiction. This is because interpretivists chase funding by reinforcing the narrative that they are critically important, and the work of mechanists speaks for itself in corporate accounting (an uninterpretive field) without explanation. So this contradiction falls mainly into the laps of those coordinating interaction between tribes. Managers who need to communicate between engineering and marketing. University administrators who have to juggle the interests of humanities and sciences. The leadership of investigative reporting non-profits who need to justify themselves to savvy foundations and who are removed enough from particular skillsets to be flexible.

Mechanized information processing is becoming the new epistemic center. (Forgive me:) the Google supercomputer approximating statistics has replaced Kantian transcendental reason as the grounds for the bourgeois understanding of the world. This is threatening, of course, to the plurality of perspectives that do not themselves internalize the logic of machine learning. Where machine intelligence has succeeded, then, it has been by juggling this multitude of perspectives (and frames) through automated, data-driven processes. Machine intelligence is not comprehensible to lay interpretivism. Interestingly, lay interpretivism isn’t comprehensible yet to machine intelligence–natural language processing has not yet advanced so far. It treats our communications like we treat ants in an ant farm: a blooming, buzzing confusion of arbitrary quanta, fascinatingly complex for patterns we cannot see. And when it makes mistakes–and it does, often–we feel its effects as a structural force beyond our control. A change in the user interface of Facebook that suddenly exposes drunken college photos to employers and abusive ex-lovers.

What theoretical frame is adequate to tell this story, the story that’s determining the shape of knowledge today? For Lyotard, the postmodern condition is one in which metanarratives about the organization of knowledge collapse and leave only politics, power, and language games. The postmodern condition has gotten us into our present condition: industrial machine intelligence presiding over interpretivists battling in paralogical language games. When the interpretivists strike back, it looks like hipsters or Weird Twitter–paralogy as a subculture of resistance that can’t even acknowledge its own role as resistance for fear of recuperation.

We need a new metanarrative to get out of this mess. But what kind of theory could possibly satisfy all these constituents?

several words, all in a row, some about numbers

I am getting increasingly bewildered by the number of different paradigms available in academic research. Naively, I had thought I had a pretty good handle on this sort of thing coming into it. After trying to tackle the subject head-on this semester, I feel like my head will explode.

I’m going to try to break down the options.

  • Nobody likes positivism, which went out of style around the time Wittgenstein repudiated his own Tractatus.
  • Postpositivists say, “Sure, there isn’t really observer-independent inquiry, but we can still approximate that through rigorous methods.” The goal is an accurate description of the subject matter. I suppose this fits into a vision of science being about prediction and control of the environment, so generalizability of results would be considered important. I’d argue that this is also consistent with American pragmatism. I think “postpositivist” is a terrible name and would rather talk/think about pragmatism.
  • Interpretivism, which seems to be a more fashionable term than antipositivism, is associated with Weber and Frankfurt school thinkers, as well as a feminist critique. The goal is for one reader (or scholarly community?) to understand another. “Understanding” here is understood intersubjectively–“I get you”. Interpretivists are skeptical of prediction and control as provided by a causal understanding. At times, this skepticism is expressed as a belief that causal understanding (of people) is impossible; other times it is expressed as a belief that causal understanding is nefarious.

Both teams share a common intellectual ancestor in Immanuel Kant, whom few people bother to read.

Habermas has room in his overarching theory for multiple kinds of inquiry–technical, intersubjective, and emancipatory/dramaturgical–but winds up getting mobilized by the interpretivists. I suspect this is the case because research aimed at prediction and control is better funded, because it is more instrumental to power. And if you’ve got funding there’s little incentive to look to Habermas for validation.

It’s worth noting that mathematicians still basically run their own game. You can’t beat pure reason at the research game. Much computer science research falls into this category. Pragmatists will take advantage of mathematical reasoning. I think interpretivists find mathematics a bit threatening because it seems like the only way to “interpret” mathematicians is by learning the math that they are talking about. When intersubjective understanding requires understanding verbatim, that suggests the subject matter is more objectively true than not.

The gradual expansion of computer science toward the social sciences through “big data” analysis can be seen as a gradual expansion of what can be brought under mathematical closure.

Physicists still want to mathematize their descriptions of the universe. Some psychologists want to mathematize their descriptions. Some political scientists, sociologists, etc. want to mathematize their descriptions. Anthropologists don’t want to mathematize their descriptions. Mathematization is at the heart of the quantitative/qualitative dispute.

It’s worth noting that there are non-mathematized predictive theories, as well as mathematized theories that pretty much fail to predict anything.