Begin insomniac academic blogging:
Dave Lester has explained his strategy in graduate school as “living agily”, a reference to agile software development.
In trying to navigate the academic world, I find myself sniffing the air through conversations, email exchanges, tweets. Since this feels like part of my full-time job, I have been approaching the task with gusto and believe I am learning rapidly.
Intellectual fashions shift quickly. A year ago I first heard the term “digital humanities”. At the time, it appeared to be controversial but on the rise. Now, it seems like something people are either disillusioned with or pissed about. (What’s this based on? A couple conversations this week, a few tweets. Is that sufficient grounds to reify a ‘trend’?)
I have no dog in that fight yet. I can’t claim to understand what “digital humanities” means. But from what I gather, it represents a serious attempt to approach text in its quantitative/qualitative duality.
It seems that such a research program would: (a) fall short of traditional humanities methods at first, due to the primitive nature of the tools available, (b) become more insightful as the tools develop, and so (c) be both disgusting and threatening to humanities scholars who would prefer that their industry not be disrupted.
I was reminded through an exchange with some Facebook Marxists that Hegel wrote about the relationship between the quantitative and the qualitative. I forget if quantity was a moment in transition to quality, or the other way around, or if they bear some mutual relationship, for Hegel.
I’m both exhausted by and excited about the fact that, in order to understand the evolution of the environment I’m in and make strategic choices about how to apply myself, I have to (re?)read some Hegel. I believe the relevant sections are this and this from his Science of Logic.
This just in! Information about why people are outraged by digital humanities!
There we have it. Confirmation that the outrage at digital humanities is directed against the funding of research premised on the assumption “that formal characteristics of a text may also be of importance in calling a fictional text literary or non-literary, and good or bad”–i.e., that some aspects of literary quality may be reducible to quantitative properties of the text.
A lot of progress has been made in psychology by assuming that psychological properties–manifestly qualitative–supervene on quantitatively articulated properties of physical reality. The study of neurocomputation, for example, depends on this. This leads to all sorts of cool new technology, like prosthetic limbs and hearing aids and combat drones controlled by dreaming children (potentially).
So, is it safe to say that if you’re against digital humanities, you are against the unremitting march of technical progress? I suppose I could see why one would be, but I think that’s something we have to take a gamble on, steering it as we go.
In related news, I am getting a lot out of my course on statistical learning theory. Just now, while looking up something about what I’ve been learning to include in this post, I found this funny picture:
One thing that’s great about this picture is how it makes explicit that, in the model of mind adopted by statistical cognitive science theorists, The World is understood by us through a mentally internal Estimator whose parameters are, strictly speaking, quantitative. They are quantitative because they are posited to instantiate certain algorithms, such as those derived by statistical learning theorists. These algorithmic functions presumably supervene on a neurocomputational substrate.
But that’s a digression. What I wanted to say is how exciting belief propagation algorithms for computing marginal probabilities on probabilistic graphical models are!
What’s exciting about them is the promise they hold for the convergence of opinion onto correct belief through a simple algorithm. Each node in a network of variables listens to all of its neighbors. Occasionally (on a schedule whose parameters are free to be tuned to context) the node will synthesize the state of all of its neighbors except one, then push that “message” to the remaining neighbor, who is listening…
…and so on, recursively. This algorithm has mathematically guaranteed convergence properties when the underlying graph has no cycles: it finds the true marginal probabilities of the nodes exactly, in a guaranteed number of message-passing rounds (proportional to the diameter of the tree).
It also has some nice empirically determined properties when the underlying graph has cycles. In that setting it’s known as “loopy” belief propagation: convergence is no longer guaranteed, but it often yields good approximations in practice.
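To make the message-passing idea concrete, here is a minimal sketch of the sum-product updates on a toy three-node chain. Everything specific here is invented for illustration: the potentials `phi` and `psi`, the three-node topology, and the brute-force check at the end (which just demonstrates the tree-case guarantee numerically).

```python
from itertools import product

# A toy pairwise Markov random field on the chain 0 -- 1 -- 2,
# with binary variables. Potentials are made up for illustration.
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2)]
phi = {0: [1.0, 2.0], 1: [1.0, 1.0], 2: [3.0, 1.0]}  # unary potentials
psi = lambda a, b: 2.0 if a == b else 1.0            # "agreement" pairwise potential

neighbors = {i: [] for i in nodes}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# Every directed edge carries a message; start them all at uniform.
messages = {(i, j): [1.0, 1.0] for i in nodes for j in neighbors[i]}

def send(i, j):
    """Message from i to j: marginalize i out, folding in what i has
    heard from all of its neighbors EXCEPT j."""
    out = []
    for xj in (0, 1):
        total = 0.0
        for xi in (0, 1):
            incoming = 1.0
            for k in neighbors[i]:
                if k != j:
                    incoming *= messages[(k, i)][xi]
            total += phi[i][xi] * psi(xi, xj) * incoming
        out.append(total)
    z = sum(out)
    return [v / z for v in out]  # normalize for numerical hygiene

# "Flooding" schedule: every message updates each round.
# On a tree, a few rounds (diameter-many) suffice for exactness.
for _ in range(len(nodes)):
    messages = {(i, j): send(i, j) for (i, j) in messages}

def belief(i):
    """Estimated marginal at node i: local potential times all incoming messages."""
    b = [phi[i][x] for x in (0, 1)]
    for k in neighbors[i]:
        for x in (0, 1):
            b[x] *= messages[(k, i)][x]
    z = sum(b)
    return [v / z for v in b]

def exact(i):
    """Brute-force marginal, for checking the BP answer."""
    m = [0.0, 0.0]
    for assignment in product((0, 1), repeat=len(nodes)):
        w = 1.0
        for n in nodes:
            w *= phi[n][assignment[n]]
        for a, b in edges:
            w *= psi(assignment[a], assignment[b])
        m[assignment[i]] += w
    z = sum(m)
    return [v / z for v in m]

for i in nodes:
    print(i, belief(i), exact(i))  # the two columns agree on a tree
```

Because this graph is a tree, the beliefs match the brute-force marginals exactly after a couple of sweeps; on a graph with cycles you would run the same update loop and simply hope (often justifiably) for approximate convergence.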
The metaphor is loose, at this point. If I could dream my thesis into being at this moment, it would be a theoretical reduction of discourse on the internet (as a special case of discourse in general) to belief propagation on probabilistic graphical models. Ideally, it would have to account for adversarial agents within the system (i.e., it would have to be analyzed for its security properties), and support design recommendations for technology that would catalyze the process.
I think it’s possible. Not done alone, of course, but what projects are ever really undertaken alone?
Would it be good for the world? I’m not sure. Maybe if done right.