Digifesto


I miss writing

I miss writing.

For most of my formative years, writing was an important activity in my life. I would try to articulate what I cared about, the questions I had, what I was excited for. Often this was done as an intellectual pursuit, something para-academic, a search for answers beyond any curriculum, and without a clear goal in mind.

I wrote both in private and “in public”. Privately, in paper journals and letters to friends, then emails. “In public”, on social media of various forms. LiveJournal was a fantastic host for pseudonymous writing, and writing together and intimately with others was a shared pastime. These were the ‘old internet’ days, when the web was imagined as a frontier and creative space.

The character of writing changed as the Web, and its users, matured. Facebook expanded from its original base of students to include, well, everybody. Twitter had its “Weird Twitter” moment, which then passed as it became an increasingly transactional platform. Every public on-line space became a LinkedIn.

Instagram is, for a writer, depressing. Extremely popular and influential, but with hardly any written ideas, by design. YouTube can accommodate writing, but only with the added production value of performance, recording, and so on. It’s a thicker medium that makes writing, per se, seem small, or shallow.

The Web made it clear that all text is also data. Writing is a human act, full of meaning, but digital text, the visible trace of that act, is numerical. Text that is published online is at best the body of a message wrapped in so much other metadata. The provenance is, when the system is working, explicit. The message is both intrinsically numerical — encoded in bits of information — and extrinsically numerical: its impact, reach, and engagement are scored; it is indexed and served as a result to queries based on its triangulated position in the vast space of all possible text.
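The intrinsic numericality is literal. A small sketch in plain Python, with a sample sentence chosen only for illustration, shows a published text as the bytes and bits it already is on the wire:

```python
# A short text, viewed as the numbers it already is: its UTF-8 bytes
# and their bit patterns. The sentence is just an example.
message = "I miss writing."
data = message.encode("utf-8")

# one eight-bit pattern per byte
bits = " ".join(f"{b:08b}" for b in data)

# 'I' is code point 73, i.e. the bit pattern 01001001
assert data[0] == 73
assert bits.split()[0] == "01001001"
assert len(bits.split()) == len(data)
```

Everything after this encoding step — storage, transmission, ranking — operates on those numbers, not on the act of writing.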

Every day we are inundated with content, and within the silicon cage of our historical moment, people labor to produce new content, to elevate their place in the system of ranks and numbers. I remember, long ago, I would write for an imagined audience. Sometimes this audience was only myself. This was writing. I think writing is rarer now. Many people create content and the audience for that content is the system of ranks and numbers. I miss writing.

Generative Artificial Intelligence is another phase of this evolution. Text is encoded, intrinsically, into bits. Content is sorted, extrinsically, into tables of impact, engagement, conversion, relevance, and so on. But there had been a mystery, still, about the way words were put together, and what they meant when they came in this or that order.

That mystery has been solved, they say. By looking at all the text at once, each word or phrase can be represented as a vector in a high-dimensional space, positioned relative to all other texts. The meaning of a text is where it sits in that space, as understood by a silent, mechanical observer of all texts available. Relative to a large language model, the language in one person’s mind is small, shallow.
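As a toy illustration of meaning-as-position — not of how a real language model works — texts can be placed in a vector space by mere word counts and compared by cosine similarity. The snippets below are invented for the example:

```python
import math
from collections import Counter

# Each snippet becomes a bag-of-words vector; "meaning" is then a
# position relative to the other texts. Real models use learned dense
# embeddings; raw counts are only a stand-in for the idea.
texts = {
    "a": "the cat sat on the mat",
    "b": "a cat lay on a mat",
    "c": "stock prices fell sharply today",
}

def vectorize(s):
    return Counter(s.split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in set(u) | set(v))
    norm = math.sqrt(sum(x * x for x in u.values()))
    norm *= math.sqrt(sum(x * x for x in v.values()))
    return dot / norm

vecs = {k: vectorize(t) for k, t in texts.items()}

# the two cat sentences sit closer together than either
# does to the finance sentence
assert cosine(vecs["a"], vecs["b"]) > cosine(vecs["a"], vecs["c"])
```

The observer never reads anything; it only measures where each text falls relative to the rest.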

Generative AI now excels at creating content. The silicon cage may soon start evicting its prisoners. Their labor isn’t needed any more. The numbers can take care of themselves.

I remember how to write. I now realize that what is special about writing is not that it produces text — this is now easily done by machines. It is that it is a human act that transforms the human as writer.

I wonder for how long humanity will remember how to read.

dreams of reason

Begin insomniac academic blogging:

Dave Lester has explained his strategy in graduate school as “living agily”, a reference to agile software development.

In trying to navigate the academic world, I find myself sniffing the air through conversations, email exchanges, tweets. Since this feels like part of my full-time job, I have been approaching the task with gusto and believe I am learning rapidly.

Intellectual fashions shift quickly. A year ago I first heard the term “digital humanities”. At the time, it appeared to be controversial but on the rise. Now, it seems like something people are either disillusioned with or pissed about. (What’s this based on? A couple conversations this week, a few tweets. Is that sufficient grounds to reify a ‘trend’?)

I have no dog in that race yet. I can’t claim to understand what “digital humanities” means. But from what I gather, it represents a serious attempt to approach text in its quantitative/qualitative duality.

It seems that such a research program would: (a) fall short of traditional humanities methods at first, due to the primitive nature of the tools available, (b) become more insightful as the tools develop, and so (c) be both disgusting and threatening to humanities scholars who would prefer that their industry not be disrupted.

I was reminded through an exchange with some Facebook Marxists that Hegel wrote about the relationship between the quantitative and the qualitative. I forget if quantity was a moment in transition to quality, or the other way around, or if they bear some mutual relationship, for Hegel.

I’m both exhausted by and excited about the fact that, in order to understand the evolution of the environment I’m in and make strategic choices about how to apply myself, I have to (re?)read some Hegel. I believe the relevant sections are this and this from his Science of Logic.

This just in! Information about why people are outraged by digital humanities!

There we have it. Confirmation that outrage at digital humanities is against the funding of research based on the assumption “that formal characteristics of a text may also be of importance in calling a fictional text literary or non-literary, and good or bad”, i.e., that some aspects of literary quality may be reducible to quantitative properties of the text.

A lot of progress has been made in psychology by assuming that psychological properties–manifestly qualitative–supervene on quantitatively articulated properties of physical reality. The study of neurocomputation, for example, depends on this. This leads to all sorts of cool new technology, like prosthetic limbs and hearing aids and combat drones controlled by dreaming children (potentially).

So, is it safe to say that if you’re against digital humanities, you are against the unremitting march of technical progress? I suppose I could see why one would be, but I think that’s something we have to take a gamble on, steering it as we go.

In related news, I am getting a lot out of my course on statistical learning theory. Looking up something I wanted to include in this post just now about what I’ve been learning, I found this funny picture:

One thing that’s great about this picture is that it makes explicit how, in a model of the mind adopted by statistical cognitive science theorists, The World is understood by us through a mentally internal Estimator whose parameters are, strictly speaking, quantitative. They are quantitative because they are posited to instantiate certain algorithms, such as those derived by statistical learning theorists. These algorithmic functions presumably supervene on a neurocomputational substrate.

But that’s a digression. What I wanted to say is how exciting belief propagation algorithms for computing marginal probabilities on probabilistic graphical models are!

What’s exciting about them is the promise they hold for the convergence of opinion onto correct belief based on a simple algorithm. Each node in a network of variables listens to all of its neighbors. Occasionally (on a schedule whose parameters are free to be tuned to context) the node synthesizes the messages from all of its neighbors except one, then pushes that “message” to the excluded neighbor, who is listening…

…and so on, recursively. This algorithm has mathematically guaranteed convergence properties when the underlying graph has no cycles: it computes the exact marginal probabilities of the nodes in a number of passes bounded by the diameter of the graph.
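A minimal sketch of the sum-product version on a tiny three-node chain of binary variables, with made-up potentials and checked against brute-force enumeration. Everything here is illustrative, not a general-purpose library:

```python
import itertools

# Chain x0 -- x1 -- x2 of binary variables. phi (unary) and psi
# (pairwise) potentials are invented for the example.
unary = [[0.6, 0.4], [0.5, 0.5], [0.3, 0.7]]   # phi_i(x_i)
pair = [[1.0, 0.5], [0.5, 1.0]]                # psi(x_i, x_j), symmetric
neighbors = {0: [1], 1: [0, 2], 2: [1]}

# m[(i, j)](x_j): node i's summary for neighbor j, initialized uniform
msgs = {(i, j): [1.0, 1.0] for i in neighbors for j in neighbors[i]}

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

for _ in range(5):  # a chain of diameter 2 settles after two sweeps
    new = {}
    for (i, j) in msgs:
        # combine the node's own potential with messages from every
        # neighbor except the one being addressed ...
        prod = list(unary[i])
        for k in neighbors[i]:
            if k != j:
                prod = [p * m for p, m in zip(prod, msgs[(k, i)])]
        # ... then push that summary through the pairwise potential
        new[(i, j)] = normalize(
            [sum(pair[xi][xj] * prod[xi] for xi in (0, 1)) for xj in (0, 1)]
        )
    msgs = new

def marginal(i):
    b = list(unary[i])
    for k in neighbors[i]:
        b = [x * m for x, m in zip(b, msgs[(k, i)])]
    return normalize(b)

def brute_force(i):
    # exact marginal by enumerating all eight joint configurations
    p = [0.0, 0.0]
    for x in itertools.product((0, 1), repeat=3):
        w = unary[0][x[0]] * unary[1][x[1]] * unary[2][x[2]]
        w *= pair[x[0]][x[1]] * pair[x[1]][x[2]]
        p[x[i]] += w
    return normalize(p)

# on a cycle-free graph the two agree exactly
for i in range(3):
    assert all(abs(a - b) < 1e-9
               for a, b in zip(marginal(i), brute_force(i)))
```

Each node only ever talks to its neighbors, yet the local messages settle into the globally correct marginals — which is exactly the promise that makes the metaphor tempting.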

It also has some nice empirically determined properties when the underlying graph has cycles, where it is known as “loopy” belief propagation.

The metaphor is loose, at this point. If I could dream my thesis into being at this moment, it would be a theoretical reduction of discourse on the internet (as a special case of discourse in general) to belief propagation on probabilistic graphical models. Ideally, it would have to account for adversarial agents within the system (i.e. it would have to be analyzed for its security properties), and support design recommendations for technology that catalyzed the process.

I think it’s possible. Not done alone, of course, but what projects are ever really undertaken alone?

Would it be good for the world? I’m not sure. Maybe if done right.