Digifesto

Tag: hegel

Alain Badiou and artificial intelligence

Last week I saw Alain Badiou speak at NYU on “Philosophy between Mathematics and Poetry”, followed by a comment from Alexander Galloway and then questions from the audience.

It was wonderful to see Badiou speak. Ever since I became acquainted with his work (rather recently, in the summer of 2016), I have seen it as a very hopeful direction for philosophy. As the title of his talk implies, Badiou takes mathematics very seriously, perhaps more seriously than most mathematicians, and this distinguishes him from many other philosophers for whom mathematics is something of an embarrassment. There are few fields more intellectually rarefied than mathematics, philosophy, and poetry, and yet somehow Badiou treats each fairly, in a way that suggests how broader disciplinary and cultural divisions between the humanities and technical fields may be reconciled. (This connects to some of my work on Philosophy of Computational Social Science.)

I have written a bit recently about existentialism in design, only to falter at the actual definition of existentialism. While it would, I’m sure, be incorrect to describe Badiou as an existentialist, there’s no doubt that he represents the great so-called Continental philosophical tradition: he is familiar with Heidegger and Nietzsche, and so on. I see certain substantive resonances between Badiou and other existentialist writers, though I think making the comparison now would be putting the cart before the horse.

Badiou’s position, in a nutshell, is like this:

Mathematics is a purely demonstrative form of writing and thinking. It communicates by proof, and it has a special kind of audience. It is a science. In particular, it is the science of all possible forms of multiplicity, which is the same thing as saying that it is the science of all being, or ontology.

Poetry, on the other hand, is not about being but rather about becoming. “Becoming” for Badiou is subjective: the conscious subject encounters something new, experiences a change, sees an unrealized potential. These are events, and perhaps the greatest contribution of Badiou is his formulation of, and emphasis on, the event as a category. In reference to earlier works, the event might be the moment when, through the Hegelian dialectic, a category is sublated. It could also perhaps correspond to the moment when existence overcomes being in de Beauvoir’s ethics (hence the connection to existentialism I’m proposing). Good poetry, in Badiou’s thought, shows how the things we experience can break out of the structures that objectify them, turning the (subjectively perceived) impossible into a new reality.

Poetry, perhaps because it is connected to realizing the impossible, or perhaps just because it is nice to listen to (I’m unclear on Badiou’s position on this point), is “seductive”: it encourages psychological connections to the speaker (such as transference) whether or not it is “true”. Classically, poetry meant epic poems and tragic theater. Today it could be cinema.

Philosophy has the problem that it has historically tried to be both demonstrative, like mathematics, and seductive, like poetry. It’s this impurity or tension that defines it. Philosophers need to know mathematics because it is ontology, but have to go beyond mathematics because their mission is to create events in subjectively experienced reality, which is historically situated, and therefore not merely a matter of mathematical abstraction. Philosophers are in the business of creating new forms of subjectivity, which is not the same as creating a new form of being.

I’m fine with all this.

Galloway made some comments I’m somewhat skeptical of, though I may not have understood them, since he seems to build mostly on Deleuze and Lacan, two intellectual sources I’ve never gotten into. But Galloway’s idea is to draw a connection between the “digital”, with all of its associations with computing technology, algorithms, the Internet, etc., and Badiou’s understanding of the mathematical, and to connect the “analog”, which is not discretized like the digital, to poetry. He suggested that Badiou’s sense of mathematics is arithmetic and excludes the geometric.

I take this interpretation of Galloway’s to be clever, but incorrect and uncharitable. It’s clever because it co-opts a great thinker’s work into the sociopolitical agenda of bolstering the cultural capital of the humanities against erosion by algorithmic curation and against their diminished fortunes relative to the technology industries. This has been the agenda of professional humanists for a long time, and it is annoying (to me) but, I suppose, necessary for the maintenance of the humanities, which are important.

However, I believe the interpretation is incorrect and uncharitable to Badiou because, though Badiou’s paradigmatic example of mathematics is set theory, he seems to have a solid enough grasp of Kurt Gödel’s main points to understand that mathematics includes a great variety of axiomatic systems, and these absolutely, indisputably include geometry and real analysis and all the rest. The fact that logical proof is a discrete process which can be reduced to and from Boolean logic and automated in an electric circuit is, of course, the foundational insight of the science of computation that we owe to Turing, Church, von Neumann, and others. It’s for these reasons that the potential of computation is so impressive and imposing: it potentially represents all possible forms of being. There are no limits to AI, at least none based on these mathematical foundations.

There were a number of good questions from the audience which led Badiou to clarify his position. The Real is relational; it is for a subject. This distinguishes it from Being, which is never relational (though of course there are mathematical theories of relations, which would seem to be a contradiction in Badiou’s thought?). He acknowledges that a difficult question is the part of Being in the Real.

Meanwhile, the Subject is always the result of an event.

Physics is a science of the existing form of the real, as opposed to the possible forms. Mathematics describes the possible forms of what exists. So empirical science can discover which mathematical form is the one that exists for us.

Another member of the audience asked about the impossibility of communism, which was on point because Badiou has at times defended communism or argued that the purpose of philosophy is to bring about communism. He made the point that one could not mathematically disprove the possibility of communism.

The real question, if I may be so bold as to comment afterwards, is whether communism can exist in our reality. Suppose that economics is like physics in that it is a science of the real as it exists for us. What if economics shows that communism is impossible in our reality?

Though it wasn’t quite made explicit, here is the subtle point of departure Badiou makes from what is otherwise conventionally unobjectionable. He would argue, I believe, that the purpose of philosophy is to create a new subjective reality where the impossible is made real, and he doesn’t see this process as necessarily bounded by, say, physics in its current manifestation. There is the possibility of a new event, and of seizing that event, through, for example, poetry. This is the article of faith in philosophy, and in poets, that has established them as the last bastion against dehumanization, objectification, reification, and the dangers of technique and technology since at least Heidegger’s “The Question Concerning Technology.”

Which circles us back to the productive question: how would we design a technology that furthers this objective of creating new subjective realities, new events? This is what I’m after.

Ascendency and overhead in networked ecosystems

Ulanowicz (2000) proposes, in information-theoretic terms, several metrics for ecosystem health, where one models an ecosystem as, for example, a trophic network. Principal among them is ascendancy, a measure of the extent to which energy flows in the system are predictably structured, weighted by the total energy flowing through the system. He believes that systems tend towards greater ascendancy in expectation, and that this is predictive of ecological ‘succession’ (and to some extent ecological fitness). On the other hand, overhead, which is unpredictability (perhaps, inefficiency) in energy flows (“free energy”?), is important for the system’s resiliency towards external shocks.
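For concreteness, here is a minimal sketch of how one might compute ascendancy and overhead from a flow matrix, following the standard information-theoretic definitions (ascendancy as scaled mutual information of the flows, overhead as development capacity minus ascendancy). The three-compartment flow matrix below is invented for illustration and is not taken from Ulanowicz’s papers.

```python
# A minimal sketch of Ulanowicz-style ascendancy and overhead for a toy
# flow network. The three-compartment flow matrix is invented for
# illustration; only the formulas follow the standard definitions.
import numpy as np

# T[i, j] = flow (e.g. energy) from compartment i to compartment j
T = np.array([[0.0, 8.0, 2.0],
              [1.0, 0.0, 5.0],
              [3.0, 0.0, 0.0]])

TST = T.sum()              # total system throughput
out_flows = T.sum(axis=1)  # T_i. : total outflow of each compartment
in_flows = T.sum(axis=0)   # T_.j : total inflow of each compartment

A = 0.0  # ascendancy: organized, predictably structured flow
C = 0.0  # development capacity: upper bound on ascendancy
for i in range(T.shape[0]):
    for j in range(T.shape[1]):
        if T[i, j] > 0:
            A += T[i, j] * np.log2(T[i, j] * TST / (out_flows[i] * in_flows[j]))
            C -= T[i, j] * np.log2(T[i, j] / TST)

overhead = C - A  # unorganized, "free" flow: the system's reserve/resilience
print(f"ascendancy = {A:.2f}, capacity = {C:.2f}, overhead = {overhead:.2f}")
```

The toy numbers only serve to show the bookkeeping: ascendancy captures how much of the total throughput is channeled along predictable pathways, and whatever capacity is left over shows up as overhead.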
At least in the papers I’ve read so far, Ulanowicz is not mathematically specific about the mechanism that leads to greater ascendancy, though he sketches some explanations. Autocatalytic cycles within the network reinforce their own positive perturbations and mutations, drawing in resources from external sources and crowding out competing pathways. These cycles become agents in themselves, exerting what Ulanowicz suggests is Aristotelian final or formal causal power on the lower-level components. In this way, freely floating energy is drawn into structures of increasing magnificence and complexity.

I’m reminded of Bataille’s The Accursed Share, in which he attempts to account for societal differences and the arc of human history through how societies spend their excess energy. “The sexual act is in time what the tiger is in space,” he says, insightfully. The tiger, as an apex predator, is a flame that clings brilliantly to the less glamorous ecosystem that supports it. That is why we adore tigers. And yet their existence is fragile, as it depends on both the efficiency and stability of the rest of the network. When its environment is disturbed, the tiger is the first to suffer.
Ulanowicz cites himself suggesting that a similar framework could be used to analyze computer networks. I have not read his account yet, though I anticipate several difficulties. He suggests that data flows in a computer network are analogous to energy flows within an ecosystem. That has intuitive appeal, but obscures the fact that some data is more valuable than others. A better analogy might be money as a substitute for energy. Or maybe there is a way to reduce both to a common currency, at least for modeling purposes.

Econophysics has been gaining steam, albeit controversially. Without knowing anything about it, but based just on statistical hunches, I suspect that this comes down to using more complex models on the super duper complex phenomenon of the economy and demonstrating their success there. In other words, I’m just guessing that the success of econophysics modeling is due to the greater degrees of freedom in the physics models compared to non-dynamic, structural equilibrium models. However, since ecology models the evolutionary dynamics of multiple competing agents (and systems of those agents), it’s possible that those models could capture quite a bit of what’s really going on and even be a source of strategic insight.

Indeed, economics already has a sense of stable versus unstable equilibria that resonates with the idea of the stability of ecological succession. These ideas translate into game-theoretic analysis as well. As we do more work with Strategic Bayesian Networks or other constructs to model equilibrium strategies in a networked, multi-agent system, I wonder if we can reproduce Ulanowicz’s results and use his ideas about ascendancy (which, I’ve got to say, are extraordinary and profound) to provide insight into the information economy.

I think that will require translating the ecosystem modeling into Judea Pearl’s framework for causal reasoning. Having been indoctrinated in Pearl’s framework in much of my training, I believe that it is general enough to subsume Ulanowicz’s results. But I have some doubt. In some of his later writings Ulanowicz refers explicitly to a “Hegelian dialectic” between order and disorder as a consequence of some of his theories, and between that and his insistence on his departure from mechanistic thinking over the course of his long career, I am worried that he may have transcended what it’s possible to do even with the modeling power of Bayesian networks. The question is: what then? It may be that once one’s work sublimates beyond our ability to model explicitly and intervene strategically, it becomes irrelevant. (I get the sense that in academia, Ulanowicz’s scientific philosophizing is a privilege reserved for someone tenured who, late in their career, is free to make their peace with the world in their own way.) But reading his papers is so exhilarating to me. I had no exposure to ecology before this, so his papers are packed with fresh ideas. So while I don’t know how to justify it to any of my mentors or colleagues, I think I just have to keep diving into it when I can, on the side.

dreams of reason

Begin insomniac academic blogging:

Dave Lester has explained his strategy in graduate school as “living agily”, a reference to agile software development.

In trying to navigate the academic world, I find myself sniffing the air through conversations, email exchanges, tweets. Since this feels like part of my full time job, I have been approaching the task with gusto and believe I am learning rapidly.

Intellectual fashions shift quickly. A year ago I first heard the term “digital humanities”. At the time, it appeared to be controversial but on the rise. Now, it seems like something people are either disillusioned with or pissed about. (What’s this based on? A couple conversations this week, a few tweets. Is that sufficient grounds to reify a ‘trend’?)

I have no dog in that fight yet. I can’t claim to understand what “digital humanities” means. But from what I gather, it represents a serious attempt to approach text in its quantitative/qualitative duality.

It seems that such a research program would: (a) fall short of traditional humanities methods at first, due to the primitive nature of the tools available, (b) become more insightful as the tools develop, and so (c) be both disgusting and threatening to humanities scholars who would prefer that their industry not be disrupted.

I was reminded through an exchange with some Facebook Marxists that Hegel wrote about the relationship between the quantitative and the qualitative. I forget if quantity was a moment in transition to quality, or the other way around, or if they bear some mutual relationship, for Hegel.

I’m both exhausted by and excited about the fact that, in order to understand the evolution of the environment I’m in and make strategic choices about how to apply myself, I have to (re?)read some Hegel. I believe the relevant sections are this and this from his Science of Logic.

This just in! Information about why people are outraged by digital humanities!

There we have it. Confirmation that the outrage at digital humanities is against the funding of research based on the assumption “that formal characteristics of a text may also be of importance in calling a fictional text literary or non-literary, and good or bad”–i.e., that some aspects of literary quality may be reducible to quantitative properties of the text.
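As a toy illustration of what “quantitative properties of the text” could mean in practice (my example, not anything from the funded research), here is a sketch that computes a few formal features one might then try to correlate with judgments of literary quality:

```python
# A toy illustration (not from the post or the funded research) of "formal
# characteristics of a text" as quantitative features of a passage.
import re

def formal_features(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "num_words": len(words),
        "type_token_ratio": len(set(words)) / len(words),    # lexical diversity
        "avg_sentence_length": len(words) / len(sentences),  # rough syntactic complexity
        "avg_word_length": sum(len(w) for w in words) / len(words),
    }

print(formal_features("Call me Ishmael. Some years ago, never mind how long "
                      "precisely, I went to sea."))
```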

A lot of progress has been made in psychology by assuming that psychological properties–manifestly qualitative–supervene on quantitatively articulated properties of physical reality. The study of neurocomputation, for example, depends on this. This leads to all sorts of cool new technology, like prosthetic limbs and hearing aids and combat drones controlled by dreaming children (potentially).

So, is it safe to say that if you’re against digital humanities, you are against the unremitting march of technical progress? I suppose I could see why one would be, but I think that’s something we have to take a gamble on, steering it as we go.

In related news, I am getting a lot out of my course on statistical learning theory. Looking up something I wanted to include in this post just now about what I’ve been learning, I found this funny picture:

One thing that’s great about this picture is that it makes explicit how, in a model of the mind adopted by statistical cognitive science theorists, The World is understood by us through a mentally internal Estimator whose parameters are, strictly speaking, quantitative. They are quantitative because they are posited to instantiate certain algorithms, such as those derived by statistical learning theorists. These algorithmic functions presumably supervene on a neurocomputational substrate.

But that’s a digression. What I wanted to say is how exciting belief propagation algorithms for computing marginal probabilities on probabilistic graphical models are!

What’s exciting about them is the promise they hold for the convergence of opinion onto correct belief via a simple algorithm. Each node in a network of variables listens to all of its neighbors. Occasionally (on a schedule whose parameters are free to be optimized to the context), the node synthesizes the states of all of its neighbors except one, then pushes that “message” to the excepted neighbor, who is listening…

…and so on, recursively. This algorithm has nice, mathematically guaranteed convergence properties when the underlying graph has no cycles: it finds the exact marginal probabilities of the nodes in a guaranteed (finite) number of steps.

It also has some nice empirically determined properties when the underlying graph has cycles.
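To make the message-passing picture concrete, here is a minimal sketch of sum-product belief propagation on a tiny chain of three binary variables. The potentials are made up for illustration, and the brute-force computation at the end is only there to check that, on a tree, the beliefs match the exact marginals.

```python
# A minimal sketch of sum-product belief propagation on a chain of three
# binary variables. The unary and pairwise potentials are invented for
# illustration; the brute-force check confirms exactness on a tree.
import numpy as np
from itertools import product

phi = {0: np.array([0.7, 0.3]),          # local evidence for each variable
       1: np.array([0.5, 0.5]),
       2: np.array([0.2, 0.8])}
psi = {(0, 1): np.array([[1.0, 0.5],     # pairwise compatibility, indexed [x_i, x_j]
                         [0.5, 1.0]]),
       (1, 2): np.array([[1.0, 0.2],
                         [0.2, 1.0]])}
neighbors = {0: [1], 1: [0, 2], 2: [1]}

def pairwise(i, j):
    # Return the potential oriented as [states of i, states of j].
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

# Messages m[(i, j)](x_j): what node i tells node j about j's states.
m = {(i, j): np.ones(2) for i in neighbors for j in neighbors[i]}

for _ in range(10):  # a few synchronous sweeps; exact on a tree
    new_m = {}
    for (i, j) in m:
        # Synthesize node i's local evidence with all incoming messages
        # except the one from j, then push the result to j.
        others = [m[(k, i)] for k in neighbors[i] if k != j]
        incoming = np.prod(others, axis=0) if others else np.ones(2)
        msg = pairwise(i, j).T @ (phi[i] * incoming)
        new_m[(i, j)] = msg / msg.sum()  # normalize for numerical stability
    m = new_m

# Belief at each node: local evidence times all incoming messages.
for i in sorted(phi):
    b = phi[i] * np.prod([m[(k, i)] for k in neighbors[i]], axis=0)
    print("BP marginal of x%d:" % i, b / b.sum())

# Brute-force marginals over all 2^3 joint assignments, for comparison.
joint = np.zeros((2, 2, 2))
for x in product([0, 1], repeat=3):
    p = phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
    p *= psi[(0, 1)][x[0], x[1]] * psi[(1, 2)][x[1], x[2]]
    joint[x] = p
joint /= joint.sum()
print("exact marginals:", joint.sum(axis=(1, 2)), joint.sum(axis=(0, 2)),
      joint.sum(axis=(0, 1)))
```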

The metaphor is loose, at this point. If I could dream my thesis into being at this moment, it would be a theoretical reduction of discourse on the internet (as a special case of discourse in general) to belief propagation on probabilistic graphical models. Ideally, it would have to account for adversarial agents within the system (i.e., it would have to be analyzed for its security properties) and support design recommendations for technology that catalyzes the process.

I think it’s possible. Not done alone, of course, but what projects are ever really undertaken alone?

Would it be good for the world? I’m not sure. Maybe if done right.