Digifesto

One Magisterium: a review (part 1)

I have come upon a remarkable book, titled One Magisterium: How Nature Knows Through Us, by Seán Ó Nualláin, President, University of Ireland, California. It is dedicated “To all working at the edges of society in an uncompromising search for truth and justice.” Its acknowledgments section opens:

Kenyan middle-distance runners were famous for running like “scared rabbits”: going straight to the head of the field and staying there, come what may. Even more than was the case for my other books, I wrote this like a scared rabbit.

Ó Nualláin is a recognizable face at UC Berkeley though I think it’s fair to say that most of the faculty and PhD students couldn’t tell you who he is. To a mainstream academic, he is one of the nebulous class of people who show up to events. One glorious loophole of university culture is that the riches of intellectual communion are often made available in open seminars held by people so weary of obscurity that they are happy for any warm body that cares enough to attend. This condition combined with the city of Berkeley’s accommodating attitude towards quacks and vagrants adds flavor to the university’s intellectual character.

There is of course no campus for the University of Ireland, California. Ó Nualláin is a truly independent scholar. Unlike many more unfortunate intellectuals, he has made the brilliant decision to not quit his day job, which is as a musician. A Google inquiry into the man indicates he probably got his PhD from Dublin City University and spent a good deal of time around Stanford’s Symbolic Systems department. (EDIT: Sean has corrected me on the details of his accomplished biography in the comments.)

I got on his mailing lists some time ago because of my interest in the Foundations of Mind conference, which he runs in Berkeley. Later, I was impressed by his aggressive volley of questions when Nick Bostrom spoke at Berkeley (I’ve become familiar with Bostrom’s work through MIRI, formerly SingInst). I’ve spoken to him just a couple of times, once at a poster session at the Berkeley Institute of Data Science and once at Katy Huff’s scientific technology practice group, The Hacker Within.

I’m providing these details out of what you might call anthropological interest. At the School of Information I’ve somehow caught the bug of Science and Technology Studies by osmosis. Now I work for Charlotte Cabasse on her ethnographic team, despite believing myself to be a computational social scientist. This qualitative work is a wonderful excuse to write about one’s experiences.

My perceptions of Ó Nualláin are relevant, then, because they situate the author of One Magisterium as an outsider to the academic mainstream at Berkeley. This outsider status comes through quite heavily in the book, starting from the Acknowledgments section (which recognizes all the service staff at the bars and coffee shops where he wrote the book) and running as a regular theme throughout. Discontent with and rejection from academia-as-usual are articulated in sublimated form as harsh critique of the academic institution. Ó Nualláin is engaged in an “uncompromising search for truth and justice,” and the university as it exists today demands too many compromises.

Magisterium is a Catholic term for a teaching authority. One Magisterium refers to the book’s ambition of pointing to a singular teaching authority, a new one heretofore unrecognized by other teaching authorities such as mainstream universities. Hence the book is an attack on other sources of intellectual authority. An example passage:

The devastating news for any reader venturing a toe into the stormy waters of this book is that its writer’s view is that we may never be able to dignify the moral, epistemological and political miasma of the early twenty-first century with terms like “crisis” for which the appropriate solution is of course a “paradigm shift”. It may simply be a set of hideously interconnected messes; epistemological and administrative in the academy, institutional and moral in the greater society. As a consequence, the landscape of possible “solutions” may seem so unconstrained that the wisdom of Joe the barman may be seen to equal that of any series of tomes, no matter how well-researched.

This book is above all an attempt to unify the plurality of discourses — scientific, religious, moral, aesthetic, and so on — that obtain at the start of the third millenium.

An anthropologist of science might observe that this criticality-of-everything, coupled with the claim to have a unifying theory of everything, is a surefire way to get ignored by the academy. The incentive structure of the academy requires specialization and a political balance of ideas. If somebody were to show up with the right idea, it would discredit a lot of otherwise important people and put others out of a job.

The problem, or one of them (there are many mentioned in the first chapter of One Magisterium, titled “The Trouble with Everything”), is that Ó Nualláin is right. At least as far as I can tell at this point. It is not an easy book to read; it is not structured linearly so much as (I imagine, not knowing what I’m talking about) like complex Irish dancing music, with motifs repeated and encircling themselves like a double helix or perhaps some more complex structure. Threaded together are topics from Quantum Mechanics, an analysis of the anthropic principle, a critique of Dawkins’ atheism and a positioning of the relevance of Vedanta theology to understanding physical reality, and an account of the proper role of the arts in society. I suspect that the book is meant to unfold on one’s psychology slowly, resulting in one’s adoption of what Ó Nualláin calls bionoetics, the new united worldview that is the alleged solution to everything.

A key principle of bionoetics is the recognition of what Ó Nualláin calls the “noetic” level of description, which is distinct from the “cognitive” third-person stance in that it is compressed in a way that makes it relevant to action in any particular domain of inquiry. Most of what he describes as “noetic” I read as “phenomenological”. I wonder if Ó Nualláin has read Merleau-Ponty–he uses the Husserlian critique of “psychologism” extensively.

I think it’s immaterial whether “noetic” is an appropriate neologism for this blending of the first-personal experience into the magisterium. Indeed, there is something comforting to a hard-headed scientist about Ó Nualláin’s views: contrary to the contemporary anthropological view, this first-personal knowledge has no place in academic science; its place is art. Having been in enough seminars at the School of Information where anthropologists lament not being taken seriously as producing knowledge comparable to that of the Scientists, and being one who appreciates the value of Art without needing it to be Science, I find something intuitively appealing about this view. Nevertheless, one wonders whether the epistemic foundation of Ó Nualláin’s critique of the academy is grounded in scientific inquiry or in his own and others’ first-personal noetic experiences, coupled with observations of who is “successful” in scientific fields.

Just one chapter into One Magisterium, I have to say I’m impressed with it in a very specific way. Some of us learn about the world with a synthetic mind, searching for the truth with as few constraints on one’s inquiry as possible. Indeed, that’s how I wound up at as nebulous a place as the School of Information at Berkeley. As one conducts the search, one finds oneself increasingly isolated. Some truths may never be spoken, and it’s never appropriate to say all the truths at once. This is especially true in an academic context, where it is paramount for the reputation of the institution that everyone avoid intellectual embarrassment whenever possible. So we make compromises, contenting ourselves with minute and politically palatable expertise.

I am deeply impressed that Ó Nualláin has decided to fuck all and tell it like it is.

Fascinated by Vijay Narayanan’s talk at #DataEDGE

As I write this I’m watching Vijay Narayanan, Director of Algorithms and Data Science Solutions at Microsoft, give his talk at the DataEDGE conference at UC Berkeley.

The talk is about “The Data Science Economy.” It began with a history of the evolution of the human central nervous system. He then went on to show the centralizing trend of the data economy. Data collection will become more mobile, while data processing will be done in the cloud. This data will be sifted by software and used to power a marketplace of services, which ultimately deliver intelligence to their users.

It was wonderful to see somebody so in the know reaffirming a suspicion I’ve had since starting graduate school but for which I’ve found little support in the academic setting. The suspicion is that what’s needed to accurately model the data science economy is a synthesis of cognitive science and economics that can show the comparative market value and competitiveness of different services.

This is not outside the mainline of information technology, management science, computer science, and the other associated disciplines that have been at the nexus of business and academia for 70 years. It’s an intellectual tradition rooted in the 1940s cybernetics vision of Norbert Wiener that was going strong in the social sciences as late as Beniger’s The Control Revolution, which, like Narayanan, draws an explicit connection between information processing in the brain and information processing in the microprocessor–notably while acknowledging the intermediary step of bureaucracy as a large-scale information processing system.

There’s significant cross-pollination between engineering, economics, computer science, and cognitive psychology. I’ve read papers from, say, the Education field in the late ’80s and early ’90s that refer to this cluster collectively as “the dominant paradigm”. At UC Berkeley today, it’s fascinating to see departmental politics play out over ‘data science’ that echo some of these concerns: a powerful alliance of ideas is being mobilized by industry and governments while other disciplines struggle to find relevance.

It’s possible that these specialized disciplinary discourses are important for cultivating thought that is valuable for its insight despite being fundamentally impractical. But I’m coming to a different view: maybe the ‘dominant paradigm’ is dominant because it is scientifically true, and other disciplinary orientations are suffering because they are based on unsound theory. If disciplines that are ‘dominated’ by another paradigm are floundering because they are, to put it simply, wrong, then that is a very elegant explanation for what’s going on.

The ramification of this is that what’s needed is not a number of alternatives to ‘the dominant paradigm’. What’s needed is for scholars to double down on the dominant paradigm and learn how to express in its logic the complexities and nuances that the other disciplines have been designed to capture. What we can hope for, in terms of intellectual continuity, is the preservation of what’s best in older ideas through a creative synthesis with the foundational principles of computer science and mathematical biology.

going post-ideology

I’ve spent a lot of my intellectual life in the grips of ideology.

I’m glad to be getting past all of that. That’s one reason why I am so happy to be part of Glass Bead Labs.

There are a lot of people who believe that it’s impossible to get beyond ideology. They believe that all knowledge is political and nothing can be known with true clarity.

I’m excited to have an opportunity to try to prove them wrong.

data science and the university

This is by now a familiar line of thought but it has just now struck me with clarity I wanted to jot down.

  1. Code is law, so the full weight of human inquiry should be brought to bear on software system design.
  2. (1) has been understood by “hackers” for years but has only recently been accepted by academics.
  3. (2) is due to disciplinary restrictions within the academy.
  4. (3) is due to the incentive structure of the academy.
  5. Since there are incentive structures for software development that are not available for subjects whose primary research project is writing, the institutional conditions that are best able to support software work and academic writing work are different.
  6. Software is a more precise and efficacious way of communicating ideas than writing because its interpretation is guaranteed by programming language semantics. (See the sketch after this list.)
  7. Because of (6), there is selective pressure toward making software the lingua franca of scholarly work.
  8. (7) is inducing a cross-disciplinary paradigm shift in methods.
  9. (8) may induce a paradigm shift in theoretical content, or it may result in science whose contents are tailored to the efficient execution of adaptive systems. (This is not to say that such systems are necessarily atheoretic, just that they are subject to different epistemic considerations.)
  10. Institutions are slow to change. That’s what makes them institutions.
  11. By (5), (7), and (9), the role of universities as the center of research is being threatened existentially.
  12. But by (1), the myriad intellectual threads currently housed in universities are necessary for software system design, or are at least potentially important.
  13. With (11) and (12), a priority is figuring out how to manage a transition to software-based scholarship without information loss.
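To illustrate (6) with an example of my own choosing (the Gini coefficient here is incidental): a verbal definition of “inequality” invites contested readings, but the same definition written as code has exactly one interpretation, fixed by the semantics of the programming language.

    def gini(incomes):
        """Gini coefficient: mean absolute difference between all pairs of
        incomes, normalized by twice the mean. Python's semantics fix exactly
        what this definition means; there is nothing left to interpret."""
        n = len(incomes)
        mean = sum(incomes) / n
        pairwise_diffs = sum(abs(x - y) for x in incomes for y in incomes)
        return pairwise_diffs / (2 * n * n * mean)

    # Two readers of this "text" cannot disagree about what it claims:
    print(gini([1, 1, 1, 1]))    # 0.0  -- perfect equality
    print(gini([0, 0, 0, 10]))   # 0.75 -- concentrated income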

a brief comment on feminist epistemology

One funny thing about having a blog is that I can tell when people are interested in particular posts through the site analytics. To my surprise, this post about Donna Haraway has been getting an increasing number of hits each month since I posted it. That is an indication that it has struck a chord, since steady exogenous growth like that is actually quite rare.

It is just possible that this means that people interested in feminist epistemology have been reading my blog lately. They probably have correctly guessed that I have not been the biggest fan of feminist epistemology because of concerns about bias.

But I’d like to take the opportunity to say that my friend Rachel McKinney has been recommending I read Elizabeth Anderson‘s stuff if I want to really get to know this body of theory. Since Rachel is an actual philosopher and I am an amateur who blogs about it on weekends, I respect her opinion on this a great deal.

So today I started reading through Anderson’s Stanford Encyclopedia of Philosophy article on Feminist Epistemology and I have to say I think it’s very good. I like her treatment of the situated knower. It’s also nice to learn that there are alternative feminist epistemologies to certain standpoint theories that I think are troublesome. In particular, it turns out that those standpoint theories are now considered by feminist philosophers to be from a brief period in the ’80s that the field has already moved past! Now subaltern standpoints are considered privileged in terms of discovery more than privileged in terms of justification.

This position is certainly easier to reconcile with computational methods. For example, it’s in a sense just mathematically correct if you think about it in terms of information gain from a sample. This principle appears to have been rediscovered recently by the equity-in-data-science people in discussions of potential classifier error.
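As a minimal sketch of that reading, assuming a toy Beta-Bernoulli model with invented counts (this is my own illustration, not anything from Anderson or the classifier-error literature): one more observation of a sparsely sampled group reduces our uncertainty about that group far more than one more observation of a heavily sampled one.

    from scipy.stats import beta

    def posterior_entropy(successes, failures):
        # Differential entropy of the Beta posterior under a uniform prior.
        return beta(1 + successes, 1 + failures).entropy()

    # A heavily sampled group vs. a sparsely sampled one (counts invented):
    gain_majority = posterior_entropy(600, 400) - posterior_entropy(601, 400)
    gain_minority = posterior_entropy(6, 4) - posterior_entropy(7, 4)

    print(gain_majority)  # tiny: the 1001st sample teaches us almost nothing
    print(gain_minority)  # much larger: the 11th sample is still informative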

I’ve got some qualms about the articulation of this learning principle in the absence of a particular inquiry or decision problem because I think there’s still a subtle shift in the argumentation from logos to ethos embedded in there (I’ve been seeing things through the lens of Aristotelian rhetoric lately and it’s been surprisingly illuminating). I’m on the lookout for a concrete application of where this could apply in a technical domain, as opposed to as an articulation of a political affinity or anxiety in the language of algorithms. I’d be grateful for links in the comments.

Edit:

Wait, maybe I already built one. I am not sure if that really counts.

Horkheimer and Wiener

[I began writing this weeks ago and never finished it. I’m posting it here in its unfinished form just because.]

I think I may be condemning myself to irrelevance by reading so many books. But as I make an effort to read up on the foundational literature of today’s major intellectual traditions, I can’t help but be impressed by the richness of their insight. Something has been lost.

I’m currently reading Norbert Wiener’s The Human Use of Human Beings (1950) and Max Horkheimer’s Eclipse of Reason (1947). The former I am reading for the Berkeley School of Information Classics reading group. Norbert Wiener was one of the foundational mathematicians of 20th century information technology, a colleague of Claude Shannon. Out of his own sense of social responsibility, he articulated his predictions for the consequences of the technology he developed in Human Use. This work was the foundation of cybernetics, an influential school of thought in the 20th century. Terrell Bynum, in his Stanford Encyclopedia of Philosophy article on “Computer and Information Ethics”, attributes to Wiener’s cybernetics the foundation of all future computer ethics. (I think that the threads go back earlier, at least through to Heidegger’s Question Concerning Technology.) It is hard to find a straight answer to the question of what happened to cybernetics. By some reports, the artificial intelligence community cut off cybernetics’ NSF funding in the ’60s.

Horkheimer is one of the major thinkers of the very influential Frankfurt School, the postwar social theorists at the core of intellectual critical theory. Of the Frankfurt School, perhaps the most famous in the United States is Adorno. Adorno is also the most caustic and depressed, and unfortunately much of popular critical theory now takes on his character. Horkheimer is more level-headed. Eclipse of Reason is an argument about the ways that philosophical empiricism and pragmatism became complicit in fascism. Here is an interesting quotation.

It is very interesting to read them side by side. Their books were published only a few years apart, and Wiener and Horkheimer are giants of two very different intellectual traditions. There’s little reason to expect they ever communicated (a more thorough historian would know more). But each makes sweeping claims about society, language, and technology and contextualizes them in a broader intellectual awareness of religion, history and science.

Horkheimer writes about how the collapse of the Enlightenment project of objective reason has opened the way for a society ruled by subjective reason, which he characterizes as the reason of formal mathematics and scientific thinking that is neutral to its content. It is instrumental thinking in its purest, most rigorous form. His descriptions of it sound like gestures to what we today call “data science”–a set of mechanical techniques that we can use to analyze and classify anything, perfecting our understanding of technical probabilities towards whatever ends one likes.

I find this a more powerful critique of data science than recent paranoia about “algorithms”. It is frustrating to read something over sixty years old that covers the same ground as we are going over again today but with more composure. Mathematized reasoning about the world is an early 20th century phenomenon and automated computation a mid-20th century phenomenon. The disparities in power that result from the deployment of these tools were thoroughly discussed at the time.

But today, at least in my own intellectual climate, it’s common to hear a mention of “logic” met with the rebuttal “whose logic?”. Multiculturalism and standpoint epistemology, profoundly important for sensitizing researchers to bias, are taken to an extreme that glorifies technical ignorance. If the foundation of knowledge is in one’s lived experience, as these ideologies purport, and one does not understand the technical logic used so effectively by dominant identity groups, then one can dismiss technical logic as merely the cultural logic of an opposing identity group. I experience the technically competent person as the Other and cannot perceive their actions as skill but only as power, and in particular power over me. Because my lived experience is my surest guide, what I experience must be so!

It is simply tragic that the education system has promoted this kind of thinking so much that it pervades even mainstream journalism. This is tragic for reasons I’ve expressed in “objectivity is powerful”. One solution is to provide more accessible accounts of the lived experience of technicality through qualitative reporting, which I have attempted in “technical work”.

But the real problem is that the kind of formal logic at the foundation of modern scientific thought, including its most recent manifestation, ‘data science’, is at heart perfectly abstract and so cannot be captured by accounts of observed practices or lived experience. It is reason, or thought. Is it disembodied? Not exactly. But according to constructivist accounts of mathematical knowledge, which occupy a fortunate dialectical position in this debate, mathematical insight is built from embodied phenomenological primitives that, through their psychological construction, become abstract. This process makes it possible for people to learn abstract principles such as the mathematical theory of information on which so much of the contemporary telecommunications and artificial intelligence apparatus depends. These are the abstract principles with which the mathematician Norbert Wiener was so intimately familiar.

Horkheimer, pragmatism, and cognitive ecology

In Eclipse of Reason, Horkheimer rips into the American pragmatists Peirce, James, and Dewey like nobody I’ve ever read. Though these figures are normally seen as reasonable and benign, Horkheimer paints them as ignorant and as undermining the whole social order.

The reason is that he believes they reduce epistemology to a kind of instrumentalism. But that’s selling their position a bit short. Dewey’s moral epistemology is pragmatist in that it is driven by particular, situated interests and concerns, but these are ingredients to moral inquiry and not conclusions in themselves.

So to the extent that Horkheimer is looking to dialectical reason as the grounds for uncovering objective truths, Dewey’s emphasis on establishing institutions that allow for meaningful moral inquiry seems consistent with Horkheimer’s view. The difference is in whether the dialectics are transcendental (as for Kant) or immanent (as for Hegel?).

The tension around objectivity in epistemology in the present academic environment is that all claims to objectivity are necessarily situated, and this situatedness is raised as a challenge to their objective status. If the claims or their justification depend on conditions that exclude some subjects (as they no doubt do; settling whether dialectical reason is transcendental or immanent requires opportunities for reflection that are rare and privileged), can these conclusions be said to be true for all subjects?

The Friendly AI research program more or less assumes that yes, this is the case. Yudkowsky’s notion of Coherent Extrapolated Volition, the position arrived at by simulated, idealized reasoners, is a 21st-century remake of Peirce’s limiting consensus of the rational. And yet the cry from standpoint theorists and certain anthropologically inspired disciplines is a recognition of the validity of partial perspectives. Haraway, for example, calls for an alliance of partial perspectives. Critical and adversarial design folks appear to have picked up this baton. Their vision is of a future of constantly vying (“agonistic”) partiality, with no perspective presuming to be settled, objective or complete.

If we make cognitivist assumptions about the computationality of all epistemic agents, then we are forced to acknowledge the finiteness of all actually existing reasoning. Finite capacity and situatedness become two sides of the same coin. Partiality, then, becomes a function both of one’s place in the network (eccentricity vs. centrality) and of one’s capacity to integrate information from the periphery. Those locations in the network most able to valuably integrate information, whether they be Google’s data centers or the conversational hubs of research universities, are more impartial, more objective. But they can never be the complete system. Because of their finite capacity, their representations can at best be lossy compressions of the whole.
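A toy rendering of this picture (the graph and the summary statistic are invented for illustration): eccentricity and centrality are standard graph measures, and the hub’s integration of peripheral reports is necessarily a compression.

    import networkx as nx

    # A hub-and-spoke "epistemic network": one conversational hub (node 0),
    # six peripheral observers. The topology is invented for illustration.
    G = nx.star_graph(6)

    print(nx.eccentricity(G))           # hub: 1, periphery: 2
    print(nx.closeness_centrality(G))   # the hub is maximally central

    # The hub integrates the periphery's reports, but with finite capacity
    # it keeps only a lossy summary -- here, six numbers compressed to one:
    reports = {node: float(node) for node in range(1, 7)}
    summary = sum(reports.values()) / len(reports)
    print(summary)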

Horkheimer dreams of an objective truth obtainable by a single subject through transcendental dialectic. Perhaps he thinks this is unattainable today (I have to read on). But if there’s hope in this vision, it seems to me it must come from one of two possibilities:

  • The fortuitously fractal structure of the sociotechnical world such that an adequate representation of it can be maintained in its epistemic hubs through quining, or
  • A generative grammar or modeling language of cognitive ecology such that we can get insights into the larger interactive system from toy models, and apply these simplified models pragmatically in specific cases. For this to work and not suffer the same failures as theoretical economics, these models need to have empirical content. Something like Wolpert, Lee, and Bono’s Predictive Game Theory (for which I just discovered they’ve released a Python package…cool!) may be critical here. A toy sketch of this kind of model follows below.
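As a gesture at what such a toy model might look like, here is a minimal logit quantal-response sketch for a 2x2 game. To be clear, this is my own illustration of a predictive, distribution-valued treatment of games in general, not Wolpert, Lee, and Bono’s actual formalism or their package.

    import numpy as np

    # Invented payoff matrices for a 2x2 game (rows: player 1's actions,
    # columns: player 2's actions).
    A = np.array([[3.0, 0.0], [5.0, 1.0]])  # payoffs to player 1
    B = np.array([[3.0, 5.0], [0.0, 1.0]])  # payoffs to player 2

    def softmax(x, lam):
        z = np.exp(lam * (x - x.max()))
        return z / z.sum()

    def logit_response_prediction(A, B, lam=1.0, iters=500):
        """Damped fixed-point iteration: each player mixes over actions in
        proportion to exp(lam * expected payoff). The output is a predicted
        distribution over play, not a sharp equilibrium."""
        p = np.ones(2) / 2  # player 1's mixed strategy
        q = np.ones(2) / 2  # player 2's mixed strategy
        for _ in range(iters):
            p = 0.5 * p + 0.5 * softmax(A @ q, lam)
            q = 0.5 * q + 0.5 * softmax(B.T @ p, lam)
        return p, q

    # Low lam predicts near-uniform play; high lam approaches best response.
    print(logit_response_prediction(A, B, lam=0.1))
    print(logit_response_prediction(A, B, lam=10.0))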

Eclipse of Reason

I’m starting to read Max Horkheimer’s Eclipse of Reason. I have had high hopes for it and have not been disappointed.

The distinction Horkheimer draws in the first section, “Means and Ends”, is between subjective reason and objective reason.

Subjective reason is the kind of reasoning that is used to most efficiently achieve one’s goals, whatever they are. Writing even as early as 1947, Horkheimer notes that subjective reason has become formalized and reduced to the computation of technical probabilities. He is most likely referring to the formalization of logic in the Anglophone tradition by Russell and Whitehead and its use in early computer science. (See Imre Lakatos and programming as dialectic for more background on this, as well as resonant material on where this is going.)

Objective reason is, within a simple “means/ends” binary, most simply described as the reasoning of ends. I am not very far through the book, and Horkheimer is so far unspecific about what this entails in practice, instead articulating it as an idea that has fallen out of use. He associates it with Platonic forms. With logos–a word that becomes especially charged for me around Christmas and whose religious connotations are certainly intertwined with the idea of objectivity. Since it is objective and not bound to a particular subject, the rationality of correct ends is the rationality of the whole world or universe, its proper ordering or harmony. Humanity’s understanding of it is not a technical accomplishment so much as an achievement of revelation or wisdom, achieved–and I think this is Horkheimer’s Hegelian/Marxist twist–dialectically.

Horkheimer in 1947 believes that subjective reason, and specifically its formalization, has undermined objective reason by exposing its mythological origins. While we have countless traditions still based in old ideologies that give us shared values and norms simply out of habit, they have been exposed as superstition. And so while our ability to achieve our goals has been amplified, our ability to have goals with intellectual integrity has hollowed out. This is a crisis.

One reason this is a crisis is because (to paraphrase) the functions once performed by objectivity or authoritarian religion or metaphysics are now taken on by the reifying apparatus of the market. This is a Marxist critique that is apropos today.

It is not hard to see that Horkheimer’s critique of “formalized subjective reason” extends to the now-vast use of computational statistics, or “data science”. Moreover, it’s easy to see how the “Internet of Things” and everything else instrumented–the Facebook user interface, this blog post, everything else–participates in this reifying market apparatus. Every critique of the Internet and the data economy from the past five years has just been a reiteration of Horkheimer, whose warning came loud and clear in the ’40s.

Moreover, the anxieties of the “apocalyptic libertarians” of Sam Frank’s article, the Less Wrong theorists of friendly and unfriendly artificial intelligence, are straight out of the old books of the Frankfurt School. Ironically, today’s “rationalists” have no awareness of the broader history of rationality. Rather, their version of rationality begins with von Neumann and ends with two kinds of rationality: “epistemic rationality”, about determining correct beliefs, and “instrumental rationality”, about correctly reaching one’s ends. Both are formal and subjective, in Horkheimer’s analysis; they don’t even have a word for ‘objective reason’, which has so far fallen away from their awareness of what is intellectually possible.

But the consequence is that this same community lives in fear of the unfriendly AI–a superintelligence driven by a “utility function” so inhuman that it creates a dystopia. Unarmed with the tools of Marxist criticism, they are unable to see the present economic system as precisely that inhuman superintelligence, a monster bricolage of formally reasoning market apparati.

For Horkheimer (and I’m talking out of my butt a little here because I haven’t read enough of the book to really know; I’m going on some context I’ve read up on earlier), the formalization and automation of reason is part of the problem. Having a computer think for you is very different from actually thinking. The latter is psychologically transformative in ways that the former is not. It is hard for me to tell whether Horkheimer would prefer things to go back to the way they were, or thinks that we must resign ourselves to a bleak inhuman future, or what.

My own view, which I am worried is deeply quixotic, is that a formalization of objective reason would allow us to achieve its conditions faster. You could say I’m a logos-accelerationist. However, if the way to achieve objective reason is dialectically, then this requires a mathematical formalization of dialectic. That’s shooting the moon.

This is not unlike the goals and position of MIRI, except that I think I have some deep intellectual disagreements about their formulation of the problem.

Reflecting on “Technoscience and Expressionism” by @FractalOntology

I’ve come across Joseph Weissman’s (@FractalOntology) “Technoscience and Expressionism” and am grateful for it, as it’s filled me in on a philosophical position that I missed the first time around: accelerationism. I’m not a Deleuzian and prefer my analytic texts to plod, so I can’t say I understood all of the essay. On the other hand, I gather the angle of this kind of philosophizing is intentionally psychotherapeutic and hence serves an artistic/literary function rather than one that explicitly guides praxis.

I am curious about the essay because I would like to see a thorough analysis of the political possibilities for the 21st century that gets past 20th century tropes. The passions of journalistic and intellectual debate have an atavistic tendency due to a lack of imagination that I would like to avoid in my own life and work.

Accelerationism looks new. It was pronounced in a manifesto, which is a good start.

Here is a quote from it:

Democracy cannot be defined simply by its means — not via voting, discussion, or general assemblies. Real democracy must be defined by its goal — collective self-mastery. This is a project which must align politics with the legacy of the Enlightenment, to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves. We need to posit a collectively controlled legitimate vertical authority in addition to distributed horizontal forms of sociality, to avoid becoming the slaves of either a tyrannical totalitarian centralism or a capricious emergent order beyond our control. The command of The Plan must be married to the improvised order of The Network.

Hell yeah, the Enlightenment! Sign me up!

The manifesto calls for an end to the left’s emphasis on local action, transparency, and direct democracy. Rather, it calls for a muscular hegemonic left that fully employs and deploys “technoscience”.

It is good to be able to name this political tendency and distinguish it from other left tendencies. It is also good to distinguish it from “right accelerationism”, which Weissman identifies with billionaires who want to create exurb communities.

A left-accelerationist impulse is today playing out dramatically against a right-accelerationist one. And the right-accelerationists are about as dangerous as you may imagine. With silicon valley VCs, and libertarian technologists more generally reading Nick Land on geopolitical fragmentation, the reception or at least receptivity to hard-right accelerants seems problematically open (and the recent $2M campaign proposing the segmentation of California into six microstates seems to provide some evidence for this.) Billionaires consuming hard-right accelerationist materials arguing for hyper-secessionism undoubtedly amounts to a critically dangerous situation. I suspect that the right-accelerationist materials, perspectives, affect, energy expresses a similar shadow, if it is not partly what is catalyzing the resurgence of micro-fascisms elsewhere (and macro ones as well — perhaps most significant to my mind here is the overlap of right-acceleration with white nationalism, and more generally what is deplorably and disingenuously called “race realism” — and is of course simply racism; consider Marine le Pen’s fascist front, which recently won 25% of the seats in the French parliament, UKIP’s resurgence in Great Britain; while we may not hear accelerationist allegiances and watchwords explicitly, the political implications and continuity is at the very least somewhat unsettling…)

There is an unfortunate conflation of several different points of view here. It is too easy to associate racism, wealth, and libertarianism, as these are the nightmares of the left’s political imagination. If ideological writing is therapeutic, a way of articulating one’s dreams, then this is entirely appropriate, with a caveat: every nightmare is a creation of one’s own psychology more than a reflection of the real world.

The same elisions are made by Sam Frank in his recent article thematizing Silicon Valley libertarianism, friendly artificial intelligence research, and contemporary rationalism as a self-help technique. There are interesting organizational ties between these institutions that are validly worth investigating but it would be lazy to collapse vast swathes of the intellectual spectrum into binaries.

In March 2013 I wrote about the Bay Area Rationalists:

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.

I would like to say “I called it”–Sam Frank has recently written just such a sensationalist, exploitative piece in Harper’s Magazine. It is thoroughly enjoyable and I wouldn’t say it’s inaccurate. But I don’t think this is the best way to get to know these people. A better one is to attend a CFAR workshop. It used to be that you could avoid the fee with a promise to volunteer, and there was a money-back guarantee which extended to one’s promise to volunteer. If that’s still the case, then one can essentially attend for free.

Another way to engage this community intellectually, which I would encourage the left accelerationists to do because it’s interesting, is to start participating on LessWrong. For some reason this community is not subject to ideological raids like so many other community platforms. I think it could stand an influx of Deleuze.

Ultimately the left/right divide comes down to a question of distribution of resources and/or surplus. Left accelerationist tactics appear from here to be a more viable way of seizing resources than direct democracy. However, the question is whether accelerationist tactics inevitably result in inequalities that create control structures of the kind originally objected to. In other words, this may simply be politics as usual and nothing radical at all.

So there’s an intersection between these considerations (accelerationist vs. … decelerationism? Capital accumulation vs. capital redistribution?) and the question of decentralization of decision-making process (is that the managerialism vs. multistakeholderism divide?) whose logic is unclear to me. I want to know which affinities are necessary and which are merely contingent.

Discourse theory of law from Habermas

There has been at least one major gap in my understanding of Habermas’s social theory which I’m just now filling. The position Habermas reaches towards the end of Theory of Communicative Action vol. 2 and develops further in Between Facts and Norms (1992) is the discourse theory of law.

What I think went on is that Habermas eventually gave up on deliberative democracy in its purest form. After a career of scholarship on the public sphere, the ideal speech situation, and communicative action–fully developing the lifeworld as the ground for legitimate norms–he eventually had to make a concession to the “steering media” of money and power as necessary for the organization of society at scale. But at the intersection between lifeworld and system is law. Law serves as a transmission belt between legitimate norms established by civil society and “system”; at its best it is both efficacious and legitimate.

Law is ambiguous: it can serve legitimate citizen interests united in communicative solidarity, and it can also serve strong, powerful interests. But it’s where the action is, because it’s where Habermas sees the ability of the lifeworld to counter-steer the whole political apparatus towards legitimacy, including shifting the balance of power between lifeworld and system.

This is interesting because:

  • Habermas is like the last living heir of the Frankfurt School mission and this is a mature and actionable view nevertheless founded in the Critical Theory tradition.
  • If you pair it with Lessig’s Code is Law thesis, you get a framework for thinking about how technical mediation of civil society can be legitimate but also efficacious. I.e., code can be legitimized discursively through communicative action. Arguably, this is how a lot of open source communities work, as well as standards bodies.
  • Thinking about managerialism as a system of centralized power that provides a framework of freedoms within it, Habermas seems to be presenting an alternative model where law or code evolves with the direct input of civil stakeholders. I’m fascinated by where Nick Doty’s work on multistakeholderism in the W3C is going and think there’s an alternative model in there somewhere. There’s a deep consistency in this, noted a while ago (2003) by Froomkin but largely unacknowledged as far as I can tell in the Data and Society or Berkman worlds.

I don’t see in Habermas anything about funding the state. That would mean acknowledging military force and the power to tax. But this is progress for me.

References

Zurn, Christopher. “Discourse Theory of Law.” In Jürgen Habermas: Key Concepts, edited by Barbara Fultner.
