Digifesto

a new kind of scientism

Thinking it over, there are a number of problems with my last post. One was the claim that the scientism addressed by Horkheimer in 1947 is the same as the scientism of today.

Scientism is a pejorative term for the belief that science defines reality and/or is a solution to all problems. It’s not in common use now, but maybe it should be among the critical thinkers of today.

Frankfurt School thinkers like Horkheimer and Habermas used “scientism” to criticize the positivists, the 20th century philosophical school that sought to reduce all science and epistemology to formal empirical methods, and to reduce all phenomena, including social phenomena, to empirical science modeled on physics.

Lots of people find this idea offensive for one reason or another. I’d argue that it’s a lot like the idea that algorithms can capture all of social reality or perform the work of scientists. In some sense, “data science” is a contemporary positivism, and the use of “algorithms” to mediate social reality depends on a positivist epistemology.

I don’t know any computer scientists who believe in the omnipotence of algorithms. I did get an invitation to this event at UC Berkeley the other day, though:

This Saturday, at [redacted], we will celebrate the first 8 years of the [redacted].

Current students, recent grads from Berkeley and Stanford, and a group of entrepreneurs from Taiwan will get together with members of the Social Data Lab. Speakers include [redacted], former Palantir financial products lead and course assistant of the [redacted]. He will reflect on how data has been driving transforming innovation. There will be break-out sessions on sign flips, on predictions for 2020, and on why big data is the new religion, and what data scientists need to learn to become the new high priests. [emphasis mine]

I suppose you could call that scientistic rhetoric, though honestly it’s so preposterous I don’t know what to think.

Though I would recommend the term “scientism” to the critical set, I’m ambivalent about whether it’s appropriate to call the contemporary emphasis on algorithms scientistic, for the following reason: it may be that ‘data science’ processes are better than the procedures developed for the advancement of physics in the mid-20th century, because they stand on sixty years of foundational mathematical work that took modeling cognition as an important aim. Recall that the AI research program didn’t start until Chomsky took down Skinner. Horkheimer quotes Dewey commenting that until naturalist researchers were able to use their methods to understand cognition, they wouldn’t be able to develop (this is my paraphrase:) a totalizing system. But the foundational mathematics of information theory, Bayesian statistics, etc. are robust enough, or could be robust enough, to be universally intersubjectively valid. That would mean that data science stands on transcendental, not socially contingent, grounds.
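To make that concrete: the kind of formal results I have in mind are things like Bayes’ rule and Shannon’s entropy, formulas that arguably hold for any agent that represents uncertainty probabilistically, whatever its social situation:

```latex
P(h \mid d) = \frac{P(d \mid h)\, P(h)}{P(d)}
\qquad
H(X) = -\sum_{x} p(x) \log_2 p(x)
```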

That would open up a whole host of problems that take us even further back than Horkheimer to early modern philosophers like Kant. I don’t want to go there right now. There’s still plenty to work with in Horkheimer, and in “Conflicting panaceas” he points to one of the critical problems, which is how to reconcile lived reality in its contingency with the formal requirements of positivist or, in the contemporary data scientific case, algorithmic epistemology.

“Conflicting panaceas”; decapitation and dogmatism in cultural studies counterpublics

I’m still reading through Horkheimer’s Eclipse of Reason. It is dense writing and slow going. I’m in the middle of the second chapter, “Conflicting Panaceas”.

This chapter recognizes and then critiques a variety of intellectual stances of his contemporaries. Whereas in the first chapter Horkheimer takes aim at pragmatism, in this he concerns himself with neo-Thomism and positivism.

Neo-Thomism? Yes, that’s right. Apparently in 1947 one of the major intellectual contenders was a school of thought based on adapting the metaphysics of Saint Thomas Aquinas to modern times. This school of thought was notable enough that while Horkheimer is generally happy to call out the proponents of pragmatism and positivism by name and call them business interest lapdogs, he chooses instead to address the neo-Thomists anonymously, in a conciliatory footnote:

This important metaphysical school includes some of the most responsible historians and writers of our day. The critical remarks here bear exclusively on the trend by which independent philosophical thought is being superseded by dogmatism.

In a nutshell, Horkheimer’s criticism of neo-Thomism is that, since it tries and fails to repurpose old ontologies for the new world, it can’t fulfill its own ambitions as an intellectual system through rigor without losing the theological ambitions that motivate it: the identification of goodness, power, and eternal law. Since it can’t intellectually culminate, it becomes a “dogmatism” that can be coopted disingenuously by social forces.

This is, as I understand it, the essence of Horkheimer’s criticism of everything: That for any intellectual trend or project, unless the philosophical project is allowed to continue to completion within it, it will have its brains slurped out and become zombified by an instrumentalist capitalism that threatens to devolve into devastating world war. Hence, just as neo-Thomism becomes a dogmatism because it would refute itself if it allowed its logic to proceed to completion, so too does positivism become a dogmatism when it identifies the truth with disciplinarily enforced scientific methods. Since, as Horkheimer points out in 1947, these scientific methods are social processes, this dogmatic positivism is another zombie, prone to fads and politics not tracking truth.

I’ve been struggling over the past year or so with similar anxieties about what from my vantage point are prevailing intellectual trends of 2014. Perversely, in my experience the new intellectual identities that emerged to expose scientific procedures as social processes in the 20th century (STS) and establish rhetorics of resistance (cultural studies) have been similarly decapitated, recuperated, and dogmatic. [see 1 2 3].

Are these the hauntings of straw men? This is possible. Perhaps the intellectual currents I’ve witnessed are informal expressions, not serious intellectual work. But I think there is a deeper undercurrent which has turned up as I’ve worked on a paper resulting from this conversation about publics. It hinges on the interpretation of an influential article by Fraser in which she contests Habermas’s notion of the public sphere.

In my reading, Fraser more or less maintains the ideal of the public sphere as a place of legitimacy and reconciliation. For her, notably, it is inequitable; it is plural, not singular; the boundaries of what is public and private are in constant negotiation; and so on. But its function is roughly the same as it is for Habermas.

My growing suspicion is that this is not how Fraser is used by cultural studies today. This suspicion began when Fraser was introduced to me; upon reading her work, I did not find the objection implicit in the reference to her. It continued as I worked with the comments of a reviewer on a paper. It was recently confirmed while reading Chris Wisniewski’s “Digital Deliberation?” in Critical Review, vol. 25, no. 2, 2013. He writes well:

The cultural-studies scholars and critical theorists interested in diversifying participation through the Internet have made a turn away from this deliberative ideal. In an essay first published in 1990, the critical theorist Nancy Fraser (1999, 521) rejects the idealized model of bourgeois public sphere as defined by Habermas on the grounds that it is exclusionary by design. Because the bourgeois public sphere brackets hierarchies of gender, race, ethnicity, class, etc., Fraser argues, it benefits the interests of dominant groups by default through its elision of socially significant inequalities. Lacking the ability to participate in the dominant discourse, disadvantaged groups establish alternative “subaltern counterpublics”.

Since the ideal speech situation does not acknowledge the socially significant inequalities that generate these counterpublics, Fraser argues for a different goal: a model of participatory democracy in which intercultural communications across socially stratified groups occur in forums that do not elide differences but instead allow diverse multiple publics the opportunity to determine the concerns or good of the public as a whole through “discursive contestations.” Fraser approaches these subgroups as identity publics and argues that culture and political debate are essentially power struggles among self-interested subgroups. Fraser’s ideas are similar to those prevalent in cultural studies (see Wisniewski 2007 and 2010), a relatively young discipline in which her work has been influential.

Fraser’s theoretical model is inconsistent with studies of democratic voting behavior, which indicate that people tend to vote sociotropically, according to a perceived collective interest, and not in favor of their own perceived self-interest (e.g., Kinder and Kiewiet 1981). The argument that so-called “mass” culture excludes the interests of dominated groups in favor of the interests of the elites loses some of its valence if culture is not a site through which self-interested groups vie for their objective interests, but is rather a forum in which democratic citizens debate what constitutes, and the best way to achieve, the collective good. Diversification of discourse ceases to be an end in itself.

I think Wisniewski hits the nail on the head here, a nail I’d like to drive in farther. If culture is conceived of as consisting of the contests of self-interested identity groups, as it is in this version of cultural studies, then cultural studies must see itself as one of many self-interested identities. Cultural studies becomes, by its own logic, a counterpublic that exists primarily to advance its own interests.

But just like neo-Thomism, this positioning decapitates cultural studies by preventing it from intellectually confronting its own limitations. No identity can survive rigorous intellectual interrogation, because all identities are based on contingency, finitude, trauma. Cultural studies adopts and repurposes historical rhetorics of liberation much as the neo-Thomists adopted and repurposed the historical metaphysics of Christianity. The obsolescence of these rhetorics, like the obsolescence of Thomistic metaphysics, is what makes them dangerous. A rhetoric that maintains its own subordination as a condition of its own identity can never truly liberate; it can only antagonize. Unable to intellectually realize its own purpose, it becomes purposeless and hence coopted and recuperated like other dogmatisms. In particular, it feeds into “the politicization of absolutely everything”, in the language of Ezra Klein’s spot-on analysis of GamerGate. Cultural studies is a powerful ideology because it turns culture into a field of perpetual rivalry with all the distracting drama of reality television. In so doing, it undermines deeper intellectual penetration into the structural conditions of society.

If cultural studies is the neo-Thomism of today, a dogmatist religious revival of the profound theology of the civil rights movement, perhaps it’s the theocratic invocation of ‘algorithms’ that is the new scientism. I would have more to say about it if it weren’t so similar to the old scientism.

The solution to Secular Stagnation is more gigantic stone monuments

Because I am very opinionated, I know what we should do about secular stagnation.

Secular stagnation is what economists are calling the problem of an economy that is growing incorrigibly slowly due to insufficient demand–low demand caused in part by high inequality. A consequence of this is that for the economy to maintain high levels of employment, real interest rates need to be negative. That is bad for people who have a lot of money and nothing to do with it. What, they must ask themselves in their sleepless nights, can we do with all this extra money, if not save it and earn interest?

History provides an answer for them. The great empires of the past that had more money than they knew what to do with, and lots of otherwise unemployed people, built gigantic stone monuments. The Pyramids of Egypt. Angkor Wat in Cambodia. Easter Island. Machu Picchu.

The great wonders of the world were all, in retrospect, enormous wastes of time and money. They also created full employment and will be considered amazing forever.

Chances like this do not come often in history.

Know-how is not interpretable so algorithms are not interpretable

I happened upon Hildreth and Kimble’s “The duality of knowledge” (2002) earlier this morning while writing this and have found it thought-provoking through to lunch.

What’s interesting is that it is (a) 12 years old, (b) a rather straightforward analysis of information technology, expert systems, ‘knowledge management’, etc. in light of solid post-Enlightenment thinking about the nature of knowledge, and (c) an anticipation of the problems of ‘interpretability’ that were, a couple of months ago at least, an active topic of academic discussion. Or so I hear.

This is the paper’s abstract:

Knowledge Management (KM) is a field that has attracted much attention both in academic and practitioner circles. Most KM projects appear to be primarily concerned with knowledge that can be quantified and can be captured, codified and stored – an approach more deserving of the label Information Management.

Recently there has been recognition that some knowledge cannot be quantified and cannot be captured, codified or stored. However, the predominant approach to the management of this knowledge remains to try to convert it to a form that can be handled using the ‘traditional’ approach.

In this paper, we argue that this approach is flawed and some knowledge simply cannot be captured. A method is needed which recognises that knowledge resides in people: not in machines or documents. We will argue that KM is essentially about people and the earlier technology driven approaches, which failed to consider this, were bound to be limited in their success. One possible way forward is offered by Communities of Practice, which provide an environment for people to develop knowledge through interaction with others in an environment where knowledge is created, nurtured and sustained.

The authors point out that Knowledge Management (KM) is an extension of the earlier program of Artificial Intelligence and depends on a model of knowledge which maintains that knowledge can be explicitly represented and hence stored and transferred; they propose an alternative way of thinking about things based on the Communities of Practice framework.

A lot of their analysis is about the failures of “expert systems”, which is a term that has fallen out of use but means basically the same thing as the contemporary uncomputational scholarly use of ‘algorithm’. An expert system was a computer program designed to make decisions about things. Broadly speaking, a search engine is a kind of expert system. What’s changed are the particular techniques and algorithms that such systems employ, and their relationship with computing and sensing hardware.
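For readers who never met one, here is a minimal sketch of the basic shape of a 1980s expert system: a forward-chaining rule engine over explicitly represented knowledge. The facts and rules are invented for illustration:

```python
# A toy forward-chaining rule engine: the skeleton of a 1980s-style
# expert system. Knowledge is explicit: facts and if-then rules.
facts = {"has_fever", "has_rash"}

rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_specialist"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'has_fever', 'has_rash', 'suspect_measles', 'recommend_specialist'}
```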

Here’s what Hildreth and Kimble have to say about expert systems in 2002:

Viewing knowledge as a duality can help to explain the failure of some KM initiatives. When the harder aspects are abstracted in isolation the representation is incomplete: the softer aspects of knowledge must also be taken into account. Hargadon (1998) gives the example of a server holding past projects, but developers do not look there for solutions. As they put it, ‘the important knowledge is all in people’s heads’, that is the solutions on the server only represent the harder aspects of the knowledge. For a complete picture, the softer aspects are also necessary. Similarly, the expert systems of the 1980s can be seen as failing because they concentrated solely on the harder aspects of knowledge. Ignoring the softer aspects meant the picture was incomplete and the system could not be moved from the environment in which it was developed.

However, even knowledge that is ‘in people’s heads’ is not sufficient – the interactive aspect of Cook and Seely Brown’s (1999) ‘knowing’ must also be taken into account. This is one of the key aspects to the management of the softer side to knowledge.

In 2002, this kind of argument was seen as a valuable critique of artificial intelligence and the practices based on it as a paradigm. But already by 2002 this paradigm was falling away. Statistical computing, reinforcement learning, decision tree bagging, etc. were already in use at this time. These methods are “softer” in that they don’t require the “hard” concrete representations of the earlier artificial intelligence program, which I believe by that time was already referred to as “Good Old Fashioned AI”, or GOFAI, by a number of practitioners.

(I should note–that’s a term I learned while studying AI as an undergraduate in 2005.)

So throughout the ’90s and the ’00s, if not earlier, ‘AI’ transformed into ‘machine learning’ and became the implementation of ‘soft’ forms of knowledge. These systems are built to learn to perform a task optimally based flexibly on feedback from past performance. They are in fact the cybernetic systems imagined by Norbert Wiener.

Perplexing, then, is the contemporary problem that the models created by these machine learning algorithms are opaque to their creators. These models were created using techniques that were designed precisely to solve the problems that systems based on explicit, communicable knowledge were meant to solve.

If you accept the thesis that contemporary ‘algorithms’-driven systems are well-designed implementations of ‘soft’ knowledge systems, then you get some interesting conclusions.

First, forget about interpreting the learned models of these systems and testing them for things like social discrimination, which is apparently in vogue. The right place to focus attention is on the function being optimized. All these feedback-based systems–whether they be based on evolutionary algorithms, or convergence on local maxima, or reinforcement learning, or whatever–are designed to optimize some goal function. That goal function is the closest thing you will get to an explicit representation of the purpose of the algorithm. It may change over time, but it should be coded there explicitly.
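A minimal sketch of the structural point, with all names and models hypothetical: the parameters that feedback-driven optimization produces are opaque numbers, but the goal function is a short, legible piece of code:

```python
import math
import random

def click_probability(params, user_features):
    # Stand-in for a learned model; in practice its internals are opaque.
    z = sum(p * x for p, x in zip(params, user_features))
    return 1 / (1 + math.exp(-z))

def goal(params, users):
    """The explicit objective: a toy proxy for ad revenue. This one
    function is the legible statement of the system's purpose."""
    return sum(click_probability(params, u) for u in users)

def hill_climb(users, dim=3, steps=1000):
    """Feedback-driven optimization: perturb parameters, keep whatever
    scores better on the goal."""
    params = [0.0] * dim
    best = goal(params, users)
    for _ in range(steps):
        candidate = [p + random.gauss(0, 0.1) for p in params]
        score = goal(candidate, users)
        if score > best:
            params, best = candidate, score
    return params  # opaque numbers; the goal above stays readable

users = [[random.random() for _ in range(3)] for _ in range(50)]
print(hill_climb(users))
```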

Interestingly, this is exactly the sense of ‘purpose’ that Wiener proposed could be applied to physical systems in his landmark essay, published with Rosenblueth and Bigelow, “Behavior, Purpose and Teleology.” In 1943. Sly devil.

EDIT: An excellent analysis of how fairness can be represented as an explicit goal function can be found in Dwork et al. 2011.

Second, because what these algorithms are designed to optimize is generally going to be something like ‘maximize ad revenue’ and not anything explicitly pernicious like ‘screw over the disadvantaged people’, this line of inquiry will raise some interesting questions about, for example, the relationship between capitalism and social justice. By “raise some interesting questions”, I mean, “reveal some uncomfortable truths everyone is already aware of”. Once it becomes clear that the whole discussion of “algorithms” and their inscrutability is just a way of talking about societal problems and entrenched political interests without talking about them directly, it will probably be tabled due to its political infeasibility.

That is (and I guess this is the third point) unless somebody can figure out how to encode the social justice goals of the activists/advocates explicitly in a goal function that could be implemented by one of these soft-touch expert systems. That would be rad. Whether anybody would be interested in using or investing in such a system is an important open question. Not a wide open question–the answer is probably “Not really”–but just open enough to let some air onto the embers of my idealism.
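Purely for illustration, and not Dwork et al.’s formulation (theirs is an individual-fairness constraint): one crude way such a goal function might look is a revenue proxy with an explicit penalty on group disparity:

```python
from collections import namedtuple
import math

User = namedtuple("User", ["features", "group"])

def predicted_benefit(params, features):
    # Stand-in for an opaque learned model.
    z = sum(p * x for p, x in zip(params, features))
    return 1 / (1 + math.exp(-z))

def fair_goal(params, users, lam=1.0):
    """A revenue proxy minus an explicit fairness penalty. The social
    goal lives here in the objective, not inside the opaque model."""
    revenue = sum(predicted_benefit(params, u.features) for u in users)
    by_group = {}
    for u in users:
        by_group.setdefault(u.group, []).append(predicted_benefit(params, u.features))
    means = [sum(vals) / len(vals) for vals in by_group.values()]
    disparity = max(means) - min(means)  # crude demographic-parity gap
    return revenue - lam * disparity

print(fair_goal([0.5, -0.2], [User([1.0, 0.0], "A"), User([0.0, 1.0], "B")]))
```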

Horkheimer and Wiener

[I began writing this weeks ago and never finished it. I’m posting it here in its unfinished form just because.]

I think I may be condemning myself to irrelevance by reading so many books. But as I make an effort to read up on the foundational literature of today’s major intellectual traditions, I can’t help but be impressed by the richness of their insight. Something has been lost.

I’m currently reading Norbert Wiener’s The Human Use of Human Beings (1950) and Max Horkheimer’s Eclipse of Reason (1947). The former I am reading for the Berkeley School of Information Classics reading group. Norbert Wiener was one of the foundational mathematicians of 20th century information technology, a colleague of Claude Shannon. Out of his own sense of social responsibility, he articulated his predictions for the consequences of the technology he developed in Human Use. This work was the foundation of cybernetics, an influential school of thought in the 20th century. Terrell Bynum, in his Stanford Encyclopedia of Philosophy article on “Computer and Information Ethics“, attributes to Wiener’s cybernetics the foundation of all future computer ethics. (I think that the threads go back earlier, at least through to Heidegger’s Question Concerning Technology.) It is hard to find a straight answer to the question of what happened to cybernetics. By some reports, the artificial intelligence community cut off its NSF funding in the ’60s.

Horkheimer is one of the major thinkers of the very influential Frankfurt School, the postwar social theorists at the core of intellectual critical theory. Of the Frankfurt School, perhaps the most famous in the United States is Adorno. Adorno is also the most caustic and depressed, and unfortunately much of popular critical theory now takes on his character. Horkheimer is more level-headed. Eclipse of Reason is an argument about the ways that philosophical empiricism and pragmatism became complicit in fascism. Here is an interesting quotation.

It is very interesting to read them side by side. Published only a few years apart, Wiener and Horkheimer are giants of two very different intellectual traditions. There’s little reason to expect they ever communicated (a more thorough historian would know more). But each makes sweeping claims about society, language, and technology and contextualizes them in broader intellectual awareness of religion, history and science.

Horkheimer writes about how the collapse of the Enlightenment project of objective reason has opened the way for a society ruled by subjective reason, which he characterizes as the reason of formal mathematics and scientific thinking that is neutral to its content. It is instrumental thinking in its purest, most rigorous form. His descriptions of it sound like gestures to what we today call “data science”–a set of mechanical techniques that we can use to analyze and classify anything, perfecting our understanding of technical probabilities towards whatever ends one likes.

I find this a more powerful critique of data science than recent paranoia about “algorithms”. It is frustrating to read something over sixty years old that covers the same ground as we are going over again today but with more composure. Mathematized reasoning about the world is an early 20th century phenomenon and automated computation a mid-20th century phenomenon. The disparities in power that result from the deployment of these tools were thoroughly discussed at the time.

But today, at least in my own intellectual climate, it’s common to hear a mention of “logic” with the rebuttal “whose logic?“. Multiculturalism and standpoint epistemology, profoundly important for sensitizing researchers to bias, are taken to an extreme that glorifies technical ignorance. If the foundation of knowledge is in one’s lived experience, as these ideologies purport, and one does not understand the technical logic used so effectively by dominant identity groups, then one can dismiss technical logic as merely the cultural logic of an opposing identity group. I experience the technically competent person as the Other and cannot perceive their actions as skill but only as power, and in particular power over me. Because my lived experience is my surest guide, what I experience must be so!

It is simply tragic that the education system has promoted this kind of thinking so much that it pervades even mainstream journalism. This is tragic for reasons I’ve expressed in “objectivity is powerful“. One solution is to provide more accessible accounts of the lived experience of technicality through qualitative reporting, which I have attempted in “technical work“.

But the real problem is that the kind of formal logic that is at the foundation of modern scientific thought, including its most recent manifestation, ‘data science’, is at its heart perfectly abstract and so cannot be captured by accounts of observed practices or lived experience. It is reason or thought. Is it disembodied? Not exactly. But at least according to constructivist accounts of mathematical knowledge, which occupy a fortunate dialectical position in this debate, mathematical insight is built from embodied phenomenological primitives but, through their psychological construction, becomes abstract. This process makes it possible for people to learn abstract principles such as the mathematical theory of information on which so much of the contemporary telecommunications and artificial intelligence apparatus depends. These are the abstract principles with which the mathematician Norbert Wiener was so intimately familiar.

Privacy, trust, context, and legitimate peripheral participation

Privacy is important. For Nissenbaum, what’s essential to privacy is control over context. But what is context?

Using Luhmann’s framework of social systems–ignoring for a moment e.g. Habermas’ criticism and accepting the naturalized, systems theoretic understanding of society–we would have to see a context as a subsystem of the total social system. In so far as the social system is constituted by many acts of communication–let’s visualize this as a network of agents, whose edges are acts of communication–then a context is something preserved by configurations of agents and the way they interact.

Some of the forces that shape a social system will be exogenous. A river dividing two cities or, more abstractly, distance. In the digital domain, the barriers of interoperability between one virtual community infrastructure and another.

But others will be endogenous, formed from the social interactions themselves. An example is the gradual deepening of trust between agents based on a history of communication. Perhaps early conversations are formal, stilted. Later, an agent takes a risk, sharing something more personal–more private? It is reciprocated. Slowly, a trust bond, an evinced sharing of interests and mutual investment, becomes the foundation of cooperation. The Prisoner’s Dilemma is solved the old fashioned way.
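That “old fashioned way” can be sketched as iterated play with reciprocity. A toy simulation with the standard payoff matrix:

```python
# Iterated Prisoner's Dilemma: trust as reciprocity over repeated play.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first; thereafter mirror the partner's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(a, b, rounds=10):
    hist_a, hist_b = [], []  # each agent sees the other's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): trust sustains cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): trust withdrawn after one betrayal
```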

Following Carey’s logic that communication as mere transmission when sustained over time becomes communication as ritual and the foundation of community, we can look at this slow process of trust formation as one of the ways that a context, in Nissenbaum’s sense, perhaps, forms. If Anne and Betsy have mutually internalized each other’s interests, then information flow between them will by and large support the interests of the pair, and Betsy will have low incentives to reveal private information in a way that would be detrimental to Anne.

Of course this is a huge oversimplification in lots of ways. One way is that it does not take into account the way the same agent may participate in many social roles or contexts. Communication is not a single edge from one agent to another in many circumstances. Perhaps the situation is better represented as a hypergraph. One reason why this whole domain may be so difficult to reason about is the sheer representational complexity of modeling the situation. It may require the kind of mathematical sophistication used by quantum physicists. Why not?
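A minimal sketch of the hypergraph idea, with hypothetical participants: each act of communication is a set of participants rather than a pair, and recurring sets hint at contexts:

```python
# Communications as hyperedges: each act of communication links a *set*
# of participants, not just a pair. Recurring hyperedges hint at contexts.
from collections import Counter

communications = [
    frozenset({"anne", "betsy"}),
    frozenset({"anne", "betsy"}),
    frozenset({"anne", "betsy", "carol"}),   # same people, different context?
    frozenset({"carol", "dmitri", "elena"}),
]

recurrence = Counter(communications)
for participants, count in recurrence.items():
    print(sorted(participants), count)
```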

Not having that kind of insight into the problem yet, I will continue to sling what the social scientists call ‘theory’. Let’s talk about an existing community of practice, where the practice is a certain kind of communication. A community of scholars. A community of software developers. Weird Twitter. A backchannel mailing list coordinating a political campaign. A church.

According to Lave and Wenger, the way newcomers gradually become members and oldtimers of a community of practice is legitimate peripheral participation. This is consistent with the model described above characterizing the growth of trust through gradually deepening communication. Peripheral participation is low-risk. In an open source context, this might be as simple as writing a question to the mailing list or filing a bug report. Over time, the agent displays good faith and competence. (I’m disappointed to read just now that Wenger ultimately abandoned this model in favor of a theory of dualities. Is that a Hail Mary for empirical content for the theory? Also interested to follow links on this topic to a citation of von Krogh 1998, whose later work found its way onto my Open Collaboration and Peer Production syllabus. It’s a small world.

I’ve begun reading as I write this fascinating paper by Hildreth and Kimble 2002 and have now lost my thread. Can I recover?)

Some questions:

  • Can this process of context-formation be characterized empirically through an analysis of e.g. the timing dynamics of communication (cf. Thomas Maillart’s work)? If so, what does that tell us about the design of information systems for privacy? (A minimal sketch of one such analysis follows this list.)
  • What about illegitimate peripheral participation? Arguably, this blog is that kind of participation–it participates in a form of informal, unendorsed quasi-scholarship. It is a tool of context and disciplinary collapse. Is that a kind of violation of privacy? Why not?
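On the first question, here is a minimal sketch of one possible operationalization, assuming only a list of message timestamps: Goh and Barabási’s burstiness coefficient over inter-arrival times:

```python
import statistics

def burstiness(timestamps):
    """Goh & Barabasi's burstiness coefficient B = (sigma - mu) / (sigma + mu)
    over inter-arrival times: B near -1 is regular, near 0 is Poisson-like,
    positive and toward 1 is bursty."""
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    mu = statistics.mean(gaps)
    sigma = statistics.pstdev(gaps)
    return (sigma - mu) / (sigma + mu)

# Hypothetical message times (in hours): regular check-ins vs. bursty threads.
print(burstiness([0, 1, 2, 3, 4, 5]))           # -1.0: perfectly regular
print(burstiness([0, 0.1, 0.2, 0.3, 24, 24.1])) # positive: bursty
```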

Horkheimer, pragmatism, and cognitive ecology

In Eclipse of Reason, Horkheimer rips into the American pragmatists Peirce, James, and Dewey like nobody I’ve ever read. These figures are normally seen as reasonable and benign; Horkheimer paints them as ignorant and as undermining the whole social order.

The reason is that he believes that they reduce epistemology to a kind of instrumentalism. But that’s selling their position a bit short. Dewey’s moral epistemology is pragmatist in that it is driven by particular, situated interests and concerns, but these are ingredients to moral inquiry and not conclusions in themselves.

So to the extent that Horkheimer is looking to dialectic reason as the grounds for uncovering objective truths, Dewey’s emphasis on establishing the institutions that allow for meaningful moral inquiry seems consistent with Horkheimer’s view. The difference is in whether the dialectics are transcendental (as for Kant) or immanent (as for Hegel?).

The tension around objectivity in epistemology that comes up in the present academic environment is that all claims to objectivity are necessarily situated, and this situatedness is raised as a challenge to their objective status. If the claims or their justification depend on conditions that exclude some subjects (as they no doubt do; settling whether dialectical reason is transcendental or immanent requires opportunities for reflection that are rare, i.e. privileged), can these conclusions be said to be true for all subjects?

The Friendly AI research program more or less assumes that yes, this is the case. Yudkowsky’s notion of Coherent Extrapolated Volition, the position arrived at by simulated, idealized reasoners, is a 21st-century remake of Peirce’s limiting consensus of the rational. And yet the cry from standpoint theorists and certain anthropologically inspired disciplines is a recognition of the validity of partial perspectives. Haraway, for example, calls for an alliance of partial perspectives. Critical and adversarial design folks appear to have picked up this baton. Their vision is of a future of constantly vying (“agonistic”) partiality, with no perspective presuming to be settled, objective or complete.

If we make cognitivist assumptions about the computationality of all epistemic agents, then we are forced to acknowledge the finiteness of all actually existing reasoning. Finite capacity and situatedness become two sides of the same coin. Partiality, then, becomes a function of both one’s place in the network (eccentricity vs. centrality) and one’s capacity to integrate information from the periphery. Those locations in the network most able to valuably integrate information, whether they be Google’s data centers or the conversational hubs of research universities, are more impartial, more objective. But they can never be the complete system. Because of their finite capacity, their representations can at best be lossy compressions of the whole.
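As a toy version of this point, on an invented graph and using the networkx library: positions with low eccentricity and high closeness centrality are the candidate epistemic hubs:

```python
import networkx as nx

# An invented communication network: a hub connected to a periphery.
G = nx.Graph([("hub", f"node{i}") for i in range(5)]
             + [("node0", "node1"), ("node3", "node4")])

# Lower eccentricity / higher closeness = better positioned to integrate
# information from the rest of the network.
print(nx.eccentricity(G))
print(nx.closeness_centrality(G))
```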

Horkheimer dreams of an objective truth obtainable by a single subject through transcendental dialectic. Perhaps he thinks this is unattainable today (I have to read on). But if there’s hope in this vision, it seems to me it must come from one of two possibilities:

  • The fortuitously fractal structure of the sociotechnical world such that an adequate representation of it can be maintained in its epistemic hubs through quining, or
  • A generative grammar or modeling language of cognitive ecology such that we can get insights into the larger interactive system from toy models, and apply these simplified models pragmatically in specific cases. For this to work and not suffer the same failures as theoretical economics, these models need to have empirical content. Something like Wolpert, Lee, and Bono’s Predictive Game Theory (for which I just discovered they’ve released a Python package…cool!) may be critical here.

Eclipse of Reason

I’m starting to read Max Horkheimer’s Eclipse of Reason. I have had high hopes for it and have not been disappointed.

The distinction Horkheimer draws in the first section, “Means and Ends”, is between subjective reason and objective reason.

Subjective reason is the kind of reasoning that is used to most efficiently achieve one’s goals, whatever they are. Writing even as early as 1947, Horkheimer notes that subjective reason has become formalized and reduced to the computation of technical probabilities. He is most likely referring to the formalization of logic in the Anglophone tradition by Russell and Whitehead and its use in early computer science. (See Imre Lakatos and programming as dialectic for more background on this, as well as resonant material on where this is going.)

Objective reason is, within a simple “means/ends” binary, most simply described as the reasoning of ends. I am not very far through the book, and Horkheimer is so far unspecific about what this entails in practice, articulating it instead as an idea that has fallen out of use. He associates it with Platonic forms. With logos–a word that becomes especially charged for me around Christmas and whose religious connotations are certainly intertwined with the idea of objectivity. Since it is objective and not bound to a particular subject, the rationality of correct ends is the rationality of the whole world or universe, its proper ordering or harmony. Humanity’s understanding of it is not a technical accomplishment so much as an achievement of revelation or wisdom, achieved–and I think this is Horkheimer’s Hegelian/Marxist twist–dialectically.

Horkheimer in 1947 believes that subjective reason, and specifically its formalization, have undermined objective reason by exposing its mythological origins. While we have countless traditions still based in old ideologies that give us shared values and norms simply out of habit, they have been exposed as superstition. And so while our ability to achieve our goals has been amplified, our ability to have goals with intellectual integrity has hollowed out. This is a crisis.

One reason this is a crisis is because (to paraphrase) the functions once performed by objectivity or authoritarian religion or metaphysics are now taken on by the reifying apparatus of the market. This is a Marxist critique that is apropos today.

It is not hard to see that Horkheimer’s critique of “formalized subjective reason” extends to the wide use of computational statistics or “data science” in the vast ways it is now used. Moreover, it’s easy to see how the “Internet of Things” and everything else instrumented–the Facebook user interface, this blog post, everything else–participates in this reifying market apparatus. Every critique of the Internet and the data economy from the past five years has just been a reiteration of Horkheimer, whose warning came loud and clear in the ’40s.

Moreover, the anxieties of the “apocalyptic libertarians” of Sam Frank’s article, the Less Wrong theorists of friendly and unfriendly Artificial Intelligence, are straight out of the old books of the Frankfurt School. Ironically, today’s “rationalists” have no awareness of the broader history of rationality. Rather, their version of rationality begins with von Neumann and ends with two kinds of rationality: “epistemic rationality”, about determining correct beliefs, and “instrumental rationality”, about correctly reaching one’s ends. Both are formal and subjective, in Horkheimer’s analysis; they don’t even have a word for ‘objective reason’, it has so far fallen away from their awareness of what is intellectually possible.

But the consequence is that this same community lives in fear of the unfriendly AI–a superintelligence driven by a “utility function” so inhuman that it creates a dystopia. Unarmed with the tools of Marxist criticism, they are unable to see the present economic system as precisely that inhuman superintelligence, a monster bricolage of formally reasoning market apparati.

For Horkheimer (and I’m talking out of my butt a little here because I haven’t read enough of the book to really know; I’m going on some context I’ve read up on earlier) the formalization and automation of reason is part of the problem. Having a computer think for you is very different from actually thinking. The latter is psychologically transformative in ways that the former is not. It is hard for me to tell whether Horkheimer would prefer things to go back the way they were, or if he thinks that we must resign ourselves to a bleak inhuman future, or what.

My own view, which I am worried is deeply quixotic, is that a formalization of objective reason would allow us to achieve its conditions faster. You could say I’m a logos-accelerationist. However, if the way to achieve objective reason is dialectically, then this requires a mathematical formalization of dialectic. That’s shooting the moon.

This is not entirely unlike the goals and position of MIRI in a number of ways except that I think I’ve got some deep intellectual disagreements about their formulation of the problem.

Reflecting on “Technoscience and Expressionism” by @FractalOntology

I’ve come across Joseph Weissman’s (@FractalOntology) “Technoscience and Expressionism” and am grateful for it, as it’s filled me in on a philosophical position that I missed the first time around, accelerationism. I’m not a Deleuzian and prefer my analytic texts to plod, so I can’t say I understood all of the essay. On the other hand, I gather the angle of this kind of philosophizing is intentionally psychotherapeutic and hence serves an artistic/literary function rather than one that explicitly guides praxis.

I am curious about the essay because I would like to see a thorough analysis of the political possibilities for the 21st century that gets past 20th century tropes. The passions of journalistic and intellectual debate have an atavistic tendency due to a lack of imagination that I would like to avoid in my own life and work.

Accelerationism looks new. It was pronounced in a manifesto, which is a good start.

Here is a quote from it:

Democracy cannot be defined simply by its means — not via voting, discussion, or general assemblies. Real democracy must be defined by its goal — collective self-​mastery. This is a project which must align politics with the legacy of the Enlightenment, to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves. We need to posit a collectively controlled legitimate vertical authority in addition to distributed horizontal forms of sociality, to avoid becoming the slaves of either a tyrannical totalitarian centralism or a capricious emergent order beyond our control. The command of The Plan must be married to the improvised order of The Network.

Hell yeah, the Enlightenment! Sign me up!

The manifesto calls for an end to the left’s emphasis on local action, transparency, and direct democracy. Rather, it calls for a muscular hegemonic left that fully employs and deploys “technoscience”.

It is good to be able to name this political tendency and distinguish it from other left tendencies. It is also good to distinguish it from “right accelerationism”, which Weissman identifies with billionaires who want to create exurb communities.

A left-accelerationist impulse is today playing out dramatically against a right-accelerationist one. And the right-accelerationists are about as dangerous as you may imagine. With silicon valley VCs, and libertarian technologists more generally reading Nick Land on geopolitical fragmentation, the reception or at least receptivity to hard-right accelerants seems problematically open (and the recent $2M campaign proposing the segmentation of California into six microstates seems to provide some evidence for this.) Billionaires consuming hard-right accelerationist materials arguing for hyper-secessionism undoubtedly amounts to a critically dangerous situation. I suspect that the right-accelerationist materials, perspectives, affect, energy expresses a similar shadow, if it is not partly what is catalyzing the resurgence of micro-fascisms elsewhere (and macro ones as well — perhaps most significant to my mind here is the overlap of right-acceleration with white nationalism, and more generally what is deplorably and disingenuously called “race realism” — and is of course simply racism; consider Marine le Pen’s fascist front, which recently won 25% of the seats in the French parliament, UKIP’s resurgence in Great Britain; while we may not hear accelerationist allegiances and watchwords explicitly, the political implications and continuity is at the very least somewhat unsettling…)

There is an unfortunate conflation of several different points of view here. It is too easy to associate racism, wealth, and libertarianism, as these are the nightmares of the left’s political imagination. If ideological writing is therapeutic, a way of articulating one’s dreams, then this is entirely appropriate, with a caveat: every nightmare is a creation of one’s own psychology more than a reflection of the real world.

The same elisions are made by Sam Frank in his recent article thematizing Silicon Valley libertarianism, friendly artificial intelligence research, and contemporary rationalism as a self-help technique. There are interesting organizational ties between these institutions that are validly worth investigating but it would be lazy to collapse vast swathes of the intellectual spectrum into binaries.

In March 2013 I wrote about the Bay Area Rationalists:

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.

I would like to say “I called it”–Sam Frank has recently written just such a sensationalist, exploitative piece in Harper’s Magazine. It is thoroughly enjoyable and I wouldn’t say it’s inaccurate. But I don’t think this is the best way to get to know these people. A better one is to attend a CFAR workshop. It used to be that you could avoid the fee with a promise to volunteer, and there was a money-back guarantee that extended to one’s promise to volunteer. If that’s still the case, then one can essentially attend for free.

Another way to engage this community intellectually, which I would encourage the left accelerationists to do because it’s interesting, is to start participating on LessWrong. For some reason this community is not subject to ideological raids like so many other community platforms. I think it could stand for an influx of Deleuze.

Ultimately the left/right divide comes down to a question of distribution of resources and/or surplus. Left accelerationist tactics appear from here to be a more viable way of seizing resources than direct democracy. However, the question is whether accelerationist tactics inevitably result in inequalities that create control structures of the kind originally objected to. In other words, this may simply be politics as usual and nothing radical at all.

So there’s an intersection between these considerations (accelerationist vs. … decelerationism? Capital accumulation vs. capital redistribution?) and the question of decentralization of decision-making process (is that the managerialism vs. multistakeholderism divide?) whose logic is unclear to me. I want to know which affinities are necessary and which are merely contingent.

Imre Lakatos and programming as dialectic

My dissertation is about the role of software in scholarly communication. Specifically, I’m interested in the way software code is itself a kind of scholarly communication, and how the informal communications around software production represent and constitute communities of scientists. I see science as a cognitive task accomplished by the sociotechnical system of science, including both scientists and their infrastructure. Looking particularly at scientists’ use of communications infrastructure such as email, issue trackers, and version control, I hope to study the mechanisms of the scientific process much like a neuroscientist studies the mechanisms of the mind by studying neural architecture and brainwave activity.

To get a grip on this problem I’ve been building BigBang, a tool for collecting data from open source projects and readying it for scientific analysis.
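This is not BigBang’s actual interface, just a minimal standard-library sketch of the kind of collection step involved: parsing an mbox archive of a project mailing list into records ready for analysis:

```python
# Not BigBang's real API: a minimal stdlib sketch of one collection step,
# turning an mbox mailing-list archive into (sender, date) records.
import mailbox
from email.utils import parsedate_to_datetime

def collect(mbox_path):
    records = []
    for msg in mailbox.mbox(mbox_path):
        try:
            records.append((msg["From"], parsedate_to_datetime(msg["Date"])))
        except (TypeError, ValueError):
            continue  # skip messages with missing or malformed headers
    return records

# records = collect("project-dev.mbox")  # hypothetical archive file
```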

I have also been reading background literature to give my dissertation work theoretical heft and to procrastinate from coding. This is why I have been reading Imre Lakatos’ Proofs and Refutations (1976).

Proofs and Refutations is a brilliantly written book about the history of mathematical proof. In particular, it is an analysis of informal mathematics through an investigation of the letters written by mathematicians working on proofs about the Euler characteristic of polyhedra in the 18th and 19th centuries.
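The conjecture at the book’s center is Euler’s formula for polyhedra:

```latex
V - E + F = 2 \qquad \text{(cube: } 8 - 12 + 6 = 2\text{)}
```

The drama comes from counterexamples such as the hollow cube (a cube with a cubical cavity), for which V − E + F = 16 − 24 + 12 = 4: a refutation that forces the disputants to refine what they mean by ‘polyhedron’, ‘proof’, and ‘counterexample’.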

Whereas in the early 20th century, based on the work of Russell and Whitehead and others, formal logic was axiomatized, prior to this mathematical argumentation had less formal grounding. As a result, mathematicians would argue not just substantively about the theorem they were trying to prove or disprove, but also about what constitutes a proof, a conjecture, or a theorem in the first place. Lakatos demonstrates this by condensing 200+ years of scholarly communication into a fictional, impassioned classroom dialog where characters representing mathematicians throughout history banter about polyhedra and proof techniques.

What’s fascinating is how convincingly Lakatos presents the progress of mathematical understanding as an example of dialectical logic. Though he doesn’t use the word “dialectical” as far as I’m aware, he tells the story of the informal logic of pre-Russellian mathematics through dialog. But this dialog is designed to capture the timeless logic behind what’s been said before. It takes the reader through the thought process of mathematical discovery in abbreviated form.

I’ve had conversations with serious historians and ethnographers of science who would object strongly to the idea of a history of a scientific discipline reflecting a “timeless logic”. Historians are apt to think that nothing is timeless. I’m inclined to think that the objectivity of logic persists over time much the same way that it persists over space and between subjects, even illogical ones, hence its power. These are perhaps theological questions.

What I’d like to argue (but am not sure how) is that the process of informal mathematics presented by Lakatos is strikingly similar to that used by software engineers. The process of selecting a conjecture, then of writing a proof (which for Lakatos is a logical argument whether or not it is sound or valid), then having it critiqued with counterexamples, which may either be global (counter to the original conjecture) or local (counter to a lemma), then modifying the proof, then perhaps starting from scratch based on a new insight… all this reads uncannily like the process of debugging source code.
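A toy illustration of the correspondence, with an invented function: the “proof” is the code, a “counterexample” is a refuting input, and the patch repairs the hidden lemma:

```python
# A toy mirror of Lakatos's cycle. Conjecture: "this function computes
# the maximum of a list."
def maximum(xs):          # the "proof": an argument, not yet sound
    best = 0
    for x in xs:
        if x > best:
            best = x
    return best

assert maximum([1, 5, 3]) == 5   # consistent with the conjecture

# Global counterexample (a "bug report"): all-negative input refutes it,
# since maximum([-3, -1]) == 0 while the true maximum is -1.

def maximum(xs):          # the proof, patched after the refutation
    best = xs[0]          # hidden lemma repaired: seed from the data itself
    for x in xs[1:]:
        if x > best:
            best = x
    return best

assert maximum([-3, -1]) == -1
```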

The argument for this correspondence is strengthened by later work in the theory of computation and complexity theory. I learned this theory so long ago that I forget whom to attribute it to, but much of the foundational work in computer science was the establishment of a correspondence between classes of formal logic and classes of programming languages. So in a sense it’s uncontroversial within computer science to consider programs to be proofs.
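In miniature, and only to give the flavor of that correspondence: under the types-as-propositions reading, a function of the following type is a proof that implication is transitive:

```python
# Programs as proofs, in miniature: inhabiting this type proves that
# from A -> B and B -> C one can infer A -> C.
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))
```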

As I write I am unsure whether I’m simply restating what’s obvious to computer scientists in an antiquated philosophical language (a danger I feel every time I read a book, lately) or if I’m capturing something that could be an interesting synthesis. But my point is this: that if programming language design and the construction of progressively more powerful software libraries is akin to the expanding of formal mathematical knowledge from axiomatic grounds, then the act of programming itself is much more like the informal mathematics of pre-Russellian mathematics. Specifically, in that it is unaxiomatic and proofs are in play without necessarily being sound. When we use a software system, we are depending necessarily on a system of imperfected proofs that we fix iteratively through discovered counterexamples (bugs).

Is it fair to say, then, that whereas the logic of software is formal, deductive logic, the logic of programming is dialectical logic?

Bear with me; let’s presume it is. That’s a foundational idea of my dissertation work. Proving or disproving it may or may not be out of scope of the dissertation itself, but it’s where it’s ultimately headed.

The question is whether it is possible to develop a formal understanding of dialectical logic through a scientific analysis of software collaboration (see a mathematical model of collective creativity). If this could be done, then we could build better software or protocols to assist this dialectical process.
