Digifesto

Varela’s modes of explanation and the teleonomic

I’m now diving deep into Francisco Varela’s Principles of Biological Autonomy (1979). Chapter 8 draws on his paper with Maturana, “Mechanism and biological explanation” (1972). Chapter 9 draws heavily from his paper, “Describing the Logic of the Living: adequacies and limitations of the idea of autopoiesis” (1978).

I am finding this work very enlightening. Somehow it bridges from my interests in philosophy of science right into my current work on privacy by design. I think I will find a way to work this into my dissertation after all.

Varela has a theory of different modes of explanation of phenomena.

One form of explanation is operational explanation. The categories used in these explanations are assumed to be components in the system that generated the phenomena. The components are related to each other in a causal and lawful (nomic) way. These explanations are valued by science because they are designed so that observers can best predict and control the phenomena under study. This corresponds roughly to what Habermas identifies as technical knowledge in Knowledge and Human Interests. In an operational explanation, the ideas of purpose or function have no explanatory value; rather the observer is free to employ the system for whatever purpose he or she wishes.

Another form of explanation is symbolic explanation, which is a more subtle and difficult idea. It is perhaps better associated with phenomenology and the social scientific methods that build on it, such as ethnomethodology. Symbolic explanations, Varela argues, are complementary to operational explanations and are necessary for a complete description of “living phenomenology”, which I believe Varela imagines as a kind of observer-inclusive science of biology.

To build up to his idea of the symbolic explanation, Varela first discusses an earlier form of explanation, now out of fashion: teleological explanation. Teleological explanations do not support manipulation, but rather “understanding, communication of intelligible perspective in regard to a phenomenal domain”. Understanding the “what for” of a phenomenon, what its purpose is, does not tell you how to control the phenomenon. While it may help regulate one’s expectations, Varela does not see this as its primary purpose. Communicability is what motivates teleological explanation. This resonates with Habermas’s idea of hermeneutic knowledge, which is accomplished through intersubjective understanding.

Varela does not see these modes of explanation as exclusive. Operational explanations assume that “phenomena occur through a network of nomic (lawlike) relationships that follow one another. In the symbolic, communicative explanation the fundamental assumption is that phenomena occur through a certain order or pattern, but the fundamental focus of attention is on certain moments of such an order, relative to the inquiring community.” But these modes of explanation are fundamentally compatible.

“If we can provide a nomic basis to a phenomenon, an operational description, then a teleological explanation only consists of putting in parenthesis or conceptually abbreviating the intermediate steps of a chain of causal events, and concentrating on those patterns that are particularly interesting to the inquiring community. Accordingly, Pittendrigh introduced the term teleonomic to designate those teleological explanations that assume a nomic structure in the phenomena, but choose to ignore intermediate steps in order to concentrate on certain events (Ayala, 1970). Such teleologic explanations introduce finalistic terms in an explanation while assuming their dependence in some nomic network, hence the name teleo-nomic.”

A symbolic explanation that is consistent with operational theory, therefore, is a teleonomic explanation: it chooses to ignore some of the operations in order to focus on relationships that are important to the observer. There are coherent patterns of behavior which the observer chooses to pay attention to. Varela does not use the word ‘abstraction’, though as a computer scientist I am tempted to. His domains of interest, however, are complex physical systems often represented as dynamic systems, not the kind of well-defined chains of logical operations familiar from computer programming. In fact, one of the upshots of Varela’s theory of the symbolic explanation is a criticism of naive uses of “information” in causal explanations that are typical of computer scientists.

“This is typical in computer science and systems engineering, where information and information processing are in the same category as matter and energy. This attitude has its roots in the fact that systems ideas and cybernetics grew in a technological atmosphere that acknowledged the insufficiency of the purely causalistic paradigm (who would think of handling a computer through the field equations of thousands of integrated circuits?), but had no awareness of the need to make explicit the change in perspective taken by the inquiring community. To the extent that the engineering field is prescriptive (by design), this kind of epistemological blunder is still workable. However, it becomes unbearable and useless when exported from the domain of prescription to that of description of natural systems, in living systems and human affairs.”

This form of critique makes its way into a criticism of artificial intelligence by Winograd and Flores, presumably through the Chilean connection.
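To make the analogy with abstraction concrete in the idiom I know best, here is a minimal Python sketch of the distinction as I read it. Everything in it is my own illustration, not Varela’s: an ‘operational’ description exposes every lawlike intermediate step, while a ‘teleonomic’ one wraps the same chain and reports only the endpoint the inquiring community cares about.

```python
# Illustrative sketch only: mapping Varela's operational/teleonomic distinction
# onto programming abstraction. The names and the toy transition rule are mine.

def operational_trajectory(state, steps=10):
    """'Operational' description: every intermediate, lawlike transition is visible."""
    trajectory = [state]
    for _ in range(steps):
        state = 0.5 * state + 1.0   # stand-in for a nomic (lawlike) transition rule
        trajectory.append(state)
    return trajectory               # the observer sees the whole causal chain

def teleonomic_summary(state, steps=10):
    """'Teleonomic' description: the same nomic chain with its intermediate steps
    put in parentheses; only the outcome of interest to the observer remains."""
    return operational_trajectory(state, steps)[-1]

if __name__ == "__main__":
    print(operational_trajectory(0.0, 5))   # the full chain of intermediate states
    print(teleonomic_summary(0.0, 5))       # the abbreviated, endpoint-only view
```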

equilibrium representation

We must keep in mind not only the capacity of state simplifications to transform the world but also the capacity of the society to modify, subvert, block, and even overturn the categories imposed upon it. Here it is useful to distinguish what might be called facts on paper from facts on the ground…. Land invasions, squatting, and poaching, if successful, represent the exercise of de facto property rights which are not represented on paper. Certain land taxes and tithes have been evaded or defied to the point where they have become dead letters. The gulf between land tenure facts on paper and facts on the ground is probably greatest at moments of social turmoil and revolt. But even in more tranquil times, there will always be a shadow land-tenure system lurking beside and beneath the official account in the land-records office. We must never assume that local practice conforms with state theory. – Scott, Seeing Like a State, 1998

I’m continuing to read Seeing Like a State and am finding in it a compelling statement of a state of affairs that is coded elsewhere into the methodological differences between social science disciplines. In my experience, much of the tension between the social sciences can be explained in terms of the differently interested uses of social science. Among these uses are the development of what Scott calls “state theory” and the articulation, recognition, and transmission of “local practice”. Contrast neoclassical economics with the anthropology of Jean Lave as examples of what I’m talking about. Most scholars are willing to stop here: they choose their side and engage in a sophisticated form of class warfare.

This is disappointing from the perspective of science per se, as a pursuit of truth. To see where there’s a place for such work in the social sciences, we need only look to the very book in front of us, Seeing Like a State, which stands outside of both state theory and local practices to explain a perspective that is neither, but rather informed by a study of both.

In terms of the ways that knowledge is used in support of human interests, in the Habermasian sense (see some other blog posts), we can talk about Scott’s “state theory” as a form of technical knowledge, aimed at facilitating power over the social and natural world. What he discusses is the limitation of technical knowledge in mastering the social, due to complexity and differentiation in local practice. Much of this complexity is due to the politicization of language and representation that occurs in local practice. Standard units of measurement and standard terminology are tools of state power; efforts to guarantee them are confounded again and again by local interests. This disagreement is a rejection of the possibility of hermeneutic knowledge, which is to say linguistic agreement about norms.

In other words, Scott is pointing to a phenomenon where because of the interests of different parties at different levels of power, there’s a strategic local rejection of inter-subjective agreement. Implicitly, agreeing even on how to talk with somebody with power over you is conceding their power. The alternative is refusal in some sense. A second order effect of the complexity caused by this strategic disagreement is the confounding of technical mastery over the social. In Scott’s terminology, a society that is full of strategic lexical disagreement is not legible.

These are generalizations reflecting tendencies in society across history. Nevertheless, merely by asserting them I am arguing that they have a kind of special status that is not itself caught up in the strategic subversions of discourse that make other forms of expertise foolish. There must be some forms of representation that persist despite the verbal disagreements and differently motivated parties that use them.

I’d like to call these kinds of representations, which somehow are technically valid enough to be useful and robust to disagreement, even politicized disagreement, equilibrium representations. The idea here is that despite a lot of cultural and epistemic churn, there are still attractor states in the complex system of knowledge production. At equilibrium, these representations will be stable and serve as the basis for communication between different parties.

I’ve posited equilibrium representations hypothetically, without yet having a proof or an example of one that actually exists. My point is to have a useful concept that acknowledges the kinds of epistemic complexities raised by Scott while identifying the conditions under which a modernist epistemology could prevail despite those complexities.
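To make the idea slightly less hypothetical, here is a toy simulation, entirely my own construction and not drawn from Scott, of what such an attractor could look like: agents start with divergent terms for the same referent, interact pairwise, and (under these artificial assumptions) tend to converge on a shared term that no central authority imposed.

```python
import random

# A hypothetical sketch of an 'equilibrium representation': a minimal naming-game
# style model in which a shared term can emerge as an attractor despite there
# being no official codification. All parameters are arbitrary choices of mine.

random.seed(0)

N_AGENTS = 50
vocab = [{f"term_{i}"} for i in range(N_AGENTS)]   # each agent starts with its own term

def interact(speaker, hearer):
    word = random.choice(sorted(vocab[speaker]))
    if word in vocab[hearer]:
        # successful communication: both parties collapse onto the shared term
        vocab[speaker] = {word}
        vocab[hearer] = {word}
    else:
        # failed communication: the hearer adds the speaker's term to its repertoire
        vocab[hearer].add(word)

for step in range(20000):
    a, b = random.sample(range(N_AGENTS), 2)
    interact(a, b)

distinct = {w for v in vocab for w in v}
print(f"distinct terms still in circulation: {len(distinct)}")
```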

 

Seeing Like a State: problems facing the code rural

I’ve been reading James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed for, once again, Classics. It’s just as good as everyone says it is, and in many ways the counterpoint to James Beniger’s The Control Revolution that I’ve been looking for. It’s also highly relevant to work I’m doing on contextual integrity in privacy.

Here’s a passage I read on the subway this morning that talks about the resistance to codification of rural land use customs in Napoleonic France.

In the end, no postrevolutionary rural code attracted a winning coalition, even amid a flurry of Napoleonic codes in nearly all other realms. For our purposes, the history of the stalemate is instructive. The first proposal for a code, which was drafted in 1803 and 1807, would have swept away most traditional rights (such as common pasturage and free passage through others’ property) and essentially recast rural property relations in the light of bourgeois property rights and freedom of contract. Although the proposed code prefigured certain modern French practices, many revolutionaries blocked it because they feared that its hands-off liberalism would allow large landholders to recreate the subordination of feudalism in a new guise.

A reexamination of the issue was then ordered by Napoleon and presided over by Joseph Verneilh Puyrasseau. Concurrently, Deputy Lalouette proposed to do precisely what I supposed, in the hypothetical example, was impossible. That is, he undertook to systematically gather information about all local practices, to classify and codify them, and then to sanction them by decree. The decree in question would become the code rural. Two problems undid this charming scheme to present the rural populace with a rural code that simply reflected its own practices. The first difficulty was in deciding which aspects of the literally “infinite diversity” of rural production relations were to be represented and codified. Even in a particular locality, practices varied greatly from farm to farm over time; any codification would be partly arbitrary and artificially static. To codify local practices was thus a profoundly political act. Local notables would be able to sanction their preferences with the mantle of law, whereas others would lose customary rights that they depended on. The second difficulty was that Lalouette’s plan was a mortal threat to all state centralizers and economic modernizers for whom a legible, national property regime was the precondition of progress. As Serge Aberdam notes, “The Lalouette project would have brought about exactly what Merlin de Douai and the bourgeois, revolutionary jurists always sought to avoid.” Neither Lalouette’s nor Verneilh’s proposed code was ever passed, because they, like their predecessor in 1807, seemed to be designed to strengthen the hand of the landowners.

(Emphasis mine.)

The moral of the story is that just as the codification of a land map will be inaccurate and politically contested for its biases, so too will a codification of customs and norms suffer the same fate. As Borges’ fable On Exactitude in Science mocks the ambition of physical science, we might see the French attempts at a code rural as a mockery of the ambition of computational social science.

On the other hand, Napoleonic France did not have the sweet ML we have today. So all bets are off.

three kinds of social explanation: functionalism, politics, and chaos

Roughly speaking, I think there are three kinds of social explanation. I mean “explanation” in a very thick sense; an explanation is an account of why some phenomenon is the way it is, grounded in some kind of theory that could be used to explain other phenomena as well. To say there are three kinds of social explanation is roughly equivalent to saying there are three ways to model social processes.

The first kind of social explanation is functionalism. This explains some social phenomenon in terms of the purpose that it serves. Generally speaking, fulfilling this purpose is seen as necessary for the survival or continuation of the phenomenon. Maybe it simply is the continued survival of the social organism that is its purpose. A kind of agency, though probably very limited, is ascribed to the entire social process. The activity internal to the process is then explained by the purpose that it serves.

The second kind of social explanation is politics. Political explanations focus on the agencies of the participants within the social system and reject the unifying agency of the whole. Explanations based on class conflict or personal ambition are political explanations. Political explanations of social organization make it out to be the result of a complex of incentives and activity. Where there is social regularity, it is because of the political interests of some of its participants in the continuation of the organization.

The third kind of social explanation is hardly an explanation at all. It is explanation by chaos. This sort of explanation is quite rare, as it does not provide much of the psychological satisfaction we like from explanations. I mention it here because I think it is an underutilized mode of explanation. In large populations, much of the activity that happens will do so by chance. Even large organizations may form according to stochastic principles that do not depend on any real kind of coordinated or purposeful effort.

It is important to consider chaotic explanations of social processes when we consider the limits of political expertise. If we have a low opinion of any particular person’s ability to understand their social environment and act strategically, then we must accept that many of their “politically” motivated actions will be based on misconceptions and will therefore be, in an objective sense, random. At this point political explanations become facile, and social regularity has to be explained either in terms of the ability of social organizations qua organizations to survive, or the organization must be explained in a deflationary way: i.e., that the organization is not really there, but just in the eye of the beholder.
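To illustrate how far a chaotic explanation can go, here is a toy model of my own (not drawn from any of the literature above): each newcomer either founds a new group or joins the group of a randomly chosen existing person, which makes joining probability proportional to group size. Nothing in the rule is purposeful or political, yet a few very large ‘organizations’ reliably emerge.

```python
import random
from collections import Counter

# A hypothetical sketch of 'explanation by chaos': a purely stochastic joining
# rule produces a heavy-tailed distribution of group sizes, i.e. a few very
# large 'organizations', with no coordination or purpose anywhere in the model.

random.seed(1)

NEW_GROUP_PROB = 0.05   # arbitrary chance that a newcomer founds a new group
members = []            # members[i] = group id of person i

for person in range(10000):
    if not members or random.random() < NEW_GROUP_PROB:
        members.append(len(set(members)))        # found a brand-new group
    else:
        members.append(random.choice(members))   # join a random person's group;
                                                 # larger groups are hit more often

sizes = Counter(members)
print("five largest groups:", sizes.most_common(5))
print("total number of groups:", len(sizes))
```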

Loving Tetlock’s Superforecasting: The Art and Science of Prediction

I was a big fan of Philip Tetlock’s Expert Political Judgment (EPJ). I read it thoroughly; in fact a book review of it was my first academic publication. It was very influential on me.

EPJ is a book that is troubling to many political experts because it basically says that most so-called political expertise is bogus and that what isn’t bogus is fairly limited. It makes this argument with far more meticulous data collection and argumentation than I am able to do justice to here. I found it completely persuasive and inspiring. It wasn’t until I got to Berkeley that I met people who had vivid negative emotional reactions to this work. They seem mainly to have been political experts who do not like having their expertise assessed in terms of its predictive power.

Superforecasting: The Art and Science of Prediction (2016) is a much more accessible book that summarizes the main points from EPJ and then discusses the results of Tetlock’s Good Judgment Project, which was his answer to an IARPA challenge in forecasting political events.

Much of the book is an interesting history of the United States Intelligence Community (IC) and the way its attitudes towards political forecasting have evolved. In particular, the shock of the failed predictions about Weapons of Mass Destruction that led to the Iraq War was a direct cause of IARPA’s interest in forecasting and its funding of the Good Judgment Project, despite the possibility that the project’s results would be politically challenging. IARPA comes out looking like a very interesting and intellectually honest organization solving real problems for the people of the United States.

Reading this has been timely for me because: (a) I’m now doing what could be broadly construed as “cybersecurity” work, professionally, (b) my funding is coming from U.S. military and intelligence organizations, and (c) the relationship between U.S. intelligence organizations and cybersecurity has been in the news a lot lately in a very politicized way because of the DNC hacking aftermath.

Since so much of Tetlock’s work is really just about applying mathematical statistics to the psychological and sociological problem of developing teams of forecasters, I see the root of it as the same mathematical theory one would use for any scientific inference. Cybersecurity research, to the extent that it uses sound scientific principles (which it must, since it’s all about the interaction between society, scientifically designed technology, and risk), is grounded in these same principles. And at its best the U.S. intelligence community lives up to this logic in its public service.
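To make concrete the kind of statistics involved: the central measure in Tetlock’s forecasting work is the Brier score, the mean squared difference between a probabilistic forecast and what actually happened, where lower is better. Below is a minimal sketch of one common binary form of it; the forecasts and outcomes are invented for illustration.

```python
# Minimal sketch of Brier scoring for binary events. The example numbers below
# are made up; nothing here reproduces actual Good Judgment Project data.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities assigned to the event occurring (0.0 to 1.0);
    outcomes: 1 if the event occurred, 0 if it did not."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

if __name__ == "__main__":
    outcomes     = [1,   0,   1,   1,   0]
    forecaster_a = [0.9, 0.2, 0.8, 0.7, 0.1]   # confident and mostly right
    forecaster_b = [0.5, 0.5, 0.5, 0.5, 0.5]   # hedges everything at 50%
    print("A:", brier_score(forecaster_a, outcomes))   # 0.038: sharp and calibrated
    print("B:", brier_score(forecaster_b, outcomes))   # 0.250: the score of pure hedging
```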

The needs of the intelligence community with respect to cybersecurity can be summed up in one word: rationality. Tetlock’s work is a wonderful empirical study in rationality that’s a must-read for anybody interested in cybersecurity policy today.

the end of narrative in social science

‘Narrative’ is a term you hear a lot in the humanities, the humanities-oriented social sciences, and in journalism. There’s loads of scholarship dedicated to narrative. There are many academic “disciplines” whose bread and butter is the telling of a good story, backed up by something like a scientific method.

Contrast this with engineering schools and professions, where the narrative is icing on the cake if anything at all. The proof of some knowledge claim is in its formal logic or operational efficacy.

In the interdisciplinary world of research around science, technology, and society, the priority of narrative is one of the major points of contention. This is similar to the tension I encountered in earlier work on data journalism. There are narrative and mechanistic modes of explanation. The mechanists are currently gaining in wealth and power. Narrativists struggle to maintain their social position in such a context.

A struggle I’ve had while working on my dissertation is trying to figure out how to narrate to narrativists a research process that is fundamentally formal and mechanistic. My work is “computational social science” in that it is computer science applied to the social. But in order to graduate from my department I have to write lots of words about how this ties in to a universe of academic literature that is largely by narrativists. I’ve been grounding my work in Pierre Bourdieu because I think he (correctly) identifies mathematics as the logical heart of science. He goes so far as to argue that mathematics should be at the heart of an ideal social science or sociology. My gloss on this after struggling with this material both theoretically and in practice is that narratively driven social sciences will always be politically or at least perspectivally inflected in ways that threaten the objectivity of the results. Narrativists will try to deny the objectivity of mathematical explanation, but for the most part that’s because they don’t understand the mathematical ambition. Most mathematicians will not go out of their way to correct the narrativists, so this perception of the field persists.

So I was interested to discover in the work of Miller McPherson, the sociologist who I’ve identified as the bridge between traditional sociology and computational sociology (his work gets picked up, for example, in the generative modeling of Kim and Leskovec, which is about as representative of the new industrial social science paradigm as you can get), an admonition about the consequences of his formally modeled social network formation process (the Blau space, which is very interesting). His warning is that the sociology his work encourages loses narrative and with it individual agency.

[Image: McPherson, 2004, “A Blau space primer: prolegomenon to an ecology of affiliation”]

It’s ironic that the whole idea of a Blau space, which is that the social network of society is sampled from an underlying multidimensional space of demographic dimensions, predicts the quantitative/qualitative divide in academic methods as not just a methodological difference but a difference between social groups. The formation of ‘disciplines’ is endogenous to the greater social process, and there isn’t much individual agency in this choice. This lack of agency is perhaps apparent to the mathematicians, and perhaps a constant source of bewilderment and annoyance to the narrativists, who insist on the efficacy of a narratively driven ‘politics’ (however much this may run counter to the brute fact of the industrial machine) because it is the position that rationalizes, and is accessible from, their subject position in Blau space.
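For readers who, like me, find a model easier to grasp than a gloss, here is a toy sketch of the Blau space idea. It is my own simplification, not McPherson’s actual model: individuals occupy positions in a space of demographic dimensions, ties form with a probability that decays with distance (homophily), and two clusters that sit far apart in that space end up nearly disconnected without any individual choosing a ‘discipline’.

```python
import math
import random

# Hypothetical sketch of network formation in a Blau space. The dimensions,
# cluster centers, and decay function are all invented for illustration.

random.seed(2)

def sample_cluster(center, n, spread=0.3):
    """Draw n individuals scattered around a center in a 2-D demographic space."""
    return [(random.gauss(center[0], spread), random.gauss(center[1], spread))
            for _ in range(n)]

quant = sample_cluster(center=(0.0, 0.0), n=50)   # e.g. heavy mathematical training
qual  = sample_cluster(center=(3.0, 3.0), n=50)   # e.g. heavy interpretive training
people = quant + qual

def tie_probability(p, q, scale=1.0):
    """Homophily: the closer two people are in the space, the likelier the tie."""
    return math.exp(-math.dist(p, q) / scale)

edges = []
for i in range(len(people)):
    for j in range(i + 1, len(people)):
        if random.random() < tie_probability(people[i], people[j]):
            edges.append((i, j))

cross = sum(1 for i, j in edges if (i < 50) != (j < 50))
print(f"{len(edges)} ties in total, {cross} of them across the two clusters")
```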

“Subject position in Blau space” is basically the same idea, in more words, as the Bourdieusian habitus. So, nicely, we have a convergence between French sociological grand theory and American computational social science. As the Bourdieusian theory provides us with a serviceable philosophy of science grounded in sociological reality of science, we can breathe easily and accept the correctness of technocratic hegemony.

By “we” here I mean…ah, here’s the rub. There’s certainly a class of people who will resist this hegemony. They can be located easily in Blau space. I’ve spent years of my life now trying to engage with them, persuading them of the ideas that rule the world. But this turns out to be largely impossible. It demands that they cross too much distance, removing them from their local bases of institutional support and recognition, and so on. The “disciplines” are what’s left in the receding tide before the next oceanic wave of the unified scientific field. Unified by a shared computational logic, that is.

What is at stake, really, is logic.

programming and philosophy of science

Philosophy of science is a branch of philosophy largely devoted to the demarcation problem: what is science?

I’ve written elsewhere about why and how, in the social sciences, demarcation is highly politicized and often under attack. This is becoming especially pertinent now as computational methods become dominant across many fields and challenge the bases of disciplinary distinction. Today, a lot of energy (at UC Berkeley at least) goes into maintaining the disciplinary social sciences, even when this makes those social fields less scientific than they could be, in order to preserve atavistic disciplinary traits.

Other energy (also at UC Berkeley, and elsewhere) goes into using computer programs to explore data about the social world in an undisciplinary way. This isn’t to say that specific theoretical lenses don’t inform these studies. Rather, the lenses are used provisionally and not in an exclusive way. This lack of disciplinary attachment is an important aspect of data science as applied to the social world.

One reason why disciplinary lenses are not very useful for the practicing data scientist is that, much like natural scientists, data scientists are more often than not engaged in technical inquiry whose purpose is prediction and control. This is very different from, for example, engaging an academic community in a conversation in a language they understand or that pays appropriate homage to a particular scholarly canon–the sort of thing one needs to do to be successful in an academic context. For much academic work, especially in the social sciences, the process of research publication, citation, and promotion is inherently political.

These politics are more often than not inessential to scientific inquiry itself; rather, they have to do with the allocation of what Bourdieu calls temporal capital: grant funding, access, appointments, etc., within the academic field. Scientific capital, the symbolic capital awarded to scientists based on their contributions to trans-historical knowledge, is awarded more on the success of an idea than by, for example, brown-nosing one’s superiors. However, since temporal capital in the academy is organized by disciplines as a function of university bureaucratic organization, academic researchers are required to contort themselves to disciplinary requirements in the presentation of their work.

Contrast this with the work of analysing social data using computers. The tools used by computational social scientists tend to be products of the exact sciences (mathematics, statistics, computer science) with no further disciplinary baggage. The intellectual work of scientifically devising and testing theories against the data happens in a language most academic communities would not recognize as a language at all, and certainly not as their language. While this work depends on the work of thousands of others who have built vast libraries of functional code, these ubiquitous contributors are not included in any social science discipline’s scholarly canon. They are uncited, taken for granted.

However, when those libraries are made openly available (and they often are), they participate in a larger open source ecosystem of tools whose merits are judged by their practical value. Returning to our theme of the demarcation problem, the question is: is this science?

I would answer: emphatically yes. Programming is science because, as Peter Naur has argued, programming is theory building (hat tip the inimitable Spiros Eliopoulos for the reference). The more deeply we look into the demarcation problem, the more clearly software engineering practice comes into focus as an extension of a scientific method of hypothesis generation and testing. Software is an articulation of ideas, and the combined works of software engineers are a cumulative science that has extended far beyond the bounds of the university.

Jung and Bourdieu as an improvement upon Freud and Habermas

I have written in this blog and in published work about Habermas and his Frankfurt School precursor, Horkheimer. Based on this writing, a thorough reader (of whom I expect there to be approximately zero) might conclude that I am committed to a Habermasian view.

I’d like to log a change of belief based on recent readings of Pierre Bourdieu and Carl Jung.

Why Bourdieu and Jung? Because Frankfurt School social theory was based on a Freudian view of psychology. This Freudian origin manifests itself in the social theory in ways that I’ll try to outline below. However, in my own therapeutic experience as well as many more informal encounters with Jungian theory, I find the latter to be much more compelling. As I’ve begun reading Jung’s Man and His Symbols, I see now where Jung explicitly departed from Freud, enriching his theory. These departures are far more consistent with a Bourdieusian view of society. (I’ve noted the potential synergy here).

Let me try to be clearer about what this change in perspective entails:

For Freud, man has an irrational nature and a rational ego. The purpose of therapy is the maintenance of rational control. Horkheimer’s critique of modern society invoked Freud in his discussion of the revolt of nature: society rationalizes itself and the individuals within it; the ‘nature’ of the individuals that is excluded (repressed, really) by this rationalization manifests itself in ugly ways. Habermas, who is less pessimistic about society, still sees morality in terms of social norms grounded in rational consensus. “Rational consensus” as a concept angers or worries postmodern and poststructural critics, who see a social ethics based on this principle as exclusionary.

For Jung, the therapeutic relationship absolutely must not be about the imposition of the therapist’s views on the patient; psychological progress must come from within the individual patient. He documents an encounter between himself and Freud where he discovers this; he is very convincing. The Jungian unconscious is a collective stock of symbols, as an alternative to a Freudian subconscious of nature repressed by ego. The Jungian ego, therefore, is a much more flexible subject; at times it seems that Jung is nostalgic for a more irrational, perhaps primitive, consciousness. But more importantly, Jung explicitly rejects the idea of a society’s sanity being about its adherence to shared rational norms. Instead, he opts for a more Durkheimian view of social variety:

Can we make any sort of objective judgment about the final result [of therapy]? Only if we make a comparison between our conclusions and the standards that are generally valid in the social milieu to which the individuals belong. Even then, we must take into account the mental equilibrium (or “sanity”) of the individual concerned. For the result cannot be a completely collective leveling out of the individual to adjust him to the “norms” of his society. This would amount to a most unnatural condition. A sane and normal society is one in which people habitually disagree, because general agreement is relatively rare outside the sphere of instinctive human qualities.

A diverse society of habitual disagreement accords much better with the Bourdieusian view of a society variously inflected as habitus than it does with a Habermasian view of one governed by rational norms.

There’s a subtlety that I’ve missed again and again which I’d like to put my finger on now.

The problem with the early Habermasian view is that ethics are determined through rational consensus. So, individuals participate in a public sphere and agree, as individuals, on norms that govern their individual behavior.

Later Habermas (say, volume two of Theory of Communicative Action) begins to acknowledge the information overhead of this approach and discusses the rise of bureaucracy and its technicization. In lieu of a bona fide consensus of the lifeworld, one gets a rational coalescence of norms into law.

Effectively, this means that while the general population can be irrational in various ways (relative to the perspective of the law), what’s important is that lawmakers create law through a rational process that is inclusive of diverse perspectives.

We see a similar view in Bourdieu’s view of science: it is a specific habitus whose legitimacy is due to the trans-historical robustness of its mathematized formulations.

The conclusion is this: scientists and lawmakers have to approach rationality in specific trans-personal and trans-historical ways. In fact, the rationality of science or of law is only achieved systemically, through the generalized process of science or lawmaking, not through the finite perspectives of their participants, however individually rational they may be. But the general population need not be rational like this for society to be ‘sane’. Rather, individual habitus or partial perspective can vary across a society that is nevertheless coordinated by rational principle.

There is bound to be friction at the boundary between the institutions of science and law and the more diverse publics that surround and intersect them. Donna Haraway’s ‘privilege of partial perspectives’ is a good example of the symptoms of this friction. A population that is excluded from science–not represented well within science–may react against it by reasserting its ‘partial perspective’ as a viable alternative. This is a kind of refusal, in the sense perhaps originated by Marcuse and more recently resurfaced in Michael Dumas’ work on antiblackness. Refusal is, perhaps sadly, delusional and seems to recur as a failed and failing project; but it is sociologically robust precisely because in late modernism the hegemonic rationality allows for Durkheimian social differentiation. The latter is actually the triumph of liberalism over, for example, racist fascism; Fred Turner’s The Democratic Surround is a nice historical work documenting how this order of scientifically managed diversity was a deliberate United States statebuilding project in World War II.

If a top-down rationalizing control creates as a symptom pathological refusal–another manifestation perhaps of the ‘revolt of nature’–a Jungian view of rationality as psychic integration perhaps provides a more palatable alternative. Jungian development is accomplished through personalized, situated education. However, through this education, the individual flourishes through a transcendence of their more limited, narrow sense of self. Jungian therapy/education transcends even gender, as the male and female are encouraged to recognize the feminine “anima” and masculine “animus” aspects of their psyches, respectively. Fully developed individuals–who one would expect to occupy, over the course of their development, a somewhat shared habitus–therefore seem to get along better with each other, agreeing to disagree as they recognize how their differences are based on arbitrary social differentiation. Nothing about this agreeing-to-disagree on matters of, for example, taste precludes an agreement on serious trans-personal matters such as science or law. There need not be any resentment towards this God’s Eye View, since it is recognized by each educated individual as manifest in their own role in the social order.

Societal conditions may fall short of this ideal. However, the purpose of social theory is to provide a realizable social telos. Grounding it in a psychological theory that admits the possibility of realized psychological health is a good step forward.

Bourdieu and Horkheimer; towards an economy of control

It occurred to me as I looked over my earliest notes on Horkheimer (almost a year ago!) that Bourdieu’s concept of science as being a social field that formalizes and automates knowledge is Horkheimer’s idea of hell.

The danger Horkheimer (and so many others) saw in capitalist, instrumentalized, scientific society was that it would alienate and overwhelm the individual.

It is possible that society would alienate the individual anyway, though. For example, in the household of antiquity, were slaves unalienated? The privilege of autonomy is one that has always been rare but disproportionately articulated as normal, even a right. In a sense Western Democracies and Republics exist to guarantee autonomy to their citizens. In late modern democracies, autonomy is variable depending on role in society, which is tied to (economic, social, symbolic, etc.) capital.

So maybe the horror of Horkheimer, alienated by scientific advance, is the horror of one whose capital was being devalued by science. His scholarship, his erudition, were isolated and deemed irrelevant by the formal reasoners who had come to power.

As I write this, I am painfully aware that I have spent a lot of time in graduate school reading books and writing about them when I could have been practicing programming and learning more mathematics. My aspirations are to be a scientist, and I am well aware that that requires one to mathematically formalize one’s findings–or, equivalently, to program them into a computer. (It goes without saying that computer programming is formalism, is automation, and so its central role in contemporary science or ‘data science’ is almost given to it by definition. It could not have been otherwise.)

Somehow I have been provoked into investing myself in a weaker form of capital, the benefit of which is the understanding that I write here, now.

Theoretically, the point of doing all this work is to be able to identify a societal value and formalize it so that it can be captured in a technical design. Perhaps autonomy is this value. Another might call it freedom. So once again I am reminded of Simone de Beauvoir’s philosophy of science, which has been correct all along.

But perhaps de Beauvoir was naive about the political implications of technology. Science discloses possibilities, but the opportunities are distributed unequally because science is socially situated. Inequality leads to more alienation, not less, for all but the scientists. Meanwhile autonomy is not universally valued–some would prefer the comforts of society, of family structure. If free from society, they would choose to reenter it. Much of one’s preferences must come from habitus, no?

I am indeed reaching the limits of my ability to consider the problem discursively. The field is too multidimensional, too dynamic. The proper next step is computer simulation.
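To gesture at what that next step could look like, here is the barest possible scaffold, with every assumption mine rather than anything from Bourdieu or Horkheimer: agents carry several forms of capital, and a simple exchange rule biased toward whoever already holds more is enough to watch inequality concentrate over repeated interactions. A real simulation would need far richer structure, but it might start something like this.

```python
import random

# A bare scaffold for the kind of simulation gestured at above. The capital
# kinds, the transfer rule, and all parameters are my own placeholder choices.

random.seed(3)

CAPITAL_KINDS = ["economic", "social", "symbolic"]

agents = [{k: random.uniform(0.5, 1.5) for k in CAPITAL_KINDS} for _ in range(100)]

def interact(a, b):
    """Transfer a little of each kind of capital toward whichever agent holds more of it."""
    for kind in CAPITAL_KINDS:
        richer, poorer = (a, b) if a[kind] >= b[kind] else (b, a)
        transfer = 0.01 * poorer[kind]
        richer[kind] += transfer
        poorer[kind] -= transfer

for step in range(50000):
    a, b = random.sample(agents, 2)
    interact(a, b)

totals = sorted(sum(agent.values()) for agent in agents)
print("poorest agent's total capital:", round(totals[0], 3))
print("richest agent's total capital:", round(totals[-1], 3))
```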