Digifesto

Category: ideawork

ideologies of capitals

A key idea of Bourdieusian social theory is that society’s structure is due to the distribution of multiple kinds of capital. Social fields have their roles and their rules, but they are organized around different forms of capital the way physical systems are organized around sources of force like mass and electrical charge. Being Kantian, Bourdieusian social theory is compatible with both positivist and phenomenological forms of social explanation. Phenomenological experience, to the extent that it repeats itself and so can be described aptly as a social phenomenon at all, is codified in terms of habitus. But habitus is indexed to its place within a larger social space (not unlike, it must be said, a Blau space) whose dimensions are the dimensions of the allocations of capital throughout it.

While perhaps not strictly speaking a corollary, this view suggests a convenient methodological reduction, according to which the characteristic beliefs of a habitus can be decomposed into components, each component representing the interests of a certain kind of capital. When I say “the interests of a capital”, I do mean the interests of the typical person who holds a kind of capital, but also the interests of a form of capital, apart from and beyond the interests of any individual who carries it. This is an ontological position that gives capital an autonomous social life of its own, much like we might attribute an autonomous social life to a political entity like a state. This is not the same thing as attributing to capital any kind of personhood; I’m not going near the contentious legal position that corporations are people, for example. Rather, I mean something like: if we admit that social life is dictated in part by the life cycle of a kind of psychic microorganism, the meme, then we should also admit abstractly of social macroorganisms, such as capitals.
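To make the reduction concrete, here is a toy sketch in Python. Everything in it, the capitals, the beliefs, and the numbers, is invented for illustration; the only point is the form of the decomposition: a habitus's belief profile modeled as a blend of capital-interest components, weighted by how much of each capital its typical member holds.

```python
capital_interests = {
    # each capital "votes" for beliefs that promote its own value
    "money":        {"markets_are_fair": 0.9, "credentials_matter": 0.2},
    "cosmopolitan": {"markets_are_fair": 0.4, "credentials_matter": 0.8},
}

def habitus_beliefs(holdings):
    """Blend capital-interest vectors by a habitus's capital holdings."""
    total = sum(holdings.values())
    beliefs = {}
    for capital, weight in holdings.items():
        for belief, strength in capital_interests[capital].items():
            beliefs[belief] = beliefs.get(belief, 0.0) + (weight / total) * strength
    return beliefs

# a habitus rich in money, poor in cosmopolitan capital
print(habitus_beliefs({"money": 3.0, "cosmopolitan": 1.0}))
```

The decomposition is linear, which is surely too crude for real social analysis, but it makes the methodological claim inspectable: change the holdings, and the characteristic beliefs shift accordingly.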

What the hell am I talking about?

Well, the most obvious kind of capital worth talking about in this way is money. Money, in our late modern times, is a phenomenon whose existence depends on a vast global network of property regimes, banking systems, transfer protocols, trade agreements, and more. There’s clearly a naivete in referring to it as a singular or homogeneous phenomenon. But it is also possible to refer to it in a generic, globalized way because of the ways money markets have integrated. There is a sense in which money exists to make more money and to give money more power over other forms of capital that are not money, such as: social authority based on any form of seniority, expertise, lineage; power local to an institution; or the persuasiveness of an autonomous ideal. Those who have a lot of money are likely to have an ideology very different from those without. This is partly because those who have a lot of money will be interested in promoting the value of that money over and above other capitals, while those without a lot of money will be interested in promoting forms of power that contest the power of money.

Another kind of capital worth talking about is cosmopolitanism. This may not be the best word for what I’m pointing at but it’s the one that comes to mind now. What I’m talking about is the kind of social capital one gets not by having a specific mastery of a local cultural form, but rather by having the general knowledge and cross-cultural competence to bridge across many different local cultures. This form of capital is loosely correlated with money but is quite different from it.

A diagnosis of recent shifts in U.S. politics, for example, could be done in terms of the way capital and cosmopolitanism have competed for control over state institutions.

second-order cybernetics

The mathematical foundations of modern information technology are:

  • The logic of computation and complexity, developed by Turing, Church, and others. These mathematics specify the nature and limits of the algorithm.
  • The mathematics of probability and, by extension, information theory. These specify the conditions and limitations of inference from evidence, and the conditions and limits of communication.

Since the discovery of these mathematical truths and their myriad applications, there have been those who have recognized that these truths apply both to physical objects, such as natural life and artificial technology, and also to lived experience, mental concepts, and social life. Humanity and nature obey the same discoverable, mathematical logic. This allowed for a vision of a unified science of communication and control: cybernetics.

There has been a great deal of intellectual resistance to these ideas. One of the most cogent critiques is Understanding Computers and Cognition, by Terry Winograd and Fernando Flores. Terry Winograd is the AI professor who advised the founders of Google. His credentials are beyond question. And so the fact that he coauthored a critique of “rationalist” artificial intelligence with Fernando Flores, Chilean entrepreneur, politician, and philosophy PhD, is significant. In this book, the two authors base their critique of AI on the work of Humberto Maturana, a second-order cyberneticist who believed that life’s organization and phenomenology could be explained by a resonance between organism and environment: structural coupling. Theories of artificial intelligence are incomplete when not embedded in a more comprehensive theory of the logic of life.

I’ve begun studying this logic, which was laid out by Francisco Varela in 1979. Notably, like the other cybernetic logics, it is an account of both physical and phenomenological aspects of life. Significantly, Varela claims that his work is a foundation for an observer-inclusive science, which addresses some of the paradoxes of the physicist’s conception of the universe and humanity’s place in it.

My hunch is that these principles can be applied to social scientific phenomena as well, as organizations are just organisms bigger than us. This is a rather strong claim and difficult to test. However, after years of study it seems to me the necessary conclusion of available theory. It also seems consistent with recent trends in economics towards complexity and institutional economics, and the now rather widespread intuition that the economy functions as a complex ecosystem.

This would be a victory for science if we could only formalize these intuitions well enough either to make these theories testable, or to make them so communicable as to be recognized as ‘proved’ by anyone with the wherewithal to study them.

Habitus Shadow

In Bourdieu’s sociological theory, habitus refers to the dispositions of taste and action that individuals acquire as a practical consequence of their place in society. Society provides a social field (a technical term for Bourdieu) of structured incentives and roles. Individuals adapt to roles rationally, but in doing so culturally differentiate themselves. This process is dialectical, hence neither strictly determined by the field nor by individual rational agency, but a co-creation of each. One’s posture, one’s preference for a certain kind of music, one’s disposition to engage in sports, one’s disposition to engage in intellectual debate, are all potentially elements of a habitus.

In Jungian psychoanalytic theory, the shadow is the aspect of personality that is unconscious and not integrated with the ego, what one consciously believes oneself to be. Often it is the instinctive or irrational part of one’s psychology. A person with an undeveloped psyche is likely to see their own shadow aspect in others and judge them harshly for it; this is a form of psychological projection motivated by repression for the sake of maintaining the ego. Encounters with the shadow are difficult. Often they are experienced as the awareness or suspicion of some new information that threatens one’s very sense of self. But these encounters are, for Jung, an essential part of individuation, as they are how the personality can develop a more complete consciousness of itself.

Perhaps you can see where this is going.

I propose a theoretical construct: habitus shadow.

When an individual, situated within a social field, develops a habitus, they may do so with an incomplete consciousness of the reasons for their preferences and dispositions for action. An ego, a conscious rationalization, will develop; it will be reinforced by others who share its habitus. The dispositions of a habitus will include the collectively constructed ego of its members, which is itself a psychological disposition.

We would then expect that a habitus has a characteristic shadow: truths about the sociological conditions of a habitus which are not part of the conscious self-identity or ego of that habitus.

This is another way to talk about what I have discussed elsewhere as an ideological immune reaction. If an idea or understanding is so challenging or destructive to the ego of a habitus that it calls into question the rationality of its very existence, then the habitus will be able to maintain itself only through a kind of repression/projection/exclusion. Alternatively, if the habitus can assimilate its shadow, one could see that as a form of social self-transcendence or progress.

late modern social epistemology round up; technical vs. hermeneutical correctness

Consider on the one hand what we might call Habermasian transcendental pragmatism, according to which knowledge can be categorized by how it addresses one of several generalized human interests:

  • The interest of power over nature or other beings, being technical knowledge
  • The interest of agreement with others for the sake of collective action, being hermeneutic knowledge
  • The interest of emancipation from present socially imposed conditions, being critical or reflexive knowledge

Consider in contrast what we might call the Luhmann or Foucault model, in which knowledge is created via system autopoiesis. Luhmann talks about autopoiesis in a social system; Foucault talks about knowledge in a system of power in much the same way.

It is difficult to reconcile these views. This may be what was at the heart of the Habermas-Luhmann debate. Can we parse out the problem in any way that helps reconcile these views?

First, let’s consider the Luhmann view. We might ease the tension in it by naming what we’ve called “knowledge” something like “belief”, removing the implication that the belief is true. Because indeed autopoiesis is a powerful enough process that it seems like it would preserve all kinds of myths and errors should they be important to the survival of the system in which they circulate.

This picture of knowledge, which we might call evolutionary or alternately historicist, is certainly a relativist one. At the intersection of institutions within which different partial perspectives are embedded, we are bound to see political contest.

In light of this, Habermas’s categorization of knowledge as what addresses generalized human interests can be seen as a way of identifying knowledge that transcends particular social systems. There is a normative component of this theory–knowledge should be such a thing. But there is also a descriptive component. One predicts, under Habermas’s hypothesis, that the knowledge that survives political contest at the intersection of social systems is that which addresses generalized interests.

Something I have perhaps overlooked in the past is the importance of the fact that there are multiple and sometimes contradictory general interests. One persistent difficulty in the search for truth is the conflict between what is technically correct and what is hermeneutically correct.

If a statement or theory is technically correct, then it can be reliably used by agents to predict and control the world. The objects of this prediction and control can be objects, or they can be other agents.

If a statement or theory is hermeneutically correct, then it is the reliable consensus of agents involved in a project of mutual understanding and respect. Hermeneutically correct beliefs might stress universal freedom and potential, a narrative of shared history, and a normative goal of progress against inequality. Another word for ‘hermeneutic’ might be ‘political’. Politically correct knowledges are those shared beliefs without which the members of a polity would not be able to stand each other.

In everyday discourse we can identify many examples of statements that are technically correct but hermeneutically (or politically) incorrect, and vice versa. I will not enumerate them here. In these cases, the technically correct view is identified as “offensive” because in a sense it is a defection from a voluntary social contract. Hermeneutic correctness binds together a particular social system by capturing what participants must agree upon in order for all to safely participate. For a member of that social system to assert their own agency over others, to identify ways in which others may be predicted and controlled without their consent or choice in the matter, is disrespectful. Persistent disrespect results in the ejection of the offender from the polity. (cf. Pasquale’s distinction between “California engineers and New York quants” and “citizens”.)

A cruel consequence of these dynamics is social stratification based on the accumulation of politically forbidden technical knowledge.

We can tell this story again and again: A society is bound together by hermeneutically stable knowledge–an ideology, perhaps. Somebody ‘smart’ begins experimentation and identifies a technical truth that is hermeneutically incorrect, meaning that if the idea were to spread it would erode the consensus on which the social system depends. Perhaps the new idea degrades others by revealing that something believed to be an act of free will is, in fact, determined by nature. Perhaps the new idea is inaccessible to others because it depends on some rare capacity. In any case, it cannot be willfully consented to by the others.

The social system begins to have an immune reaction. Society has seen this kind of thing before. Historically, this idea has led to abuse, exploitation, infamy. Those with forbidden knowledge should be shunned, distrusted, perhaps punished. Those with disrespectful technical ideas are discouraged from expressing them.

Technical knowledge thereby becomes socially isolated. Seeking out its own, it becomes concentrated. Already shunned by society, the isolated technologists put their knowledge to use. They gain advantage. Revenge is had by the nerds.

repopulation as element in the stability of ideology

I’m reading the fourth section of Foucault’s Discipline and Punish, about ‘Prison’, for the first time for I School Classics.

A striking point made by Foucault is that while we may think there is a chronology of the development of penitentiaries whereby they are designed, tested, critiqued, reformed, and so on, until we get a progressively improved system, this is not the case. Rather, at the time of Foucault’s writing, the logic of the penitentiary and its critiques had happily coexisted for a hundred and fifty years. Moreover, the failures of prisons–their contribution to recidivism and the education and organization of delinquents, for example–could only be “solved” by the reactivation of the underlying logic of prisons–as environments of isolation and personal transformation. So prison “failure” and “solution”, as well as (often organized) delinquency and recidivism, in addition to the architecture and administration of prison, are all part of the same “carceral system” which endures as a complex.

One wonders why the whole thing doesn’t just die out. One explanation is repopulation. People are born, live for a while, reproduce, live a while longer, and die. In the process, they must learn through education and experience. It’s difficult to rush personal growth. Hence, systematic errors that are discovered through 150 years of history are difficult to pass on, as each new generation will be starting from inherited priors (in the Bayesian sense) which may under-rank these kinds of systemic effects.
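The repopulation dynamic can be caricatured in a toy Bayesian model, with every number invented for illustration: each generation starts from an inherited prior that under-ranks a systemic effect, updates on limited lifetime evidence, and hands only a damped summary of its posterior to the next generation.

```python
import random

def run_generations(true_rate=0.8, lifetime_obs=20, damping=0.3,
                    generations=10, seed=0):
    """Simulate generational handoff of a belief about a systemic effect."""
    rng = random.Random(seed)
    a, b = 1.0, 9.0              # inherited prior: the effect is under-ranked
    history = []
    for _ in range(generations):
        # a lifetime of direct observation of the systemic failure signal
        hits = sum(rng.random() < true_rate for _ in range(lifetime_obs))
        a_post, b_post = a + hits, b + (lifetime_obs - hits)
        history.append(a_post / (a_post + b_post))   # posterior mean belief
        # handoff: the next generation inherits only a damped summary
        a = 1.0 + damping * (a_post - 1.0)
        b = 1.0 + damping * (b_post - 1.0)
    return history

print(run_generations())
```

With a skeptical inherited prior and only a lifetime of evidence per generation, the shared belief lags persistently behind the true rate of the effect, which is one way of reading the 150 years of stable coexistence between the prison and its critiques.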

In effect, our cognitive limitations as human beings are part of the sociotechnical systems in which we play a part. And though it may be possible to grow out of such a system, there is a constant influx of the younger and more naive who can fill the ranks. Youth captured by ideology can be moved by promises of progress or denunciations of injustice or contamination, and thus new labor is supplied to turn the wheels of institutional machinery.

Given the environmental unsustainability of modern institutions despite their social stability under conditions of repopulation, one has to wonder: whatever happened to the phenomenon of eco-terrorism?

inequality and alienation in society

While helpful for me, this blog post got out of hand. A few core ideas from it:

A prerequisite for being a state is being a stable state. (cf. Bourgine and Varela on autonomy)

A state may be stable (“power stable”) without being legitimate (“inherently stable” or “morally stable”).

State and society are intertwined and I’ll just conflate them here.

Under liberal ideology, society is a society of individual producers and the purpose of the state is to guarantee “liberty, property, and equality.”

So specifically, (e.g. economic) inequality is a source of moral instability for liberalism.

Whether or not moral instability leads to destabilization of the state is a matter of empirical prediction. Using that as a way of justifying liberalism in the first place is probably a non-starter.

A different but related problem is the problem of alienation. Alienation happens when people don’t feel like they are part of the institutions that have power over them.

[Hegel’s philosophy is a good intellectual starting point for understanding alienation because Hegel’s logic was explicitly mereological, meaning about the relationship between parts and wholes.]

Liberal ideology effectively denies that individuals are part of society and therefore relies on equality for its moral stability.

But there are some reasons to think that this is untenable:

As society scales up, we require more and more apparatus to manage the complexity of societal integration. This is where power lies, and it creates a ruling bureaucratic or (now, increasingly) technical class. In other words, it may be impossible for society to be both scalable and equal, in terms of the distribution of goods.

Moreover, the more “technical” the apparatus of social integration is, the more remote it is from the lived experiences of society. As a result, we see more alienation in society. One way to think about alienation is inequality in the distribution of power or autonomy. So popular misgivings about how control has been ceded to algorithms are an articulation of alienation, though that word is out of fashion.

Inequality is a source of moral instability under liberal ideology. Under what conditions is alienation a source of moral instability?

autonomy and immune systems

Somewhat disillusioned lately with the inflated discourse on “Artificial Intelligence” and trying to get a grip on the problem of “collective intelligence” with others in the Superintelligence and the Social Sciences seminar this semester, I’ve been following a lead (proposed by Julian Jonker) that perhaps the key idea at stake is not intelligence, but autonomy.

I was delighted when searching around for material on this to discover Bourgine and Varela’s “Towards a Practice of Autonomous Systems” (pdf link) (1992). Francisco Varela is one of my favorite thinkers, though he is a bit fringe on account of being both Chilean and unafraid of integrating Buddhism into his scholarly work.

The key point of the linked paper is that for a system (such as a living organism, but we might extend the idea to a sociotechnical system like an institution or any other “agent” like an AI) to be autonomous, it has to have a kind of operational closure over time–meaning not that it is closed to interaction, but that its internal states progress through some logical space–and that it must maintain its state within a domain of “viability”.

Though essentially a truism, I find it a simple way of thinking about what it means for a system to preserve itself over time. What we gain from this organic view of autonomy (Varela was a biologist) is an appreciation of the fact that an agent needs to adapt simply in order to survive, let alone to act strategically or reproduce itself.
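A minimal sketch of that organic view, with dynamics and bounds invented for illustration rather than taken from the paper: a system whose next state is a function of its own state plus perturbation (operational closure), with an adaptive corrective term that keeps the state inside a domain of viability.

```python
import random

def simulate(steps=100, viability=(0.0, 10.0), seed=0):
    """A toy autonomous system regulating itself within a viability domain."""
    rng = random.Random(seed)
    state = 5.0            # internal state, e.g. an energy reserve
    setpoint = 5.0         # the value the system regulates toward
    alive = True
    for _ in range(steps):
        perturbation = rng.uniform(-2.0, 2.0)   # environmental shock
        # operational closure: the next state is a function of the current
        # state, with a corrective (adaptive) term pulling it back inward
        state = state + perturbation + 0.5 * (setpoint - state)
        lo, hi = viability
        if not (lo < state < hi):
            alive = False  # the system has left its domain of viability
            break
    return alive, state

print(simulate())
```

Remove the corrective term and the shocks eventually drive the state out of bounds; with it, the system persists, which is the sense in which adaptation is a condition of mere survival, prior to strategy or reproduction.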

Bourgine and Varela point out three separate adaptive systems in most living organisms:

  • Cognition. Information processing that determines the behavior of the system relative to its environment. It adapts to new stimuli and environmental conditions.
  • Genetics. Information processing that determines the overall structure of the agent. It adapts through reproduction and natural selection.
  • The immune system. Information processing that identifies invasive micro-agents that would threaten the integrity of the overall agent. It creates internal antibodies to shut down internal threats.

Sean O Nuallain has proposed that one’s sense of personal self is best thought of as a kind of immune system. We establish a barrier between ourselves and the world in order to maintain a cogent and healthy sense of identity. One could argue that to have an identity at all is to have a system of identifying what is external to it and rejecting it. Compare this with psychological ideas of ego maintenance and Jungian confrontations with “the Shadow”.

At a social organizational level, we can speculate that there is still an immune function at work. Left and right wing ideologies alike have cultural “antibodies” to quickly shut down expressions of ideas that pattern match to what might be an intellectual threat. Academic disciplines have to enforce what can be said within them so that their underlying theoretical assumptions and methodological commitments are not upset. Sociotechnical “cybersecurity” may be thought of as a kind of immune system. And so on.

Perhaps the most valuable use of the “immune system” metaphor is that it identifies a mid-range level of adaptivity that can be truly subconscious, given whatever mode of “consciousness” you are inclined to point to. Social and psychological functions of rejection are in a sense a condition for higher-level cognition. At the same time, this pattern of rejection means that some information cannot be integrated materially; it must be integrated, if at all, through the narrow lens of the senses. At an organizational or societal level, individual action may be rejected because of its disruptive effect on the total system, especially if the system has official organs for accomplishing more or less the same thing.

cultural values in design

As much as I would like to put aside the problem of technology criticism and focus on my empirical work, I find myself unable to avoid the topic. Today I was discussing work with a friend and collaborator who comes from a ‘critical’ perspective. We were talking about ‘values in design’, a subject that we both care about, despite our different backgrounds.

I suggested that one way to think about values in design is to think of a number of agents and their utility functions. Their utility functions capture their values; the design of an artifact can have greater or less utility for the agents in question. They may intentionally or unintentionally design artifacts that serve some but not others. And so on.
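That framing can be rendered in a few lines of code. The agents, features, and weights below are invented for the example; the point is only that one design can have high utility for one agent and low utility for another.

```python
designs = {
    "design_a": {"privacy": 0.9, "convenience": 0.2},
    "design_b": {"privacy": 0.3, "convenience": 0.8},
}

# each agent weighs the same features differently: their utility function
agent_weights = {
    "user":       {"privacy": 0.7, "convenience": 0.3},
    "advertiser": {"privacy": 0.1, "convenience": 0.9},
}

def utility(weights, features):
    """Linear utility: sum of feature values weighted by an agent's values."""
    return sum(weights[f] * features[f] for f in features)

for name, features in designs.items():
    scores = {agent: round(utility(w, features), 2)
              for agent, w in agent_weights.items()}
    print(name, scores)
```

Here design_a serves the user better and design_b serves the advertiser better, so the choice between them is already a choice among values, whether or not the designers make it intentionally.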

Of course, thinking in terms of ‘utility functions’ is common among engineers, economists, cognitive scientists, rational choice theorists in political science, and elsewhere. It is shunned by the critically trained. My friend and colleague was open minded in his consideration of utility functions, but was more concerned with how cultural values might sneak into or be expressed in design.

I asked him to define a cultural value. We debated the term for some time. We reached a reasonable conclusion.

With such a consensus to work with, we began to talk about how such a concept would be applied. He brought up the example of an algorithm claimed by its creators to be objective. But, he asked, could the algorithm have a bias? Would we not expect that it would express, secretly, cultural values?

I confessed that I aspire to design and implement just such algorithms. I think it would be a fine future if we designed algorithms to fairly and objectively arbitrate our political disputes. We have good reasons to think that an algorithm could be more objective than a system of human bureaucracy. While human decision-makers are limited by the partiality of their perspective, we can build infrastructure that accesses and processes data that are beyond an individual’s comprehension. The challenge is to design the system so that it operates kindly and fairly despite its operations being beyond the scope of a single person’s judgment. This will require an abstracted understanding of fairness that is not grounded in the politics of partiality.

Suppose a team of people were to design and implement such a program. On what basis would the critics–and there would inevitably be critics–accuse it of being a biased design with embedded cultural values? Besides the obvious but empty criticism that valuing unbiased results is a cultural value, why wouldn’t the reasoned process of design reduce bias?

We resumed our work peacefully.

Ethnography, philosophy, and data anonymization

The other day at BIDS I was working at my laptop when a rather wizardly looking man in a bicycle helmet asked me when The Hacker Within would be meeting. I recognized him from a chance conversation in an elevator after Anca Dragan’s ICBS talk the previous week. We had in that brief moment connected over the fact that none of the bearded men in the elevator had remembered to press the button for the ground floor. We had all been staring off into space before a young programmer with a thin mustache pointed out our error.

Engaging this amicable fellow, whom I will leave anonymous, the conversation turned naturally towards principles for life. I forget how we got onto the topic, but what I took away from the conversation was his advice: “Don’t turn your passion into your job. That’s like turning your lover into a wh***.”

Scholars in the School of Information are sometimes disparaging of the Data-Information-Knowledge-Wisdom hierarchy. Scholars, I’ve discovered, are frequently disparaging of ideas that are useful, intuitive, and pertinent to action. One cannot continue to play the Glass Bead Game if it has already been won any more than one can continue to be entertained by Tic Tac Toe once one has grasped its ineluctable logic.

We might wonder, as did Horkheimer, when the search and love of wisdom ceased to be the purpose of education. It may have come during the turn when philosophy was determined to be irrelevant, speculative or ungrounded. This perhaps coincided, in the United States, with McCarthyism. This is a question for the historians.

What is clear now is that philosophy per se is no longer considered relevant to scientific inquiry.

An ethnographer I know (who I will leave anonymous) told me the other day that the goal of Science and Technology Studies is to answer questions from philosophy of science with empirical observation. An admirable motivation for this is that philosophy of science should be grounded in the true practice of science, not in idle speculation about it. The ethnographic methods, through which observational social data is collected and then compellingly articulated, provide a kind of persuasiveness that for many far surpasses the persuasiveness of a priori logical argument, let alone authority.

And yet the authority of ethnographic writing depends always on the socially constructed role of the ethnographer, much like the authority of the physicist depends on their socially constructed role as physicists. I’d even argue that the dependence of ethnographic authority on social construction is greater than that of other kinds of scientific authority, as ethnography is so quintessentially an embedded social practice. A physicist or chemist or biologist at least in principle has nature to push back on their claims; a renegade natural scientist can as a last resort claim their authority through provision of a bomb or a cure. The mathematician or software engineer can test and verify their work through procedure. The ethnographer does not have these opportunities. Their writing will never be enough to convey the entirety of their experience. It is always partial evidence, a gesture at the unwritten.

This is not an accidental part of the ethnographic method. The practice of data anonymization, necessitated by the IRB and ethics, puts limitations on what can be said. These limitations are essential for building and maintaining the relationships of trust on which ethnographic data collection depends. The experiences of the ethnographer must always go far beyond what has been regulated as valid procedure. The information they have collected illicitly will, if they are skilled and wise, inform their judgment of what to write and what to leave out. The ethnographic text contains many layers of subtext that will be unknown to most readers. This is by design.

The philosophical text, in contrast, contains even less observational data. The text is abstracted from context. Only the logic is explicit. A naive reader will assume, then, that philosophy is a practice of logic chopping.

This is incorrect. My friend the ethnographer was correct: that ethnography is a way of answering philosophical questions empirically, through experience. However, what he missed is that philosophy is also a way of answering philosophical questions through experience. Just as in ethnographic writing, experience necessarily shapes the philosophical text. What is included, what is left out, what constellation in the cosmos of ideas is traced by the logic of the argument–these will be informed by experience, even if that experience is absent from the text itself.

One wonders: thus unhinged from empirical argument, how does a philosophical text become authoritative?

I’d offer the answer: it doesn’t. A philosophical text does not claim authority. That has been its method since Socrates.