late modern social epistemology round up; technical vs. hermeneutical correctness

Consider on the one hand what we might call Habermasian transcendental pragmatism, according to which knowledge can be categorized by how it addresses one of several generalized human interests:

  • The interest of power over nature or other beings, being technical knowledge
  • The interest of agreement with others for the sake of collective action, being hermeneutic knowledge
  • The interest of emancipation from present socially imposed conditions, being critical or reflexive knowledge

Consider in contrast what we might call the Luhmann or Foucault model, in which knowledge is created via system autopoiesis. Luhmann talks about autopoiesis in a social system; Foucault talks about knowledge in a system of power in much the same way.

It is difficult to reconcile these views. This may be what was at the heart of the Habermas-Luhmann debate. Can we parse out the problem in any way that helps reconcile these views?

First, let’s consider the Luhmann view. We might ease the tension in it by renaming what we’ve called “knowledge” something like “belief”, removing the implication that the belief is true. Indeed, autopoiesis is a powerful enough process that it would preserve all kinds of myths and errors, should they be important to the survival of the system in which they circulate.

This picture of knowledge, which we might call evolutionary or alternately historicist, is certainly a relativist one. At the intersection of institutions within which different partial perspectives are embedded, we are bound to see political contest.

In light of this, Habermas’s categorization of knowledge as what addresses generalized human interests can be seen as a way of identifying knowledge that transcends particular social systems. There is a normative component to this theory: knowledge should be such a thing. But there is also a descriptive component. One predicts, under Habermas’s hypothesis, that the knowledge that survives political contest at the intersection of social systems is that which addresses generalized interests.

Something I have perhaps overlooked in the past is the importance of the fact that there are multiple and sometimes contradictory general interests. One persistent difficulty in the search for truth is the conflict between what is technically correct and what is hermeneutically correct.

If a statement or theory is technically correct, then it can be reliably used by agents to predict and control the world. The objects of this prediction and control can be objects, or they can be other agents.

If a statement or theory is hermeneutically correct, then it is the reliable consensus of agents involved in a project of mutual understanding and respect. Hermeneutically correct beliefs might stress universal freedom and potential, a narrative of shared history, and a normative goal of progress against inequality. Another word for ‘hermeneutic’ might be ‘political’. Politically correct knowledges are those shared beliefs without which the members of a polity would not be able to stand each other.

In everyday discourse we can identify many examples of statements that are technically correct but hermeneutically (or politically) incorrect, and vice versa. I will not enumerate them here. In these cases, the technically correct view is identified as “offensive” because in a sense it is a defection from a voluntary social contract. Hermeneutic correctness binds together a particular social system by capturing what participants must agree upon in order for all to safely participate. For a member of that social system to assert their own agency over others, to identify ways in which others may be predicted and controlled without their consent or choice in the matter, is disrespectful. Persistent disrespect results in the ejection of the offender from the polity. (cf. Pasquale’s distinction between “California engineers and New York quants” and “citizens”.)

A cruel consequence of these dynamics is social stratification based on the accumulation of politically forbidden technical knowledge.

We can tell this story again and again: A society is bound together by hermeneutically stable knowledge (an ideology, perhaps). Somebody ‘smart’ begins experimenting and identifies a technical truth that is hermeneutically incorrect, meaning that if the idea were to spread it would erode the consensus on which the social system depends. Perhaps the new idea degrades others by revealing that something believed to be an act of free will is, in fact, determined by nature. Perhaps the new idea is inaccessible to others because it depends on some rare capacity. In any case, it cannot be willfully consented to by the others.

The social system begins to have an immune reaction. Society has seen this kind of thing before. Historically, this idea has led to abuse, exploitation, infamy. Those with forbidden knowledge should be shunned, distrusted, perhaps punished. Those with disrespectful technical ideas are discouraged from expressing them.

Technical knowledge thereby becomes socially isolated. Seeking its own kind, it becomes concentrated. Already shunned by society, the isolated technologists put their knowledge to use. They gain advantage. Revenge is had by the nerds.

trust issues and the order of law and technology cf @FrankPasquale

I’ve cut to the last chapter of Pasquale’s The Black Box Society, “Towards an Intelligible Society.” I’m interested in where the argument goes. Now that I’ve gotten through it, I see that the penultimate chapter has Pasquale’s specific policy recommendations. But as I’m not just reading for policy and framing but also for tone and underlying theoretical commitments, I think it’s worth recording some first impressions before doubling back.

These are some points Pasquale makes in the concluding chapter that I wholeheartedly agree with:

  • A universal basic income would allow more people to engage in high-risk activities such as the arts and entrepreneurship, and more generally would be great for most people.
  • There should be publicly funded options for finance, search, and information services. A great way to provide these would be to fund the development of open source algorithms for finance and search. I’ve been into this idea for so long and it’s great to see a prominent scholar like Pasquale come to its defense.
  • Regulatory capture (or, as he elaborates following Charles Lindblom, “regulatory circularity”) is a problem. Revolving door participation in government and business makes government regulation an unreliable protector of the public interest.

There is quite a bit in the conclusion about the specifics of regulating the finance industry. An impressive amount of knowledge is presented here, and I’ll admit much of it is over my head. I’ll probably have a better sense of it once I get to the chapter that is specifically about finance.

There are some things that I found bewildering or off-putting.

For example, there is a section on “Restoring Trust” that argues that an important problem is that we don’t have enough trust in the reputation and search industries. His solution is to increase the penalties that the FTC and FCC can impose on Google and Facebook for, e.g., their privacy violations. The current penalties are too trivial to be an effective deterrent. But, Pasquale argues,

It is a broken enforcement model, and we have black boxes to thank for much of this. People can’t be outraged by what they can’t understand. And without some public concern about the trivial level of penalties for lawbreaking here, there are no consequences for the politicians ultimately responsible for them.

The logic here is a little mad. Pasquale is saying that people are not outraged enough by search and reputation companies to demand harsher penalties, and that this is a problem because people don’t trust these companies enough. The solution is to convince people to trust these companies less (to get outraged by them) in order to get them to punish the companies more.

This is a bit troubling, but makes sense based on Pasquale’s theory of regulatory circularity, which turns politics into a tug-of-war between interests:

The dynamic of circularity teaches us that there is no stable static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by respecting privacy or investing in sustainable growth), or industry is pushing regulators to promote its own interests.

There’s a simplicity to this that I distrust. It suggests, for one, that there are no public pressures on industry besides the government, such as consumers’ buying power. A lot of Pasquale’s arguments depend on the monopolistic power of certain tech giants. But while network effects are strong, it’s not clear that they are such a problem that consumers have no market power. In many cases tech giants compete with each other even when it looks like they aren’t. For example, very many people have both Facebook and Gmail accounts. Since the two services have somewhat redundant functionality, consumers can rather seamlessly allocate their time, which is tied to advertising revenue, according to which service they feel better serves them, or which has the better reputation. So social media (which is a bit like a combination of a search and reputation service) is not a monopoly. Similarly, if people have multiple search options available to them because, say, they have both Siri on their smartphone and can search Google directly, then that provides an alternative search market.

Meanwhile, government officials are also often self-interested. If there is a road to hell for industry that is to provide free web services to people to attain massive scale, then abuse economic lock-in to extract value from customers, then lobby for further rent-seeking, there is a similar road to hell in government. It starts with populist demagoguery, leads to stable government appointment, and then leverages that power for rents in status.

So, power is power. Everybody tries to get power. The question is what you do once you get it, right?

Perhaps I’m reading between the lines too much. Of course, my evaluation of the book should depend most on the concrete policy recommendations which I haven’t gotten to yet. But I find it unfortunate that what seems to be a lot of perfectly sound history and policy analysis is wrapped in a politics of professional identity that I find very counterproductive. The last paragraph of the book is:

Black box services are often wondrous to behold, but our black-box society has become dangerously unstable, unfair, and unproductive. Neither New York quants nor California engineers can deliver a sound economy or a secure society. Those are the tasks of a citizenry, which can perform its job only as well as it understands the stakes.

Implicitly, New York quants and California engineers are not citizens, to Pasquale, a law professor based in Maryland. Do all real citizens live around Washington, DC? Are they all lawyers? If the government were to start providing public information services, either by hosting them themselves or by funding open source alternatives, would he want everyone designing these open algorithms (who would be quants or engineers, I presume) to move to DC? Do citizens really need to understand the stakes in order to get this to happen? When have citizens, en masse, understood anything, really?

Based on what I’ve read so far, The Black Box Society is an expression of a lack of trust in the social and economic power associated with quantification and computing that took off in the past few dot-com booms. Since expressions of distrust for these industries are nothing new, one might wonder (under the influence of Foucault) how the quantified order and the critique of the quantified order manage to coexist and recreate a system of discipline that includes both and maintains its power as a complex of superficially agonistic forces. I give sincere credit to Pasquale for advocating both serious income redistribution and public investment in open technology as ways of disrupting that order. But when he falls into the trap of engendering partisan distrust, he loses my confidence.

“Transactions that are too complex…to be allowed to exist.” cf @FrankPasquale

I stand corrected; my interpretation of Pasquale in my last post was too narrow. Having completed Chapter One of The Black Box Society (TBBS), Pasquale does not take the naive view that all organizational secrecy should be abolished, as I might have once. Rather, his is a more nuanced perspective.

First, Pasquale distinguishes between three “critical strategies for keeping black boxes closed”, or opacity, “[Pasquale’s] blanket term for remediable incomprehensibility”:

  • Real secrecy “establishes a barrier between hidden content and unauthorized access to it.”
  • Legal secrecy “obliges those privy to certain information to keep it secret.”
  • Obfuscation “involves deliberate attempts at concealment when secrecy has been compromised.”

Cutting to the chase by looking at the Pasquale and Bracha “Federal Search Commission” (2008) paper that a number of people have recommended to me, it appears (in my limited reading so far) that Pasquale’s position is not that opacity in general is a problem (there are of course important uses of opacity that serve the public interest, such as confidentiality). Rather, despite these legitimate uses of opacity, there is also a need for public oversight, perhaps through federal regulation. The Federal Government serves the public interest better than the imperfect market for search can on its own.

There is perhaps a tension between this 2008 position and what is expressed in Chapter 1 of TBBS in the section “The One-Way Mirror,” which gets, I dare say, a little conspiratorial about The Powers That Be. “We are increasingly ruled by what former political insider Jeff Connaughton called ‘The Blob,’ a shadowy network of actors who mobilize money and media for private gain, whether acting officially on behalf of business or of government.” Here, Pasquale appears to espouse a strong theory of regulatory capture from which, were we to insist on consistency, a Federal Search Commission would presumably not be exempt. Hence, perhaps, the role of TBBS in stirring popular sentiment to put political pressure on the elites of The Blob.

Though it is a digression I will note, since it is a pet peeve of mine, Pasquale’s objection to mathematized governance:

“Technocrats and managers cloak contestable value judgments in the garb of ‘science’: thus the insatiable demand for mathematical models that reframe the subtle and subjective conclusions (such as the worth of a worker, service, article, or product) as the inevitable dictate of salient, measurable data. Big data driven decisions may lead to unprecedented profits. But once we use computation not merely to exercise power over things, but also over people, we need to develop a much more robust ethical framework than ‘the Blob’ is now willing to entertain.”

That this sentiment that scientists should not be making political decisions has been articulated since at least as early as Hannah Arendt’s 1958 The Human Condition is an indication that there is nothing particular to Big Data about this anxiety. And indeed, if we think about ‘computation’ as broadly as mathematized, algorithmic thought, then its use for control over people-not-just-things has an even longer history. Lukacs’ 1923 “Reification and the Consciousness of the Proletariat” is a profound critique of Tayloristic scientific factory management that is getting close to being a hundred years old.

Perhaps a robust ethics of quantification has been in the works for some time as well.

Moving past this, by the end of Chapter 1 of TBBS Pasquale gives us the outline of the book and the true crux of his critique, which is the problem of complexity. Whether or not regulators are successful in opening the black boxes of Silicon Valley or Wall Street (or the branches of government that are complicit with Silicon Valley and Wall Street), their efforts will be in vain if what they get back from the organizations they are trying to regulate is too complex for them to understand.

Following the thrust of Pasquale’s argument, we can see that for him, complexity is the result of obfuscation. It is therefore a source of opacity, which as we have noted he has defined as “remediable incomprehensibility”. Pasquale promises to, by the end of the book, give us a game plan for creating, legally, the Intelligible Society. “Transactions that are too complex to explain to outsiders may well be too complex to be allowed to exist.”

This gets us back to the question we started with, which is whether this complexity and incomprehensibility is avoidable. Suppose we were to legislate against institutional complexity: what would that cost us?

Mathematical modeling gives us the tools we need to analyze these kinds of questions. Information theory, the theory of computation, and complexity theory are all foundational to the technology of telecommunications and data science. People with expertise in understanding complexity, and the limits of our ability to control it, are precisely the people who make the ubiquitous algorithms on which society depends today. But this kind of theory rarely makes it into “critical” literature such as TBBS.
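To make this concrete, here is a minimal sketch (my own illustration, nothing from TBBS) of the sort of hard limit this theory supplies: Shannon entropy lower-bounds, in bits per symbol, how compactly a message can be described, no matter how clever the describer.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy in bits per symbol: a lower bound on how
    compactly the message can be encoded by any scheme whatsoever."""
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * -math.log2(c / n) for c in counts.values())

# A perfectly regular message can be compressed to almost nothing...
print(shannon_entropy("aaaaaaaa"))  # 0.0
# ...while an even mix of two symbols cannot be squeezed below 1 bit/symbol.
print(shannon_entropy("abababab"))  # 1.0
```

The point of the toy is only that “incomprehensibility” is not purely a social construct: there are quantities here with provable floors, and legislating complexity away runs into them.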

I’m drawn to the example of The Social Media Collective’s Critical Algorithm Studies Reading List, which lists Pasquale’s TBBS among many other works, because it opens with precisely the disciplinary gatekeeping that creates what I fear is the blind spot I’m pointing to:

This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

As a result, our list does not contain much writing by computer scientists, nor does it cover potentially relevant work on topics such as quantification, rationalization, automation, software more generally, or big data, although these interests are well represented in the reference sections of the works themselves.

This area is growing in size and popularity so quickly that many contributions are popping up without reference to work from disciplinary neighbors. One goal for this list is to help nascent scholars of algorithms to identify broader conversations across disciplines and to avoid reinventing the wheel or falling into analytic traps that other scholars have already identified.

This reading list is framed as a tool for scholars, which it no doubt is. But if contributors to this field of scholarship aspire, as Pasquale does, for “critical algorithm studies” to have real policy ramifications, then this disciplinary wall must fall (as I’ve argued elsewhere).

organizational secrecy and personal privacy as false dichotomy cf @FrankPasquale

I’ve turned from page 2 to page 3 of The Black Box Society (I can be a slow reader). Pasquale sets up the dichotomy on which the drama of the book hinges like so:

But while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods”, and gag rules, our own lives are increasingly open books. Everything we do online is recorded; the only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

That incongruity is the focus of this book.

This is a rhetorically powerful paragraph, and it captures a lot of the trepidation people have about the power of large organizations relative to themselves.

I have been inclined to agree with this perspective for a lot of my life. I used to be the kind of person who thought Everything Should Be Open. Since then, I’ve developed what I think is a more nuanced view of transparency: some secrecy is necessary. It can be especially necessary for powerful organizations and people.


Is organizational secrecy really at odds with personal privacy, though? Well, it depends on the physical properties of information. (Here is an example of how a proper understanding of the mechanics of information can support the transcendent project, as opposed to a merely critical project.)

Any time you interact with something or somebody else in a meaningful way, you affect each other’s state in probabilistic space. That means there has been some kind of flow of information. If an organization interacts with a lot of people, it is going to absorb information about a lot of people. Recording this information as ‘data’ is something that has been done for a long time, because that is what allows organizations to do intelligent things vis-à-vis the people they interact with. So businesses, financial institutions, and governments recording information about people is nothing new.
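The claim that interaction implies information flow can be given a toy formalization (mine, not Pasquale’s; the distributions below are invented for illustration). Mutual information measures, in bits, how much one variable reveals about another, and it is positive exactly when the two are statistically dependent.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution {(x, y): probability}.
    Positive iff X and Y are dependent, i.e. information has flowed."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# An organization's record y of a person's attribute x, correct 90% of the time:
noisy = {("x0", "y0"): 0.45, ("x0", "y1"): 0.05,
         ("x1", "y0"): 0.05, ("x1", "y1"): 0.45}
# No interaction: the record is independent of the attribute.
independent = {(x, y): 0.25 for x in ("x0", "x1") for y in ("y0", "y1")}

print(round(mutual_information(noisy), 3))  # 0.531
print(mutual_information(independent))      # 0.0
```

Read against the paragraph above: any organization that interacts meaningfully with people ends up holding a positive quantity of information about them, whether or not it deliberately writes anything down.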

Pasquale suggests that this recording is a threat to our privacy, and that the secrecy of the organizations that do the recording gives them power over us. But this is surely a false dichotomy. Why? Because if an organization records information about a lot of people, and then doesn’t maintain some kind of secrecy, then that information is no longer private! To, like, everybody else. In other words, maintaining secrecy is one way of ensuring confidentiality, which is surely an important part of privacy.

I wonder what happens if we continue to read The Black Box Society with this link between secrecy, confidentiality, and privacy in mind.

Marcuse on the transcendent project

Perhaps you’ve had this moment: it’s in the wee hours of the morning. You can’t sleep. The previous day was another shock to your sense of order in the universe and your place in it. You’ve begun to question your political ideals, your social responsibilities. Turning aside you see a book you read long ago that you remember gave you a sense of direction–a direction you have since repudiated. What did it say again?

I’m referring to Herbert Marcuse’s One-Dimensional Man, published in 1964. Whitfield in Dissent has a great summary of Marcuse’s career: a meteoric rise, a fast fall. He was a student of Heidegger and the Frankfurt School and applied that theory in a timely way in the ’60s.

My memory of Marcuse had been reduced to the Frankfurt School themes: technology transforming all scientific inquiry into operationalization, and the resulting cultural homogeneity. I believe now that I had forgotten at least two important points.

The first is the notion of technological rationality: the idea that pervasive technology changes what people think of as rational. This is different from instrumental rationality, the means-ends rationality of an agent, which Frankfurt School thinkers tend to believe drives technological development and adoption. Rather, this is a claim about the effect of technology on society’s self-understanding. An example might be how the ubiquity of Facebook has changed our perception of personal privacy.

So Marcuse is very explicit about how artifacts have politics in a very thick sense, though he is rarely cited in contemporary scholarly discourse on the subject. Credit for this concept typically goes to Langdon Winner, citing his 1980 publication “Do Artifacts Have Politics?” Fred Turner’s From Counterculture to Cyberculture gives only the briefest mention to Marcuse, despite his impact on the counterculture and his concern with technology. I suppose this means the New Left, associated with Marcuse, had little to do with the emergence of cyberculture.

More significant for me than this point was a second one: Marcuse’s outline of the transcendent project. I’ve been thinking about this recently because I’ve met a Kantian at Berkeley, which has refreshed my interest in transcendental idealism and its intellectual consequences. In particular, Foucault described himself as one following Kant’s project, and in our discussion of Foucault in Classics it became discursively clear, in a moment I may never forget, precisely how well Foucault succeeded in this.

The revealing question was this. For Foucault, all knowledge exists in a particular system of discipline and power. Scientific knowledge orders reality in such and such a way, depends for its existence on institutions that establish the authority of scientists, etc. Fine. So, one asks, what system of power does Foucault’s knowledge participate in?

The only available answer is: a new one, where Foucauldeans critique existing modes of power and create discursive space for modes of life beyond existing norms. Foucault’s ideas are tools for transcending social systems and opening new social worlds.

That’s great for Foucault, and we’ve seen plenty of counternormative social movements make successful use of him. But that doesn’t help with the problems of the technologization of society. Here, Marcuse is more relevant. He is also much more explicit about his philosophical intentions in, for example, this account of the transcendent project:

(1) The transcendent project must be in accordance with the real possibilities open at the attained level of the material and intellectual culture.

(2) The transcendent project, in order to falsify the established totality, must demonstrate its own higher rationality in the threefold sense that

(a) it offers the prospect of preserving and improving the productive achievements of civilization;

(b) it defines the established totality in its very structure, basic tendencies, and relations;

(c) its realization offers a greater chance for the pacification of existence, within the framework of institutions which offer a greater chance for the free development of human needs and faculties.

Obviously, this notion of rationality contains, especially in the last statement, a value judgment, and I reiterate what I stated before: I believe that the very concept of Reason originates in this value judgment, and that the concept of truth cannot be divorced from the value of Reason.

I won’t apologize for Marcuse’s use of the dialect of German Idealism, because if I had my way the kinds of concepts he employs, and the capitalization of the word Reason, would come back into common use in educated circles. Graduate school has made me extraordinarily cynical, but not so cynical that it has shaken my belief that an ideal (really, any ideal, but in particular one as robust as Reason) is important for making society not suck, and that it’s appropriate to transmit such an ideal (and perhaps only this ideal) through the institution of the university. These are old-fashioned ideas, and honestly I’m not sure how I acquired them myself. But this is a digression.

My point is that in this view of societal progress, society can improve itself, but only by transcending itself and in its moment of transcendence freely choosing an alternative that expands humanity’s potential for flourishing.

“Peachy,” you say. “Where’s the so what?”

Besides that I think the transcendent project is a worthwhile project that we should collectively try to achieve? Well, there’s this: I think that most people have given up on the transcendent project, and that this is a shame. Specifically, I’m disappointed in the critical project, which has since the ’60s become enshrined within the social system, for no longer aspiring to transcendence. Criticality has, alas, been recuperated. (I have in mind here, for example, what has been called critical algorithm studies.)

And then there’s this: Marcuse’s insight into the transcendent project is that it has to “be in accordance with the real possibilities open at the attained level of the material and intellectual culture” and also that “it defines the established totality in its very structure, basic tendencies, and relations.” It cannot transcend anything without first including all of what is there. And this is precisely the weakness of the critical project as it now stands: it excludes the mathematical and engineering logic that is at the heart of contemporary technics, and thereby, despite its lip service to giving technology first-class citizenship within its Actor Network, in fact fails to “define the established totality in its very structure, basic tendencies, and relations.” There is a very important body of theoretical work at the foundation of computer science and statistics: the theory that grounds the instrumental force, and also the systemic ubiquity, of information technology and now data science. The continued crises of our now very, very late modern capitalism are due partly, IMHO, to our failure to dialectically synthesize the hegemonic computational paradigm, which is not going to be defeated by ‘refusal’, with the expressions of human interest that resist it.

I’m hopeful, because recently I’ve learned about new research agendas that may be on their way to accomplishing just this. I doubt they will take on the perhaps too grandiose mantle of “the transcendent project.” But I for one would be glad if they did.

Is the opacity of governance natural? cf @FrankPasquale

I’ve begun reading Frank Pasquale’s The Black Box Society on the recommendation that it’s a good place to start if I’m looking to focus a defense of the role of algorithms in governance.

I’ve barely started and already found lots of juicy material. For example:

Gaps in knowledge, putative and real, have powerful implications, as do the uses that are made of them. Alan Greenspan, once the most powerful central banker in the world, claimed that today’s markets are driven by an “unredeemably opaque” version of Adam Smith’s “invisible hand,” and that no one (including regulators) can ever get “more than a glimpse at the internal workings of the simplest of modern financial systems.” If this is true, libertarian policy would seem to be the only reasonable response. Friedrich von Hayek, a preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to benevolent government intervention in the economy.

But what if the “knowledge problem” is not an intrinsic aspect of the market, but rather is deliberately encouraged by certain businesses? What if financiers keep their doings opaque on purpose, precisely to avoid and confound regulation? That would imply something very different about the merits of deregulation.

The challenge of the “knowledge problem” is just one example of a general truth: What we do and don’t know about the social (as opposed to the natural) world is not inherent in its nature, but is itself a function of social constructs. Much of what we can find out about companies, governments, or even one another, is governed by law. Laws of privacy, trade secrecy, the so-called Freedom of Information Act–all set limits to inquiry. They rule certain investigations out of the question before they can even begin. We need to ask: To whose benefit?

There are a lot of ideas here. Trying to break them down:

  1. Markets are opaque.
  2. If markets are naturally opaque, that is a reason for libertarian policy.
  3. If markets are not naturally opaque but rather opaque on purpose, then that’s a reason to regulate in favor of transparency.
  4. As a general social truth, the social world is not naturally opaque but rather opaque or transparent because of social constructs such as law.

We are meant to conclude that markets should be regulated for transparency.

The most interesting claim to me is what I’ve listed as the fourth one, as it conveys a worldview that is both disputable and which carries with it the professional biases we would expect of the author, a Professor of Law. While there are certainly many respects in which this claim is true, I don’t yet believe it has the force necessary to carry the whole logic of this argument. I will be particularly attentive to this point as I read on.

The danger I’m on the lookout for is one where the complexity of the integration of society, which following Beniger I believe to be a natural phenomenon, is treated as a politically motivated social construct and therefore something that should be changed. It is really only the part after the “and therefore” which I’m contesting. It is possible for politically motivated social constructs to be natural phenomena. All institutions have winners and losers relative to their power. Who would a change in policy towards transparency in the market benefit? If opacity is natural, it would shift the opacity to some other part of society, empowering a different group of people. (Possibly lawyers).

If opacity is necessary, then perhaps we could read The Black Box Society as an expression of the general problem of alienation. It is way premature for me to attribute this motivation to Pasquale, but it is a guiding hypothesis that I will bring with me as I read the book.


  • Apparently a lot of the economics/complex systems integration work that I wish I were working on has already been done by Sam Bowles. I’m particularly interested in what he has to say about inequality, though lately I’ve begun to think inequality is inevitable. I’d like his work to prove me wrong. His work on alternative equilibria in institutional economics also sounds good. I’m looking for ways to formally model Foucauldian social dynamics and this literature seems like a good place to start.
  • A friend of a friend who works on computational modeling of quantum dynamics has assured me that to physicists quantum uncertainty is qualitatively different from subjective uncertainty due to, e.g., chaos. This is disappointing because I’ve found the cleanliness of thoroughgoing Bayesianism about probability very compelling. However, it does suggest a link between chaos theory and logical uncertainty that is perhaps promising.
  • The same person pointed out insightfully that one of the benefits of capitalism is that it makes it easier to maintain one’s relative social position. Specifically, it is easier to maintain wealth than it is to maintain one’s physical capacity to defend oneself from violence. And it’s easier to maintain capital (reinvested wealth) than it is to maintain raw wealth (i.e. cash under the mattress). So there is something inherently conservative about capitalism’s effect on the social order, since it comes with rule of law to protect investments.
  • I can see all the traffic to it but I still can’t figure out why this post about Donna Haraway is now my most frequently visited blog post. I wish everyone who read it would read the Elizabeth Anderson SEP article on Feminist Epistemology and Philosophy of Science. It’s superb.
  • The most undercutting thing to Marxism and its intellectual descendants would be the conclusion that market dynamics are truly based in natural law and are not reified social relations. Thesis: Pervasive sensing and computing might prove once and for all that these market dynamics are natural laws. Anti-thesis: It might prove once and for all that they are not natural laws. Question: Is any amount of empirical data sufficient to show that social relations are or are not natural, or is there something contradictory in the sociological construction of knowledge that would prevent it from having definitive conclusions about its own collective consciousness? (Insert Godel/Halting Problem intuition here) ANSWER: The Big Computer does not have to participate in collective intelligence. It is all knowing. It is all-seeing. It renders social relations in its image. Hence, capitalism can be undone by giving capital so much autonomous control of the economy that the social relations required for it are obsolete. But what next?
  • With justice so elusive, science becomes a path to Gnosticism and other esoterica.

functional determinism or overfitting to chaos

It’s been a long time since I read any Foucault.

The last time I tried, I believe the writing made me angry. He jumps around between anecdotes, draws spurious conclusions. At the time I was much sharper and more demanding and would not tolerate a fallacious logical inference.

It’s years later and I am softer and more flexible. I’m finding myself liking Foucault more, even compelled by his arguments. But I think I was just able to catch myself believing something I shouldn’t have, and needed to make a note.

Foucault brilliantly takes a complex phenomenon–like a prison and the society around it–and traces how its rhetoric, its social effects, etc. all reinforce each other. He describes a complex, and convinces the reader that the complex is a stable unit in society. Delinquency is not the failure of prison, it is the success of prison, because it is a useful category of illegality made possible by the prison. Etc.

I believe this qualifies as “rich qualitative analysis.” Qualitative work has lately been lauded for its “richness”, which is an interesting term. I’m thinking, for example, of the Human Centered Data Science CfP for CSCW 2016.

With this kind of work–is Foucault a historian? a theorist?–there is always the question of generalizability. What makes Foucault’s account of prisons compelling to me today is that it matches my conception of how prisons still work. I have heard a lot about prisons. I watched The Wire. I know about the cradle-to-prison system.

No doubt these narratives were partly inspired, enabled, by Foucault. I believe them, not having any particular expertise in crime, because I have absorbed an ideology that sees the systemic links between these social forces.

Here is my doubt: what if there are even more factors in play than have been captured by Foucault or a prevailing ideology of crime? What if prisons both, paradoxically, create delinquency and also reform criminals? What if social reality is not merely poststructural, but unstructured, and the narratives we bring to bear on it in order to understand it are rich because they leave out complexity, not because they bring more of it in?

Another example: the ubiquitous discourse on privilege and its systemic effect of reproducing inequality. We are told to believe in systems of privilege–whiteness, wealth, masculinity, and so on. I will confess: I am one of the Most Privileged Men, and so I can see how these forms of privilege reinforce each other (or not). But I can also see variations to this simplistic schema, alterations, exceptions.

And so I have my suspicions. Inequality is reproduced; we know this because the numbers (about income, for example) are distributed in bizarre proportions. 1% owns 99%! It must be because of systemic effects.

But we know now that many of the distributions we once believed were power law distributions created by generative processes such as preferential attachment are really log normal distributions, which are quite different. This is an empirically detectable difference whose implications are quite profound.


This is because a log normal distribution is created not by any precise “rich get richer” dynamic, but rather by any process in which random variables are multiplied together. As a result, you get extreme inequality in a distribution simply by virtue of how the various random factors contributing to it are mathematically combined (multiplicatively), as opposed to any precise determination of the factors upon each other.
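The multiplicative mechanism is easy to demonstrate by simulation. Here is a minimal sketch (the number of people, number of factors, and factor range are all arbitrary illustrative assumptions, not an empirical model): each outcome is a product of independent random advantage factors, and the result is extremely skewed even though no factor rewards or determines any other.

```python
import math
import random
import statistics

random.seed(0)

def multiplicative_outcomes(n_people=10_000, n_factors=20):
    """Each person's outcome is the product of many independent random
    advantage factors. By the central limit theorem applied to the logs,
    the result is approximately log normal."""
    outcomes = []
    for _ in range(n_people):
        w = 1.0
        for _ in range(n_factors):
            w *= random.uniform(0.5, 1.5)  # a small random advantage or setback
        outcomes.append(w)
    return outcomes

outcomes = sorted(multiplicative_outcomes(), reverse=True)

# The raw outcomes are heavily skewed: the mean far exceeds the median,
# and the top 1% holds far more than 1% of the total.
top_1_pct_share = sum(outcomes[:100]) / sum(outcomes)
print(f"mean/median ratio: {statistics.mean(outcomes) / statistics.median(outcomes):.2f}")
print(f"share held by top 1%: {top_1_pct_share:.2%}")

# The logs of the same outcomes are roughly symmetric (normal-looking):
# the skew is an artifact of multiplicative combination, not of any
# "rich get richer" feedback between the factors.
logs = [math.log(x) for x in outcomes]
print(f"log-scale mean minus median: {statistics.mean(logs) - statistics.median(logs):.3f}")
```

Nothing in the sketch gives already-high outcomes any advantage in acquiring more; replacing the multiplication with addition would instead yield a roughly symmetric normal distribution, which is the contrast at issue.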

The implication of this is that no particular reform is going to remove the skew from the distribution as long as people are not prevented from efficiently using their advantage–whatever it is–to get more advantage. Rather, reforms short of the extreme end (extremes such as reparations or land reform) are unlikely to change the equity outcome except from the politically motivated perspective of an interest group.

I was pretty surprised when I figured this out! The implication is that a lot of things that look very socially structured are actually explained by basic mathematical principles. I’m not sure what the theoretical implications of this are but I think there’s going to be a chapter in my dissertation about it.

repopulation as element in the stability of ideology

I’m reading the fourth section of Foucault’s Discipline and Punish, about ‘Prison’, for the first time for I School Classics.

A striking point made by Foucault is that while we may think there is a chronology of the development of penitentiaries whereby they are designed, tested, critiqued, reformed, and so on, until we get a progressively improved system, this is not the case. Rather, at the time of Foucault’s writing, the logic of the penitentiary and its critiques had happily coexisted for a hundred and fifty years. Moreover, the failures of prisons–their contribution to recidivism and the education and organization of delinquents, for example–could only be “solved” by the reactivation of the underlying logic of prisons–as environments of isolation and personal transformation. So prison “failure” and “solution”, as well as (often organized) delinquency and recidivism, in addition to the architecture and administration of prison, are all part of the same “carceral system” which endures as a complex.

One wonders why the whole thing doesn’t just die out. One explanation is repopulation. People are born, live for a while, reproduce, live a while longer, and die. In the process, they must learn through education and experience. It’s difficult to rush personal growth. Hence, systematic errors that are discovered through 150 years of history are difficult to pass on, as each new generation will be starting from inherited priors (in the Bayesian sense) which may under-rank these kinds of systemic effects.

In effect, our cognitive limitations as human beings are part of the sociotechnical systems in which we play a part. And though it may be possible to grow out of such a system, there is a constant influx of the younger and more naive who can fill the ranks. Youth captured by ideology can be moved by promises of progress or denunciations of injustice or contamination, and thus new labor is supplied to turn the wheels of institutional machinery.

Given the environmental unsustainability of modern institutions despite their social stability under conditions of repopulation, one has to wonder: whatever happened to the phenomenon of eco-terrorism?

cross-cultural links between rebellion and alienation

In my last post I noted that the contemporary American problem that the legitimacy of the state is called into question by distributional inequality is a specifically liberal concern based on certain assumptions about society: that it is a free association of producers who are otherwise autonomous.

Looking back to Arendt, we can find the roots of modern liberalism in the polis of antiquity, where democracy was based on free association of landholding men whose estates gave them autonomy from each other. Since then, economics, the science that once concerned itself with managing the household (oikos, house + nomos, managing), has been elevated to the primary concern of the state and the organizational principle of society. One way to see the conflict between liberalism and social inequality is as the tension between the ideal of freely associating citizens that together accomplish deeds and the reality of societal integration with its impositions on personal freedom and unequal functional differentiation.

Historically, material autonomy was a condition for citizenship. The promise of liberalism is universal citizenship, or political agency. At first blush, to accomplish this, either material autonomy must be guaranteed for all, or citizenship must be decoupled from material conditions altogether.

The problem with this model is that societal agency, as opposed to political agency, is always conditioned both materially and by society (Does this distinction need to be made?). The progressive political drive has recognized this with its unmasking and contestation of social privilege. The populist right wing political drive has recognized this with its accusations that the formal political apparatus has been captured by elite politicians. Those aspects of citizenship that are guaranteed as universal–the vote and certain liberties–are insufficient for the effective social agency on which political power truly depends. And everybody knows it.

This narrative is grounded in the experience of the United States and, going back, in the history of “The West”. It appears to be a perennial problem over cultural time. There is some evidence that it is also a problem across cultural space. Hannah Arendt argues in On Violence (1969) that the attraction of using violence against a ruling bureaucracy (which is a political hypostatization of societal alienation more generally) is cross-cultural.

“[T]he greater the bureaucratization of public life, the greater will be the attraction of violence. In a fully developed bureaucracy there is nobody left with whom one can argue, to whom one can present grievances, on whom the pressures of power can be exerted. Bureaucracy is the form of government in which everybody is deprived of political freedom, of the power to act; for the rule by Nobody is not no-rule, and where all are equally powerless we have tyranny without a tyrant. The crucial feature of the student rebellions around the world is that they are directed everywhere against the ruling bureaucracy. This explains what at first glance seems so disturbing–that the rebellions in the East demand precisely those freedoms of speech and thought that the young rebels in the West say they despise as irrelevant. On the level of ideologies, the whole thing is confusing: it is much less so if we start from the obvious fact that the huge party machines have succeeded everywhere in overruling the voice of citizens, even in countries where freedom of speech and association is still intact.”

The argument here is that the moral instability resulting from alienation from politics and society is a universal problem of modernity that transcends ideology.

This is a big problem if we keep turning decision-making authority over to algorithms.

