Digifesto

Habitus Shadow

In Bourdieu’s sociological theory, habitus refers to the dispositions of taste and action that individuals acquire as a practical consequence of their place in society. Society provides a social field (a technical term for Bourdieu) of structured incentives and roles. Individuals adapt to roles rationally, but in doing so culturally differentiate themselves. This process is dialectical, hence neither strictly determined by the field nor by individual rational agency, but a co-creation of each. One’s posture, one’s preference for a certain kind of music, one’s disposition to engage in sports, one’s disposition to engage in intellectual debate: all are potentially elements of a habitus.

In Jungian psychoanalytic theory, the shadow is the aspect of personality that is unconscious and not integrated with the ego, i.e. what one consciously believes oneself to be. Often it is the instinctive or irrational part of one’s psychology. A person with an undeveloped psyche is likely to see their own shadow aspect in others and judge them harshly for it; this is a form of psychological projection motivated by repression for the sake of maintaining the ego. Encounters with the shadow are difficult. Often they are experienced as the awareness or suspicion of some new information that threatens one’s very sense of self. But these encounters are, for Jung, an essential part of individuation, as they are how the personality develops a more complete consciousness of itself.

Perhaps you can see where this is going.

I propose a theoretical construct: the habitus shadow.

When an individual, situated within a social field, develops a habitus, they may do so with an incomplete consciousness of the reasons for their preferences and dispositions for action. An ego, a conscious rationalization, will develop; it will be reinforced by others who share the habitus. The dispositions of a habitus will include the collectively constructed ego of its members, which is itself a psychological disposition.

We would then expect that a habitus has a characteristic shadow: truths about the sociological conditions of a habitus which are not part of the conscious self-identity or ego of that habitus.

This is another way to talk about what I have discussed elsewhere as an ideological immune reaction. If an idea or understanding is so challenging or destructive to the ego of a habitus that it calls into question the rationality of its very existence, then the habitus will be able to maintain itself only through a kind of repression/projection/exclusion. Alternatively, if the habitus can assimilate its shadow, one could see that as a form of social self-transcendence or progress.

Responsible participation in complex sociotechnical organizations circa 1977 cc @Aelkus @dj_mosfett

Many extant controversies around technology were documented in 1977 by Langdon Winner in Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. I would go so far as to say most extant controversies, though I don’t think he addresses anything having to do with gender, for example.

Consider this discussion of moral education of engineers:

“The problems for moral agency created by the complexity of technical systems cast new light on contemporary calls for more ethically aware scientists and engineers. According to a very common and laudable view, part of the education of persons learning advanced scientific skills ought to be a full comprehension of the social implications of their work. Enlightened professionals should have a solid grasp of ethics relevant to their activities. But, one can ask, what good will it do to nourish this moral sensibility and then place the individual in an organizational situation that mocks the very idea of responsible conduct? To pretend that the whole matter can be settled in the quiet reflections of one’s soul while disregarding the context in which the most powerful opportunities for action are made available is a fundamental misunderstanding of the quality genuine responsibility must have.”

A few thoughts.

First, this reminds me of a conversation @Aelkus, @dj_mosfett, and I had the other day. The question was: who should take moral responsibility for the failures of sociotechnical organizations (conceived of as, for example, corporations running a web service)?

Second, I’ve been convinced again lately (reminded?) of the importance of context. I’ve been looking into Chaiklin and Lave’s Understanding Practice again, which is largely about how important it is to take context into account when studying any social system that involves learning. More recently I’ve been looking into Nissenbaum’s contextual integrity theory. According to her theory, which is now widely used in the design and legal privacy literature, norms of information flow are justified by the purpose of the context in which they are situated. So, for example, in an ethnographic context, the norms of information flow most critical for maintaining trusted relationships with one’s subjects are the most important.
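
As a very rough sketch of how an informational norm can be treated as a structured object (the five-parameter breakdown follows Nissenbaum’s Privacy in Context; the class and field names are my own invention, not hers):

```python
from dataclasses import dataclass

@dataclass
class InformationNorm:
    """An informational norm in roughly Nissenbaum's terms: who may send
    what kind of information about whom to whom, under what transmission
    principle, all relative to the purpose of a social context."""
    context: str                 # e.g. "ethnography", "corporation"
    sender: str
    recipient: str
    subject: str
    information_type: str
    transmission_principle: str  # e.g. "in confidence", "with consent"

# The ethnographic example from the text: a flow that maintains
# trusted relationships with one's subjects.
ethnography_norm = InformationNorm(
    context="ethnography",
    sender="researcher",
    recipient="readers of the study",
    subject="research subject",
    information_type="field observations",
    transmission_principle="anonymized, with informed consent",
)
```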

But in a corporate context, where the purpose of the context is to maximize shareholder value, wouldn’t those who keep the moral failures of their organization shrouded in the complexity of its machinery be acting in accordance with the norms of information flow, and hence be perfectly justified in their actions?

I’m not seriously advocating this view, of course. I’m just asking it rhetorically, as it seems like a potential weakness in contextual integrity theory that it does not endorse the actions of, for example, corporate whistleblowers. Or is it? Are corporate whistleblowers the same as national security whistleblowers? Or as Wikileaks?

One way around this would be to consider contexts as nested or overlapping, with ethics contextualized to those “spaces.” So a corporate whistleblower would be doing something bad for the company but good for society, assuming there wasn’t some larger social cost to the loss of confidence in that company. (It occurs to me that in this sort of situation, perhaps threatening internally to blow the whistle unless the problem is solved would be the responsible strategy. As they say,

Making progress with the horns is permissible
Only for the purpose of punishing one’s own city.

)

Anyway, it’s a cool topic to think about: what an information-theoretic account of responsibility would look like. It’s tied to autonomy. I bet it’s doable.
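
To make that bet slightly more concrete, here is one toy way to start. Treating an agent’s responsibility as the mutual information between their actions and the organization’s outcomes is my own speculation, not an established account, and the data below are invented:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate I(action; outcome) in bits from (action, outcome) pairs."""
    n = len(pairs)
    joint = {k: v / n for k, v in Counter(pairs).items()}
    p_a = Counter(a for a, _ in pairs)
    p_o = Counter(o for _, o in pairs)
    return sum(
        p * math.log2(p / ((p_a[a] / n) * (p_o[o] / n)))
        for (a, o), p in joint.items()
    )

# A hypothetical history of an engineer's deploy decisions and outages.
history = ([("deploy", "outage")] * 40 + [("deploy", "ok")] * 10
           + [("hold", "ok")] * 45 + [("hold", "outage")] * 5)

# ~0.40 bits: the action is informative about the outcome, so on this
# toy account some responsibility attaches to the agent.
print(mutual_information(history))
```

On this account, an agent whose actions make no difference to the outcome (zero mutual information) bears no responsibility for it, which seems like the right degenerate case.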

Bourdieu and Horkheimer; towards an economy of control

It occurred to me as I looked over my earliest notes on Horkheimer (almost a year ago!) that Bourdieu’s concept of science as a social field that formalizes and automates knowledge is Horkheimer’s idea of hell.

The danger Horkheimer (and so many others) saw in capitalist, instrumentalized, scientific society was that it would alienate and overwhelm the individual.

It is possible that society would alienate the individual anyway, though. For example, in the household of antiquity, were slaves unalienated? The privilege of autonomy is one that has always been rare but disproportionately articulated as normal, even a right. In a sense Western Democracies and Republics exist to guarantee autonomy to their citizens. In late modern democracies, autonomy is variable depending on role in society, which is tied to (economic, social, symbolic, etc.) capital.

So maybe the horror of Horkheimer, alienated by scientific advance, is the horror of one whose capital was being devalued by science. His scholarship, his erudition, were isolated and deemed irrelevant by the formal reasoners who had come to power.

As I write this, I am painfully aware that I have spent a lot of time in graduate school reading books and writing about them when I could have been practicing programming and learning more mathematics. My aspiration is to be a scientist, and I am well aware that that requires one to mathematically formalize one’s findings, or, equivalently, to program them into a computer. (It goes without saying that computer programming is formalism, is automation, and so its central role in contemporary science or ‘data science’ is almost given to it by definition. It could not have been otherwise.)

Somehow I have been provoked into investing myself in a weaker form of capital, the benefit of which is the understanding that I write here, now.

Theoretically, the point of doing all this work is to be able to identify a societal value and formalize it so that it can be captured in a technical design. Perhaps autonomy is this value. Another might call it freedom. So once again I am reminded of Simone de Beauvoir’s philosophy of science, which has been correct all along.

But perhaps de Beauvoir was naive about the political implications of technology. Science discloses possibilities, but because science is socially situated, those opportunities are distributed unequally. Inequality leads to more alienation, not less, for all but the scientists. Meanwhile, autonomy is not universally valued: some would prefer the comforts of society, of family structure. If freed from society, they would choose to reenter it. Much of one’s preferences must come from habitus, no?

I am indeed reaching the limits of my ability to consider the problem discursively. The field is too multidimensional, too dynamic. The proper next step is computer simulation.
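
For what it’s worth, here is the barest sketch of what such a simulation might look like. Every modeling choice below (linear payoffs, hill-climbing agents, a drifting field) is a placeholder, not a claim about the right formalization:

```python
import random

N_AGENTS, N_DIMS, N_STEPS = 100, 5, 200

def payoff(position, field):
    """Capital the field returns for a disposition; a stand-in for the
    structured incentives of a Bourdieusian social field."""
    return sum(w * x for w, x in zip(field, position))

random.seed(0)
field = [random.gauss(0, 1) for _ in range(N_DIMS)]
agents = [[random.gauss(0, 1) for _ in range(N_DIMS)] for _ in range(N_AGENTS)]

for _ in range(N_STEPS):
    for agent in agents:
        d = random.randrange(N_DIMS)
        trial = agent[:]
        trial[d] += random.gauss(0, 0.1)    # a small shift in disposition
        if payoff(trial, field) > payoff(agent, field):
            agent[d] = trial[d]             # habitus adapts to incentives
    # The field drifts toward the agents' aggregate dispositions:
    # a crude version of the dialectical co-creation of field and habitus.
    for d in range(N_DIMS):
        field[d] += 0.01 * sum(a[d] for a in agents) / N_AGENTS
```

Even a placeholder like this makes the discursive limit concrete: the interesting behavior lives in the coupling between agents and field, which is exactly what prose alone has trouble exploring.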

Heisenberg on technology as an out-of-control biological process

In Langdon Winner’s Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, (1977) there is this quote from Werner Heisenberg’s Physics and Philosophy (1958):

“The enormous success of this combination of natural and technical science led to a strong preponderance of those nations or states or communities in which this kind of activity flourished, and as a natural consequence this activity had to be taken up even by those nations which by tradition would not have been inclined toward natural and technical sciences. The modern means of communication and of traffic finally completed this process of expansion of technical civilization. Undoubtedly the process has fundamentally changed the conditions of life on earth; and whether one approves of it or not, whether one calls it progress or danger, one must realize that it has gone far beyond any control through human forces. One may rather consider it as a biological process on the largest scale whereby structures active in the human organism encroach on larger parts of matter and transform it into a state suited for the increasing human population.”

Mathematics and materiality in Latour and Bourdieu’s sociology of science

Our next reading for I School Classics is Pierre Bourdieu’s Science of Science and Reflexivity (2004). In it, rock star sociologist Bourdieu does a sociology of science, but from the perspective of a sociologist who considers himself a scientist. This is a bit of an upset, because so much of the sociology of science has been dominated by sociologists who draw more from the humanities traditions and whose work undermines the realism of the scientific fact. This realism is something Bourdieu aims to preserve while at the same time providing a realistic sociology of science.

Bourdieu’s treatment of other sociologists of science is for the most part respectful. He appears to have difficulty showing respect for Bruno Latour, whom he delicately dismisses as having become significant via rhetorical tactics while making little in the way of a substantive contribution to our understanding of the scientific process.

By saying facts are artificial in the sense of manufactured, Latour and Woolgar intimate that they are fictitious, not objective, not authentic. The success of this argument results from the ‘radicality effect’, as Yves Gingras (2000) has put it, generated by the slippage suggested and encouraged by skillful use of ambiguous concepts. The strategy of moving to the limit is one of the privileged devices in pursuit of this effect … but it can lead to positions that are untenable, unsustainable, because they are simply absurd. From this comes a typical strategy, that of advancing a very radical position (of the type: scientific fact is a construction or — slippage — a fabrication, and therefore an artefact, a fiction) before beating a retreat, in the face of criticism, back to banalities, that is, to the more ordinary face of ambiguous notions like ‘construction’, etc.

In the contemporary blogosphere this critique has resurfaced through Nicholas Shackel under the name “Motte and Bailey Doctrine” [1, 2], after the Motte and Bailey castle.

A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of pleasantly habitable land (the Bailey), which in turn is encompassed by some sort of a barrier, such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible, and so neither is the Bailey. Rather, one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land.

In the metaphor, the Bailey here is the radical antirealist position wherein scientific facts are fiction, and the Motte is the banal recognition that science is a social process. Shackel writes that “Diagnosis of a philosophical doctrine as being a Motte and Bailey Doctrine is invariably fatal.” While this might be true in the world of philosophical scrutiny, it is unfortunately not sociologically correct. Academic traditions die hard, even long after the luminaries who started them have changed their minds.

Latour repudiated his own radical position in “Why Has Critique Run out of Steam? From Matters of Fact to Matters of Concern” (2004), and his “Tarde’s idea of quantification” (2010) offers an insightful look into the potential of quantified sociology when we have rich qualitative data sets that show us the inner connectivity of societies. Late Latour is bullish about the role of quantification in sociology, though he believes it may require a different use of statistics than has traditionally been employed in the natural sciences. Recently developed algorithmic methods for understanding network data prove this point in practice. Late Latour has more or less come around to the “Big Data” scientific consensus on the matter.

This doesn’t stop Latour from being used rather differently. Consider boyd and Crawford’s “Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon” (2012), and its use of this very paper of Latour:

‘Numbers, numbers, numbers,’ writes Latour (2010). ‘Sociology has been obsessed by the goal of becoming a quantitative science.’ Sociology has never reached this goal, in Latour’s view, because of where it draws the line between what is and is not quantifiable knowledge in the social domain.

Big Data offers the humanistic disciplines a new way to claim the status of quantitative science and objective method. It makes many more social spaces quantifiable. In reality, working with Big Data is still subjective, and what it quantifies does not necessarily have a closer claim on objective truth – particularly when considering messages from social media sites. But there remains a mistaken belief that qualitative researchers are in the business of interpreting stories and quantitative researchers are in the business of producing facts. In this way, Big Data risks reinscribing established divisions in the long running debates about scientific method and the legitimacy of social science and humanistic inquiry.

While Latour (2010) is arguing for a richly quantified sociology and has moved away from his anti-realist position about scientific results, boyd and Crawford fall back into the same confusing trap set by earlier Latour of denying scientific fact because it is based on interpretation. boyd and Crawford have indeed composed their “provocations” effectively, deploying ambiguous language that can be interpreted as a broad claim that quantitative and humanistic qualitative methods are equivalent in their level of subjectivity, but defended as the banality that there are elements of interpretation in Big Data practice.

Bourdieu’s sociology of science provides a way out of this quagmire by using his concept of the field to illuminate the scientific process. Fields are a way of understanding social structure: they define social positions or roles in terms of their power relations as they create and appropriate different forms of capital (economic, social, etc.). His main insight, which he positions above Latour’s, is that while a sociological investigation of lab conditions will reveal myriad interpretations, controversies, and farces that may convince the Latourian that scientists produce fictions, an understanding of the global field of science, with its capital and incentives, will show how it produces realistic, factual results. So Bourdieu might have answered boyd and Crawford by saying that the differences in legitimacy between quantitative science and qualitative humanism have more to do with the power relations that govern them in their totality than with the local particulars of the social interactions of which they are composed.

In conversation with a colleague who admitted to feeling disciplinary pressure to cite Latour despite his theoretical uselessness to her, I was asked whether Bourdieu has a comparable theory of materiality to Latour’s. This is a great question, since it’s Latour’s materialism that makes him so popular in Science and Technology Studies. The best representation I’ve seen of Bourdieu’s materiality so far is this passage:

“The ‘art’ of the scientist is indeed separated from the ‘art’ of the artist by two major differences: on the one hand, the importance of formalized knowledge which is mastered in the practical state, owing in particular to formalization and formularization, and on the other hand the role of the instruments, which, as Bachelard put it, are formalized knowledge turned into things. In other words, the twenty-year-old mathematician can have twenty centuries of mathematics in his mind because formalization makes it possible to acquire accumulated products of non-automatic inventions, in the form of logical automatisms that have become practical automatisms.

The same is true as regards instruments: to perform a ‘manipulation’, one uses instruments that are themselves scientific conceptions condensed and objectivated in equipment functioning as a system of constraints, and the practical mastery that Polanyi refers to is made possible by an incorporation of the constraints of the instrument so perfect that one is corporeally bound up with it, one responds to its expectations; it is the instrument that leads. One has to have incorporated much theory and many practical routines to be able to fulfil the demands of the cyclotron.”

I want to go so far as to say that in these two paragraphs we have the entire crux of the debate about scientific (and especially data scientific) method and its relationship to qualitative humanism (which Bourdieu would perhaps consider an ‘art’). For here we see that what distinguishes the sciences is not merely that they quantify their object (Bourdieu does not use the term ‘quantification’ here at all), but that they revolve around a cumulative mathematical formalism which guides both practice and instrument design. The scientific field aims toward this formalization because it creates knowledge as a capital that can be transferred efficiently to new scientists, enabling new discoveries. In many ways this is a familiar story from economics: labor condenses into capital, which provides new opportunities for labor.
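
A trivial illustration of Bachelard’s “formalized knowledge turned into things,” transposed to a contemporary instrument (the example is mine, not Bourdieu’s):

```python
import numpy as np

# One call that 'contains' centuries of accumulated formalism: linear
# algebra, eigentheory, and decades of numerical analysis, condensed
# into an instrument whose constraints the practitioner incorporates
# without rederiving any of it.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.linalg.eigvalsh(A))  # eigenvalues of a symmetric matrix
```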

The simple and realistic view that formal, technical knowledge is a kind of capital explains many of the phenomena we see today around data science in industry and education. It also explains the pervasiveness of the humanistic critique that science is merely another kind of humanism: it is an advertising campaign to devalue technical capital and promote the forms of capital associated with the humanities as an alternative. The Bailey of desirable land is intellectual authority in an increasingly technocratic society; the Motte is the banal observation of social activity.

This is not to say that the cultural capital of the humanities is not valuable in its own right. However, it does raise questions about the role of habitus in determining taste for the knowledge as art, a topic discussed in depth in Bourdieu’s Distinction. My own view is that while there is a strong temptation towards an intellectual factionalism, especially in light of the unequal distribution of capital (of various kinds) in society, this is ultimately a pernicious trend. I would prefer a united field.

late modern social epistemology round up; technical vs. hermeneutical correctness

Consider on the one hand what we might call Habermasian transcendental pragmatism, according to which knowledge can be categorized by how it addresses one of several generalized human interests:

  • The interest of power over nature or other beings, being technical knowledge
  • The interest of agreement with others for the sake of collective action, being hermeneutic knowledge
  • The interest of emancipation from present socially imposed conditions, being critical or reflexive knowledge

Consider in contrast what we might call the Luhmann or Foucault model, in which knowledge is created via system autopoiesis. Luhmann talks about autopoiesis in a social system; Foucault talks about knowledge in a system of power in much the same way.

It is difficult to reconcile these views. This may be what was at the heart of the Habermas-Luhmann debate. Can we parse out the problem in any way that helps reconcile these views?

First, let’s consider the Luhmann view. We might ease the tension in it by renaming what we’ve called “knowledge” something like “belief,” removing the implication that the belief is true. Indeed, autopoiesis is a powerful enough process that it seems it would preserve all kinds of myths and errors, should they be important to the survival of the system in which they circulate.

This picture of knowledge, which we might call evolutionary or, alternatively, historicist, is certainly a relativist one. At the intersection of institutions within which different partial perspectives are embedded, we are bound to see political contest.

In light of this, Habermas’s categorization of knowledge as what addresses generalized human interests can be seen as a way of identifying knowledge that transcends particular social systems. There is a normative component of this theory–knowledge should be such a thing. But there is also a descriptive component. One predicts, under Habermas’s hypothesis, that the knowledge that survives political contest at the intersection of social systems is that which addresses generalized interests.

Something I have perhaps overlooked in the past is the importance of the fact that there are multiple and sometimes contradictory general interests. One persistent difficulty in the search for truth is the conflict between what is technically correct and what is hermeneutically correct.

If a statement or theory is technically correct, then it can be reliably used by agents to predict and control the world. The objects of this prediction and control can be objects, or they can be other agents.

If a statement or theory is hermeneutically correct, then it is the reliable consensus of agents involved in a project of mutual understanding and respect. Hermeneutically correct beliefs might stress universal freedom and potential, a narrative of shared history, and a normative goal of progress against inequality. Another word for ‘hermeneutic’ might be ‘political’. Politically correct knowledges are those shared beliefs without which the members of a polity would not be able to stand each other.

In everyday discourse we can identify many examples of statements that are technically correct but hermeneutically (or politically) incorrect, and vice versa. I will not enumerate them here. In these cases, the technically correct view is identified as “offensive” because in a sense it is a defection from a voluntary social contract. Hermeneutic correctness binds together a particular social system by capturing what participants must agree upon in order for all to safely participate. For a member of that social system to assert their own agency over others, to identify ways in which others may be predicted and controlled without their consent or choice in the matter, is disrespectful. Persistent disrespect results in the ejection of the offender from the polity. (cf. Pasquale’s distinction between “California engineers and New York quants” and “citizens.”)

A cruel consequence of these dynamics is social stratification based on the accumulation of politically forbidden technical knowledge.

We can tell this story again and again: A society is bound together by hermeneutically stable knowledge–an ideology, perhaps. Somebody ‘smart’ begins experimentation and identifies a technical truth that is hermeneutically incorrect, meaning that if the idea were to spread it would erode the consensus on which the social system depends. Perhaps the new idea degrades others by revealing that something believed to be an act of free will is, in fact, determined by nature. Perhaps the new idea is inaccessible to others because it depends on some rare capacity. In any case, it cannot be willfully consented to by the others.

The social system begins to have an immune reaction. Society has seen this kind of thing before. Historically, this idea has led to abuse, exploitation, infamy. Those with forbidden knowledge should be shunned, distrusted, perhaps punished. Those with disrespectful technical ideas are discouraged from expressing them.

Technical knowledge thereby becomes socially isolated. Seeking out its own, it becomes concentrated. Already shunned by society, the isolated technologists put their knowledge to use. They gain advantage. Revenge is had by the nerds.

trust issues and the order of law and technology cf @FrankPasquale

I’ve cut to the last chapter of Pasquale’s The Black Box Society, “Towards an Intelligible Society.” I’m interested in where the argument goes. Now that I’ve gotten through it, I see that the penultimate chapter contains Pasquale’s specific policy recommendations. But as I’m reading not just for policy and framing but also for tone and underlying theoretical commitments, I think it’s worth recording some first impressions before doubling back.

These are some points Pasquale makes in the concluding chapter that I wholeheartedly agree with:

  • A universal basic income would allow more people to engage in high risk activities such as the arts and entrepreneurship and more generally would be great for most people.
  • There should be publicly funded options for finance, search, and information services. A great way to provide these would be to fund the development of open source algorithms for finance and search. I’ve been into this idea for so long and it’s great to see a prominent scholar like Pasquale come to its defense.
  • Regulatory capture (or, as he elaborates following Charles Lindblom, “regulatory circularity”) is a problem. Revolving door participation in government and business makes government regulation an unreliable protector of the public interest.

There is quite a bit in the conclusion about the specifics of regulating the finance industry. An impressive amount of knowledge is presented, and I’ll admit much of it is over my head. I’ll probably have a better sense of it once I read the chapter that deals specifically with finance.

There are some things that I found bewildering or off-putting.

For example, there is a section on “Restoring Trust” that argues that an important problem is that we don’t have enough trust in the reputation and search industries. His solution is to increase the penalties that the FTC and FCC can impose on Google and Facebook for, e.g., privacy violations. The current penalties are too trivial to be an effective deterrent. But, Pasquale argues,

It is a broken enforcement model, and we have black boxes to thank for much of this. People can’t be outraged by what they can’t understand. And without some public concern about the trivial level of penalties for lawbreaking here, there are no consequences for the politicians ultimately responsible for them.

The logic here is a little mad. Pasquale is saying that people are not outraged enough by search and reputation companies to demand harsher penalties, and that this is a problem because people don’t trust these companies enough. The solution is to get people to trust these companies less (to be outraged by them) in order to get regulators to punish the companies more.

This is a bit troubling, but makes sense based on Pasquale’s theory of regulatory circularity, which turns politics into a tug-of-war between interests:

The dynamic of circularity teaches us that there is no stable static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by respecting privacy or investing in sustainable growth), or industry is pushing regulators to promote its own interests.

There’s a simplicity to this that I distrust. It suggests, for one, that there are no public pressures on industry besides the government, such as consumers’ buying power. A lot of Pasquale’s arguments depend on the monopolistic power of certain tech giants. But while network effects are strong, it’s not clear that the problem is so severe that consumers have no market leverage. In many cases tech giants compete with each other even when it looks like they aren’t. For example, many, many people have both Facebook and Gmail accounts. Since there is somewhat redundant functionality in the two, consumers can rather seamlessly allocate their time, which is tied to advertising revenue, according to which service they feel better serves them, or which is better reputationally. So social media (which is a bit like a combination of a search and a reputation service) is not a monopoly. Similarly, if people have multiple search options available to them because, say, they have both Siri on their smartphone and can search Google directly, then that provides an alternative search market.

Meanwhile, government officials are also often self-interested. If there is a road to hell for industry (provide free web services to attain massive scale, then abuse economic lock-in to extract value from customers, then lobby for further rent-seeking), there is a similar road to hell in government: it starts with populist demagoguery, leads to stable government appointment, and then leverages that power for rents in status.

So, power is power. Everybody tries to get power. The question is what you do once you get it, right?

Perhaps I’m reading between the lines too much. Of course, my evaluation of the book should depend most on the concrete policy recommendations which I haven’t gotten to yet. But I find it unfortunate that what seems to be a lot of perfectly sound history and policy analysis is wrapped in a politics of professional identity that I find very counterproductive. The last paragraph of the book is:

Black box services are often wondrous to behold, but our black-box society has become dangerously unstable, unfair, and unproductive. Neither New York quants nor California engineers can deliver a sound economy or a secure society. Those are the tasks of a citizenry, which can perform its job only as well as it understands the stakes.

Implicitly, New York quants and California engineers are not citizens, to Pasquale, a law professor based in Maryland. Do all real citizens live around Washington, DC? Are they all lawyers? If the government were to start providing public information services, either by hosting them themselves or by funding open source alternatives, would he want everyone designing these open algorithms (who would be quants or engineers, I presume) to move to DC? Do citizens really need to understand the stakes in order to get this to happen? When have citizens, en masse, understood anything, really?

Based on what I’ve read so far, The Black Box Society is an expression of a lack of trust in the social and economic power associated with quantification and computing that took off in the past few dot-com booms. Since expressions of distrust toward these industries are nothing new, one might wonder (under the influence of Foucault) how the quantified order and the critique of the quantified order manage to coexist and recreate a system of discipline that includes both and maintains its power as a complex of superficially agonistic forces. I give sincere credit to Pasquale for advocating both serious income redistribution and public investment in open technology as ways of disrupting that order. But when he falls into the trap of engendering partisan distrust, he loses my confidence.

“Transactions that are too complex…to be allowed to exist.” cf @FrankPasquale

I stand corrected; my interpretation of Pasquale in my last post was too narrow. Having completed Chapter One of The Black Box Society (TBBS), Pasquale does not take the naive view that all organizational secrecy should be abolished, as I might have once. Rather, his is a more nuanced perspective.

First, Pasquale distinguishes between three “critical strategies for keeping black boxes closed”, or opacity, “[Pasquale’s] blanket term for remediable incomprehensibility”:

  • “Real secrecy establishes a barrier between hidden content and unauthorized access to it.”
  • “Legal secrecy obliges those privy to certain information to keep it secret.”
  • “Obfuscation involves deliberate attempts at concealment when secrecy has been compromised.”

Cutting to the chase by looking at the Pasquale and Bracha “Federal Search Commission” (2008) paper that a number of people have recommended to me, it appears (in my limited reading so far) that Pasquale’s position is not that opacity in general is a problem (there are, of course, important uses of opacity that serve the public interest, such as confidentiality). Rather, despite these legitimate uses of opacity, there is also a need for public oversight, perhaps through federal regulation: the Federal Government can serve the public interest better than the imperfect market for search can on its own.

There is perhaps a tension between this 2008 position and what is expressed in Chapter 1 of TBBS in the section “The One-Way Mirror,” which gets, I dare say, a little conspiratorial about The Powers That Be. “We are increasingly ruled by what former political insider Jeff Connaughton called ‘The Blob,’ a shadowy network of actors who mobilize money and media for private gain, whether acting officially on behalf of business or of government.” Here Pasquale appears to espouse a strong theory of regulatory capture from which, were we to insist on consistency, a Federal Search Commission would presumably not be exempt. Hence, perhaps, the role of TBBS in stirring popular sentiment to put political pressure on the elites of The Blob.

Though it is a digression, I will note, since it is a pet peeve of mine, Pasquale’s objection to mathematized governance:

“Technocrats and managers cloak contestable value judgments in the garb of ‘science’: thus the insatiable demand for mathematical models that reframe the subtle and subjective conclusions (such as the worth of a worker, service, article, or product) as the inevitable dictate of salient, measurable data. Big data driven decisions may lead to unprecedented profits. But once we use computation not merely to exercise power over things, but also over people, we need to develop a much more robust ethical framework than ‘the Blob’ is now willing to entertain.”

That this sentiment (that scientists should not be making political decisions) has been articulated since at least as early as Hannah Arendt’s 1958 The Human Condition is an indication that there is nothing particular to Big Data about this anxiety. And indeed, if we think about ‘computation’ as broadly as mathematized, algorithmic thought, then its use for control over people, not just things, has an even longer history. Lukács’s 1923 “Reification and the Consciousness of the Proletariat” is a profound critique of Tayloristic scientific factory management that is getting close to a hundred years old.

Perhaps a robust ethics of quantification has been in the works for some time as well.

Moving past this, by the end of Chapter 1 of TBBS Pasquale gives us the outline of the book and the true crux of his critique, which is the problem of complexity. Whether or not regulators are successful in opening the black boxes of Silicon Valley or Wall Street (or the branches of government that are complicit with Silicon Valley and Wall Street), their efforts will be in vain if what they get back from the organizations they are trying to regulate is too complex for them to understand.

Following the thrust of Pasquale’s argument, we can see that for him, complexity is the result of obfuscation. It is therefore a source of opacity, which, as we have noted, he has defined as “remediable incomprehensibility”. Pasquale promises to, by the end of the book, give us a game plan for creating, legally, the Intelligible Society. “Transactions that are too complex to explain to outsiders may well be too complex to be allowed to exist.”

This gets us back to the question we started with, which is whether this complexity and incomprehensibility is avoidable. Suppose we were to legislate against institutional complexity: what would that cost us?

Mathematical modeling gives us the tools we need to analyze these kinds of questions. Information theory, the theory of computation, and complexity theory are all foundational to the technology of telecommunications and data science. People with expertise in understanding complexity, and the limits of our ability to control it, are precisely the people who make the ubiquitous algorithms on which society depends today. But this kind of theory rarely makes it into “critical” literature such as TBBS.
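
To gesture at what that theory offers here: Kolmogorov complexity (the length of the shortest description of an object) is uncomputable, but compressed size gives a crude, practical upper bound, which is one way to begin operationalizing “too complex to explain to outsiders.” The example and framing are mine, not Pasquale’s:

```python
import random
import zlib

def description_complexity(text: str) -> int:
    """A crude upper bound on the information needed to specify `text`:
    the size in bytes of its compressed description."""
    return len(zlib.compress(text.encode("utf-8"), 9))

simple_txn = "A pays B $100 on date D, repeated monthly for 12 months."

random.seed(0)  # a 'transaction' made of irreducible detail
complex_txn = "".join(random.choice("0123456789abcdef") for _ in range(2000))

print(description_complexity(simple_txn))   # small: admits a short explanation
print(description_complexity(complex_txn))  # large: resists summary
```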

I’m drawn to the example of The Social Media Collective’s Critical Algorithm Studies Reading List, which lists Pasquale’s TBBS among many other works, because it opens with precisely the disciplinary gatekeeping that creates what I fear is the blind spot I’m pointing to:

This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

As a result, our list does not contain much writing by computer scientists, nor does it cover potentially relevant work on topics such as quantification, rationalization, automation, software more generally, or big data, although these interests are well represented in the reference sections of the essays themselves.

This area is growing in size and popularity so quickly that many contributions are popping up without reference to work from disciplinary neighbors. One goal for this list is to help nascent scholars of algorithms to identify broader conversations across disciplines and to avoid reinventing the wheel or falling into analytic traps that other scholars have already identified.

This reading list is framed as a tool for scholars, which it no doubt is. But if contributors to this field of scholarship aspire, as Pasquale does, for “critical algorithms studies” to have real policy ramifications, then this disciplinary wall must fall (as I’ve argued elsewhere).

organizational secrecy and personal privacy as false dichotomy cf @FrankPasquale

I’ve turned from page 2 to page 3 of The Black Box Society (I can be a slow reader). Pasquale sets up the dichotomy on which the drama of the book hinges like so:

But while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods”, and gag rules, our own lives are increasingly open books. Everything we do online is recorded; the only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

That incongruity is the focus of this book.

This is a rhetorically powerful paragraph, and it captures a lot of the trepidation people feel about the power of large organizations relative to themselves.

I have been inclined to agree with this perspective for a lot of my life. I used to be the kind of person who thought Everything Should Be Open. Since then, I’ve developed what I think is a more nuanced view of transparency: some secrecy is necessary. It can be especially necessary for powerful organizations and people.

Why?

Well, it depends on the physical properties of information. (Here is an example of how a proper understanding of the mechanics of information can support the transcendent project as opposed to a merely critical project).

Any time you interact with something or somebody else in a meaningful way, you affect each other’s state in probabilistic space. That means there has been some kind of flow of information. If an organization interacts with a lot of people, it is going to absorb information about a lot of people. Recording this information as ‘data’ is something that has been done for a long time, because that is what allows organizations to do intelligent things vis-à-vis the people they interact with. So businesses, financial institutions, and governments recording information about people is nothing new.
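
Here is a toy version of that claim, with an invented scenario: model the interaction as a noisy channel and check that the organization’s record of it reduces, on average, its uncertainty about the person’s state. If it does, information has flowed:

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A person's state, unknown to the organization they interact with.
p_state = {"sick": 0.1, "well": 0.9}
# The interaction as a channel: probability of buying cold medicine,
# conditional on the person's state.
p_buy_given = {"sick": 0.80, "well": 0.05}

p_buy = sum(p_state[s] * p_buy_given[s] for s in p_state)

def posterior(bought):
    """Bayes update on the person's state given the purchase record."""
    lik = p_buy_given if bought else {s: 1 - p for s, p in p_buy_given.items()}
    evidence = p_buy if bought else 1 - p_buy
    return [p_state[s] * lik[s] / evidence for s in p_state]

h_prior = entropy(p_state.values())
h_after = (p_buy * entropy(posterior(True))
           + (1 - p_buy) * entropy(posterior(False)))

# ~0.21 bits of information about the person flowed through the
# interaction, whether or not anyone intended it to.
print(h_prior - h_after)
```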

Pasquale suggests that this recording is a threat to our privacy, and that the secrecy of the organizations that do the recording gives them power over us. But this is surely a false dichotomy. Why? Because if an organization records information about a lot of people, and then doesn’t maintain some kind of secrecy, then that information is no longer private! To, like, everybody else. In other words, maintaining secrecy is one way of ensuring confidentiality, which is surely an important part of privacy.

I wonder what happens if we continue to read The Black Box Society with this link between secrecy, confidentiality, and privacy in mind.