Digifesto

Tag: end of narrative

computational institutions as non-narrative collective action

Nils Gilman recently pointed to a book chapter that confirms the need for “official futures” in capitalist institutions.

Nils indulged me in a brief exchange that helped me get a better grasp of a bothersome puzzle.

There is a certain class of intellectuals that insists on the primacy of narratives as a mode of human experience. These tend to be, not too surprisingly, writers and other kinds of storytellers.

There is a different class of intellectuals that insists on the primacy of statistics. Statistics does not make it easy to tell stories because it is largely about the complexity of hypotheses and our lack of confidence in them.

The narrative/statistics divide could be seen as a divide between academic disciplines. It has often been taken, I believe wrongly, to be the crux of the “technology ethics” debate.

I questioned Nils as to whether his generalization stood up to statistically driven allocation of resources, i.e., decisions based explicitly on probabilistic judgments. He argued that, in the end, management and collective action require consensus around narrative.

In other words, what keeps narratives at the center of human activity is that (a) humans are in the loop, and (b) humans are collectively in the loop.

The idea that communication is necessary for collective action is one I used to put great stock in when studying Habermas. For Habermas, consensus, and especially linguistic consensus, is how humanity moves together. He contrasted this mode of knowledge, aimed at consensus and collective action, with technical knowledge, which is aimed at efficiency. Habermas envisioned a society ruled by communicative rationality and deliberative democracy; following this line of reasoning, that communicative rationality would need to be a narrative rationality. Even if this rationality is not universal, it might, in Habermas’s later conception of governance, be shared by a responsible elite: lawyers and a judiciary, for example.

The puzzle that recurs again and again in my work is the challenge of communicating how technology has become an alternative form of collective action. The claim made by some that technologists are a social “other” makes more sense if one sees them (us) as organizing around non-narrative principles of collective behavior.

It is, I believe, beyond serious dispute that well-constructed, statistically based collective decision-making processes outperform many alternatives. In the field of future prediction, Philip Tetlock’s work on superforecasting teams, and his prior work on expert political judgment, has long stood as an empirical challenge to the supposed primacy of narrative-based forecasting. That challenge has gone largely unanswered; the debate seems rather one-sided. One reason for this may be that the rationale for the effectiveness of these techniques rests ultimately in the science of statistics.

It is now common to insist that Artificial Intelligence should be seen as a sociotechnical system and not as a technological artifact. I wholeheartedly agree with this position. However, it is sometimes implied that to understand AI as a sociotechnical system, one must understand it in narrative terms. This is an error; it would imply that the collective action that builds an AI system, and the technology itself, are held together by narrative communication.

But if the whole purpose of building an AI system is to act collectively in a way that is more effective because of its facility with the nuances of probability, then the narrative lens will miss the point. The promise and threat of AI is that it delivers a different, often more effective form of collective action or institution. I’ve suggested that “computational institution” might be the best way to refer to such a thing.

What happens if we lose the prior for sparse representations?

Noting this nice paper by Giannone et al., “Economic predictions with big data: The illusion of sparsity.” It concludes:

Summing up, strong prior beliefs favouring low-dimensional models appear to be necessary to support sparse representations. In most cases, the idea that the data are informative enough to identify sparse predictive models might be an illusion.

This is refreshing honesty.
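For those curious about the mechanics behind that conclusion: as I read the paper, it puts a spike-and-slab prior on each regression coefficient, with a hyperprior on the probability of inclusion (my notation, simplified):

\[
\beta_j = \gamma_j \, b_j, \qquad \gamma_j \sim \mathrm{Bernoulli}(q), \qquad b_j \sim \mathcal{N}(0, \sigma_b^2), \qquad q \sim \mathrm{Beta}(a, b).
\]

The posterior over \(q\) then measures how much sparsity the data actually support. The finding, roughly, is that this posterior stays diffuse across most of their datasets; it concentrates near zero (a sparse model) only when the prior on \(q\) already favors that region. Hence the “illusion.”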

In my experience, most disciplinary social sciences have a strong prior bias towards pithy explanatory theses. In a normal social science paper, what you want is a single research question and a single hypothesis. This thesis expresses the narrative of the paper; it’s what makes the paper compelling.

In mathematical model fitting, the term for such a simple hypothesis is a sparse predictive model. These models have relatively few independent variables predicting the dependent variable. In machine learning, this sparsity is often accomplished by a regularization step. While generally well-motivated, regularization for sparsity can be done for reasons that are more aesthetic, or that reflect a stronger prior than is warranted.
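To make this concrete, here is a minimal sketch (using scikit-learn; the simulated data and parameter values are illustrative, not from the paper) of how an L1 penalty manufactures sparsity that an L2 penalty does not:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)

# Simulate a "dense" world: 50 predictors, each with a small true effect.
n, p = 200, 50
X = rng.normal(size=(n, p))
true_beta = rng.normal(scale=0.2, size=p)  # no true sparsity here
y = X @ true_beta + rng.normal(size=n)

# An L1 penalty (lasso) zeroes out coefficients; an L2 penalty (ridge)
# merely shrinks them.
lasso = Lasso(alpha=0.2).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

print("nonzero lasso coefficients:", int(np.sum(lasso.coef_ != 0)))
print("nonzero ridge coefficients:", int(np.sum(ridge.coef_ != 0)))
# The lasso reports a sparse model even though the data-generating process
# is dense: the sparsity comes from the penalty (i.e., the prior), not
# from the data.
```

The lasso here will dutifully hand back a low-dimensional story about which predictors “matter,” which is exactly the kind of result a reader primed for a pithy thesis will find compelling.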

A consequence of this preference for sparsity, in my opinion, is the prevalence of literature favoring power law distributions over log-normal explanations. (See this note on disorganized heavy tail distributions.) A dense model in a log-linear regression will predict a heavy-tailed dependent variable without great error. But it will be unsatisfying from the perspective of scientific explanation.
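A quick way to see this: if the log of the outcome is a dense sum of many small effects, then by the central limit theorem the outcome itself is approximately log-normal, i.e., heavy-tailed, with no single “explanatory” variable doing the work. A minimal simulation (all names and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# A dense log-linear model: log(y) is the sum of many small effects.
n, p = 50_000, 100
X = rng.normal(size=(n, p))
beta = rng.normal(scale=0.15, size=p)      # every predictor matters a little
log_y = X @ beta + rng.normal(scale=0.5, size=n)
y = np.exp(log_y)                          # y is approximately log-normal

# The outcome is heavy-tailed: the mean sits well above the median,
# and the top 1% of observations holds a large share of the total.
print("median:", np.median(y))
print("mean:  ", np.mean(y))
print("top 1% share:", np.sort(y)[-n // 100:].sum() / y.sum())
```

The heavy tail falls out of the aggregation of many weak causes, not from any sparse generative mechanism one could narrate.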

What seems to be an open question in the social sciences today is whether the culture of social science will change as a result of the robust statistical analysis of new data sets. As I’ve argued elsewhere (Benthall, 2016), if the culture does change, it will mean that narrative explanation will be less highly valued.

References

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Giannone, Domenico, Michele Lenza, and Giorgio E. Primiceri. “Economic predictions with big data: The illusion of sparsity.” (2017).