Digifesto

data science and the university

This is by now a familiar line of thought, but it has just now struck me with a clarity that I wanted to jot down.

  1. Code is law, so the full weight of human inquiry should be brought to bear on software system design.
  2. (1) has been understood by “hackers” for years but has only recently been accepted by academics.
  3. (2) is due to disciplinary restrictions within the academy.
  4. (3) is due to the incentive structure of the academy.
  5. Since there are incentive structures for software development that are not available for subjects whose primary research project is writing, the institutional conditions that are best able to support software work and academic writing work are different.
  6. Software is a more precise and efficacious way of communicating ideas than writing because its interpretation is guaranteed by programming language semantics.
  7. Because of (6), there is selective pressure toward making software the lingua franca of scholarly work.
  8. (7) is inducing a cross-disciplinary paradigm shift in methods.
  9. (8) may induce a paradigm shift in theoretical content, or it may result in science whose contents are tailored to the efficient execution of adaptive systems. (This is not to say that such systems are necessarily atheoretic, just that they are subject to different epistemic considerations.)
  10. Institutions are slow to change. That’s what makes them institutions.
  11. By (5), (7), and (9), the role of universities as the center of research is being threatened existentially.
  12. But by (1), the myriad intellectual threads currently housed in universities are necessary for software system design, or are at least potentially important.
  13. With (11) and (12), a priority is figuring out how to manage a transition to software-based scholarship without information loss.

a brief comment on feminist epistemology

One funny thing about having a blog is that I can tell when people are interested in particular posts through the site analytics. To my surprise, this post about Donna Haraway has been getting an increasing number of hits each month since I posted it. That is an indication that it has struck a chord, since steady exogenous growth like that is actually quite rare.

It is just possible that this means that people interested in feminist epistemology have been reading my blog lately. They probably have correctly guessed that I have not been the biggest fan of feminist epistemology because of concerns about bias.

But I’d like to take the opportunity to say that my friend Rachel McKinney has been recommending that I read Elizabeth Anderson’s work if I want to really get to know this body of theory. Since Rachel is an actual philosopher and I am an amateur who blogs about it on weekends, I respect her opinion on this a great deal.

So today I started reading through Anderson’s Stanford Encyclopedia of Philosophy article on Feminist Epistemology, and I have to say I think it’s very good. I like her treatment of the situated knower. It’s also nice to learn that there are alternative feminist epistemologies to certain standpoint theories that I find troublesome. In particular, it turns out that those standpoint theories are now considered by feminist philosophers to be from a brief period in the ’80s that the field has already moved past! Now subaltern standpoints are considered privileged in terms of discovery more than privileged in terms of justification.

This position is certainly easier to reconcile with computational methods. For example, it’s in a sense just mathematically correct if you think about it in terms of information gain from a sample. This principle appears to have been rediscovered recently by the equity-in-data-science people in discussions of potential classifier error.
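Here is a toy sketch of the intuition (entirely my own illustration, with made-up probabilities; this is not Anderson’s argument): in Shannon’s terms, an observation from a rarely sampled standpoint simply carries more bits, which is one way to cash out “privileged in terms of discovery” without any claim about justification.

```python
import math

def self_information(p: float) -> float:
    """Bits of information carried by observing an event of probability p."""
    return -math.log2(p)

# A report from a heavily sampled, dominant standpoint vs. one from a
# rarely sampled, subaltern standpoint: the rarer observation is worth
# far more bits to the inquirer.
print(self_information(0.90))  # ~0.15 bits
print(self_information(0.01))  # ~6.64 bits
```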

I’ve got some qualms about the articulation of this learning principle in the absence of a particular inquiry or decision problem, because I think there’s still a subtle shift in the argumentation from logos to ethos embedded in there (I’ve been seeing things through the lens of Aristotelian rhetoric lately, and it’s been surprisingly illuminating). I’m on the lookout for a concrete application of this in a technical domain, as opposed to an articulation of a political affinity or anxiety in the language of algorithms. I’d be grateful for links in the comments.

Edit:

Wait, maybe I already built one. I am not sure if that really counts.

scale and polemic

I love a good polemic, but lately I have been disappointed by polemics as a genre because they generally don’t ground themselves in data at a suitable scale.

When people try to write about a social problem, they are likely to use potent examples as a rhetorical device. Their particular ideological framing of a situation will be illustrated by compelling stories that are easy to get emotional about. This is often considered to be the hallmark of A Good Presentation, or Good Writing. Somebody will say about some group X, “Group X is known for doing bad things. Here’s an example.”

There are some problems with this approach. If there are a lot of people in Group X, then there can be a lot of variance within that group, so providing just a couple of examples really doesn’t tell you about the group as a whole. In fact, this is a great way to get a biased view of Group X.

There are consequences to this kind of rhetoric. Once there’s a narrative with a compelling example illustrating it, that way of framing things spreads as an ideology. Then, because of the well-known problem of confirmation bias, people who have been exposed to that ideology will start to see confirming examples everywhere.

Add to that stereotype threat and suddenly you’ve got an explanation for why so many political issues are polarized and terrible.

Collecting more data and providing statistical summaries of populations is a really useful remedy for this. While often less motivating than a really well-told story of one person’s experience, it has the benefit of being more accurate, in the sense of showing the diversity of perspectives that exist on something.

Unfortunately, we like to hear stories so much that we will often only tell people about statistics on large populations if they show a clear trend one way or another. People that write polemics want to be able to say, “Group X has 20% more than Group Y in some way,” and talk about why. It’s not considered an interesting result if it turns out the data is just noise, that Group X and Group Y aren’t really that different.

We also aren’t good at hearing stories about how much variance there is in data. Maybe on average Group X has 20% more than Group Y in some way. But what if these distributions are bimodal? Or if one is more varied than the other? What does that mean, narratively?
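A minimal numpy sketch of the problem (all numbers invented): two groups whose means differ by the headline “20%”, but whose shapes tell entirely different stories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Group X: a tight, unimodal distribution centered at 120.
group_x = rng.normal(loc=120, scale=5, size=10_000)

# Group Y: a bimodal distribution whose mean is 100, so the headline reads
# "Group X has 20% more than Group Y", yet almost no one in Y is near 100.
group_y = np.concatenate([
    rng.normal(loc=60, scale=5, size=5_000),
    rng.normal(loc=140, scale=5, size=5_000),
])

print(group_x.mean(), group_y.mean())  # ~120 vs. ~100
print(group_x.std(), group_y.std())    # ~5 vs. ~40
```

The summary statistic is true, but it radically underdetermines the narrative.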

It can be hard to construct narratives that are not about what can be easily experienced in one moment but rather about the experiences of lots of people over lots of moments. The narrative form is very constraining because it doesn’t capture the reality of phenomena of great scale and complexity. Things of great scale and complexity can be beautiful but hard to talk about. Maybe talking about them is a waste of time, because talking is not a good way to understand them.

formalizing the cultural observer

I’m taking a brief break from Horkheimer because he is so depressing and because I believe the second half of Eclipse of Reason may include new ideas that will take energy to internalize.

In the meantime, I’ve rediscovered Soren Brier’s Cybersemiotics: Why Information Is Not Enough! (2008), which has remained faithfully on my desk for months.

Brier is concerned with the possibility of meaning generally, and attempts to synthesize the positions of Peirce (recall: philosophically disliked by Horkheimer as a pragmatist), Wittgenstein (who was first an advocate of the formalization of reason and language in his Tractatus, then turned dramatically against it in his Philosophical Investigations), second-order cyberneticists like Varela and Maturana, and the social theorist Niklas Luhmann.

Brier does not make any concessions to simplicity. Rather, his approach is to begin with the simplest theories of communication (Shannon) and show where each fails to account for a more complex form of interaction between more completely defined organisms. In this way, he reveals how each simpler form of communication is the core around which a more elaborate form of meaning-making is formed. He finally arrives at a picture of meaning-making that encompasses all of reality, including that which can be scientifically understood, but one that is necessarily incomplete and an open system. Meaning is all-pervading but never all-encompassing.

One element that makes meaning more complex than simple Shannon-esque communication is the role of the observer, who is maintained semiotically through an accomplishment of self-reference through time. This observer is a product of her own contingency. The language she uses is the result of nature, AND history, AND her own lived life. There is a specificity to her words and meanings that radiates outward as she communicates, meanings that interact in cybernetic exchange with the specific meanings of other speakers/observers. Language evolves in an ecology of meaning that can only poorly be reflected back upon the speaker.

What then can be said of the cultural observer, who carefully gathers meanings, distills them, and expresses new ones conclusively? She is a cybernetic captain, steering the world in one way or another, but only the world she perceives and conceives. Perhaps this is Haraway’s cyborg, existing in time and space through a self-referential loop, reinforced by stories told again and again: “I am this, I am this, I am this.” It is by clinging to this identity that the cyborg achieves the partiality glorified by Haraway. It is also this identity that positions her as an antagonist as she must daily fight the forces of entropy that would dissolve her personality.

Built on cybernetic foundations, does anything in principle prevent the formalization and implementation of Brier’s semiotic logic? What would a cultural observer look like that stands betwixt all cultures, looming like a spider on the webs of communication that wrap the earth at inconceivable scale? Without the constraints of partiality that bind a single human observer belonging to one culture, what could such a robot scientist see? What meanings would they make for themselves or intend?

This is not simply an issue of the interpretability of the algorithms used by such a machine. More deeply, it is the problem that these machines do not speak for themselves. They have no self-reference or identity, and so do not participate in meaning-making except instrumentally as infrastructure. This cultural observer that is in the position to observe culture in the making without the limits of human partiality for now only serves to amplify signal or dampen noise. The design is incomplete.

Horkheimer and “The Revolt of Nature”

The third chapter of Horkheimer’s Eclipse of Reason (which by the way is apparently available here as a PDF) is titled “The Revolt of Nature”.

It opens with a reiteration of the Frankfurt School story: as reason gets formalized, society gets rationalized. “Rationalized” here is in the sense that goes back at least to Lukacs’s “Reification and the Consciousness of the Proletariat” (1923): the process of being rendered predictable, and being treated as such. It is this formalized reason, a technique of prediction and predictability that is unable to furnish an objective ethics, that is the main subject of Horkheimer’s critique.

In “The Revolt of Nature”, Horkheimer claims that as more and more of society is rationalized, the more humanity needs to conform to the rationalizing system. This happens through the labor market. Predictable technology and working conditions such as the factory make workers more interchangeable in their jobs. Thus they are more “free” in a formal sense, but at the same time have less job security and so have to conform to economic forces that make them into means and not ends in themselves.

Recall that this is written in 1947, and Lukacs wrote in 1923. In recent years we’ve read a lot about the Sharing Economy and how it leads to less job security. This is an argument that is almost a century old.

As society and humanity in it conform more and more to rational, pragmatic demands on them, the element of man that is irrational, that is nature, is not eliminated. Horkheimer is implicitly Freudian. You don’t eradicate the natural impulses. You repress them. And what is repressed must revolt.

This view runs counter to some of the ideology of the American academic system that became more popular in the late 20th century. Many ideologues reject the idea of human nature at all, arguing that all human behavior can be attributed to socialization. This view is favored especially by certain extreme progressives, who have a post-Christian ideal of eradicating sin through media criticism and scientific intervention. Steven Pinker’s The Blank Slate is an interesting elaboration and rebuttal of this view. Pinker is hated by a lot of academics because (a) he writes very popular books and (b) he makes a persuasive case against the total mutability of human nature, which is something of a sacred cow to a lot of social scientists for some reason.

I’d argue that Horkheimer would agree with Pinker that there is such a thing as human nature, since he explicitly argues that repressed human nature will revolt against dominating rationalizing technology. But because rationalization is so powerful, the revolt of nature becomes part of the overall system. It helps sustain it. Horkheimer mentions “engineered” race riots. Today we might point to the provocation of bestial, villainous hate speech and its relationship to the gossip press. Or we might point to ISIS and the justification it provides for the military-industrial complex.

I don’t want to imply that I endorse this framing 100%. It is just the continuation of Frankfurt School ideas to the present day, and how it matches up against reality is an empirical question. But it’s worth pointing out where so many of these important tropes originated.

a new kind of scientism

Thinking it over, there are a number of problems with my last post. One was the claim that the scientism addressed by Horkheimer in 1947 is the same as the scientism of today.

Scientism is a pejorative term for the belief that science defines reality and/or is a solution to all problems. It’s not in common use now, but maybe it should be among the critical thinkers of today.

Frankfurt School thinkers like Horkheimer and Habermas used “scientism” to criticize the positivists, the 20th century philosophical school that sought to reduce all science and epistemology to formal empirical methods, and to reduce all phenomena, including social phenomena, to empirical science modeled on physics.

Lots of people find this idea offensive for one reason or another. I’d argue that it’s a lot like the idea that algorithms can capture all of social reality or perform the work of scientists. In some sense, “data science” is a contemporary positivism, and the use of “algorithms” to mediate social reality depends on a positivist epistemology.

I don’t know any computer scientists that believe in the omnipotence of algorithms. I did get an invitation to this event at UC Berkeley the other day, though:

This Saturday, at [redacted], we will celebrate the first 8 years of the [redacted].

Current students, recent grads from Berkeley and Stanford, and a group of entrepreneurs from Taiwan will get together with members of the Social Data Lab. Speakers include [redacted], former Palantir financial products lead and course assistant of the [redacted]. He will reflect on how data has been driving transforming innovation. There will be break-out sessions on sign flips, on predictions for 2020, and on why big data is the new religion, and what data scientists need to learn to become the new high priests. [emphasis mine]

I suppose you could call that scientistic rhetoric, though honestly it’s so preposterous I don’t know what to think.

Though I would recommend the term “scientism” to the critical set, I’m ambivalent about whether it’s appropriate to call the contemporary emphasis on algorithms scientistic, for the following reason: it might be that ‘data science’ processes are better than the procedures developed for the advancement of physics in the mid-20th century, because they stand on sixty years of foundational mathematical work that took the modeling of cognition as an important aim. Recall that the AI research program didn’t start until Chomsky took down Skinner. Horkheimer quotes Dewey commenting that until naturalist researchers were able to use their methods to understand cognition, they wouldn’t be able to develop (this is my paraphrase) a totalizing system. But the foundational mathematics of information theory, Bayesian statistics, etc. are robust enough, or could be robust enough, to be universally intersubjectively valid. That would mean data science stands on transcendental, not socially contingent, grounds.

That would open up a whole host of problems that take us even further back than Horkheimer to early modern philosophers like Kant. I don’t want to go there right now. There’s still plenty to work with in Horkheimer, and in “Conflicting panaceas” he points to one of the critical problems, which is how to reconcile lived reality in its contingency with the formal requirements of positivist or, in the contemporary data scientific case, algorithmic epistemology.

“Conflicting panaceas”; decapitation and dogmatism in cultural studies counterpublics

I’m still reading through Horkheimer’s Eclipse of Reason. It is dense writing and slow going. I’m in the middle of the second chapter, “Conflicting Panaceas”.

This chapter recognizes and then critiques a variety of intellectual stances of his contemporaries. Whereas in the first chapter Horkheimer takes aim at pragmatism, in this he concerns himself with neo-Thomism and positivism.

Neo-Thomism? Yes, that’s right. Apparently in 1947 one of the major intellectual contenders was a school of thought based on adapting the metaphysics of Saint Thomas Aquinas to modern times. This school of thought was apparently notable enough that while Horkheimer is generally happy to call out the proponents of pragmatism and positivism by name and call them business-interest lapdogs, he chooses instead to address the neo-Thomists anonymously in a conciliatory footnote:

This important metaphysical school includes some of the most responsible historians and writers of our day. The critical remarks here bear exclusively on the trend by which independent philosophical thought is being superseded by dogmatism.

In a nutshell, Horkheimer’s criticism of neo-Thomism is that, since it tries and fails to repurpose old ontologies for the new world, it can’t fulfill its own ambitions as an intellectual system through rigor without losing the theological ambitions that motivate it: the identification of goodness, power, and eternal law. Since it can’t intellectually culminate, it becomes a “dogmatism” that can be coopted disingenuously by social forces.

This is, as I understand it, the essence of Horkheimer’s criticism of everything: That for any intellectual trend or project, unless the philosophical project is allowed to continue to completion within it, it will have its brains slurped out and become zombified by an instrumentalist capitalism that threatens to devolve into devastating world war. Hence, just as neo-Thomism becomes a dogmatism because it would refute itself if it allowed its logic to proceed to completion, so too does positivism become a dogmatism when it identifies the truth with disciplinarily enforced scientific methods. Since, as Horkheimer points out in 1947, these scientific methods are social processes, this dogmatic positivism is another zombie, prone to fads and politics not tracking truth.

I’ve been struggling over the past year or so with similar anxieties about what from my vantage point are prevailing intellectual trends of 2014. Perversely, in my experience the new intellectual identities that emerged to expose scientific procedures as social processes in the 20th century (STS) and establish rhetorics of resistance (cultural studies) have been similarly decapitated, recuperated, and dogmatic. [see 1 2 3].

Are these the hauntings of straw men? This is possible. Perhaps the intellectual currents I’ve witnessed are informal expressions, not serious intellectual work. But I think there is a deeper undercurrent which has turned up as I’ve worked on a paper resulting from this conversation about publics. It hinges on the interpretation of an influential article by Fraser in which she contests Habermas’s notion of the public sphere.

In my reading, Fraser more or less maintains the ideal of the public sphere as a place of legitimacy and reconciliation. For her it is notably inequitable, it is plural not singular, the boundaries of what is public and private are in constant negotiation, etc. But its function is roughly the same as it is for Habermas.

My growing suspicion is that this is not how Fraser is used by cultural studies today. This suspicion began when Fraser was introduced to me; upon reading her work, I did not find the objection implicit in the reference to her. It continued as I worked with the comments of a reviewer on a paper. It was recently confirmed while reading Chris Wisniewski’s “Digital Deliberation?” in Critical Review, vol. 25, no. 2, 2013. He writes well:

The cultural-studies scholars and critical theorists interested in diversifying participation through the Internet have made a turn away from this deliberative ideal. In an essay first published in 1990, the critical theorist Nancy Fraser (1999, 521) rejects the idealized model of bourgeois public sphere as defined by Habermas on the grounds that it is exclusionary by design. Because the bourgeois public sphere brackets hierarchies of gender, race, ethnicity, class, etc., Fraser argues, it benefits the interests of dominant groups by default through its elision of socially significant inequalities. Lacking the ability to participate in the dominant discourse, disadvantaged groups establish alternative “subaltern counterpublics”.

Since the ideal speech situation does not acknowledge the socially significant inequalities that generate these counterpublics, Fraser argues for a different goal: a model of participatory democracy in which intercultural communications across socially stratified groups occur in forums that do not elide differences but instead allow diverse multiple publics the opportunity to determine the concerns or good of the public as a whole through “discursive contestations.” Fraser approaches these subgroups as identity publics and argues that culture and political debate are essentially power struggles among self-interested subgroups. Fraser’s ideas are similar to those prevalent in cultural studies (see Wisniewski 2007 and 2010), a relatively young discipline in which her work has been influential.

Fraser’s theoretical model is inconsistent with studies of democratic voting behavior, which indicate that people tend to vote sociotropically, according to a perceived collective interest, and not in favor of their own perceived self-interest (e.g., Kinder and Kiewiet 1981). The argument that so-called “mass” culture excludes the interests of dominated groups in favor of the interests of the elites loses some of its valence if culture is not a site through which self-interested groups vie for their objective interests, but is rather a forum in which democratic citizens debate what constitutes, and the best way to achieve, the collective good. Diversification of discourse ceases to be an end in itself.

I think Wisniewski hits the nail on the head here, a nail I’d like to drive in farther. If culture is conceived of as consisting of the contests of self-interested identity groups, as this version of cultural studies conceives it, then cultural studies will necessarily see itself as one of many self-interested identities. Cultural studies becomes, by its own logic, a counterpublic that exists primarily to advance its own interests.

But just like neo-Thomism, this positioning decapitates cultural studies by preventing it from intellectually confronting its own limitations. No identity can survive rigorous intellectual interrogation, because all identities are based on contingency, finitude, trauma. Cultural studies adopts and repurposes historical rhetorics of liberation much like the neo-Thomists adopted and repurposed the historical metaphysics of Christianity. The obsolescence of these rhetorics, like the obsolescence of Thomistic metaphysics, is what makes them dangerous. A rhetoric that maintains its own subordination as a condition of its own identity can never truly liberate; it can only antagonize. Unable to intellectually realize its own purpose, it becomes purposeless and hence coopted and recuperated like other dogmatisms. In particular, it feeds into “the politicization of absolutely everything”, in the language of Ezra Klein’s spot-on analysis of GamerGate. Cultural studies is a powerful ideology because it turns culture into a field of perpetual rivalry with all the distracting drama of reality television. In so doing, it undermines deeper intellectual penetration into the structural conditions of society.

If cultural studies is the neo-Thomism of today, a dogmatist religious revival of the profound theology of the civil rights movement, perhaps it’s the theocratic invocation of ‘algorithms’ that is the new scientism. I would have more to say about it if it weren’t so similar to the old scientism.

The solution to Secular Stagnation is more gigantic stone monuments

Because I am very opinionated, I know what we should do about secular stagnation.

Secular stagnation is what economists are calling the problem of an economy that is growing incorrigibly slowly due to insufficient demand–low demand caused in part by high inequality. A consequence of this is that for the economy to maintain high levels of employment, real interest rates need to be negative. That is bad for people who have a lot of money and nothing to do with it. What, they must ask themselves in their sleepless nights, can we do with all this extra money, if not save it and earn interest?

History provides an answer for them. The great empires of the past that have had more money than they knew what to do with and lots of otherwise unemployed people built gigantic stone monuments. The Pyramids of Egypt. Angkor Wat in Cambodia. Easter Island. Machu Picchu.

The great wonders of the world were all, in retrospect, enormous wastes of time and money. They also created full employment and will be considered amazing forever.

Chances like this do not come often in history.

Know-how is not interpretable so algorithms are not interpretable

I happened upon Hildreth and Kimble’s “The duality of knowledge” (2002) earlier this morning while writing this and have found it thought-provoking through to lunch.

What’s interesting is that it is (a) twelve years old, (b) a rather straightforward analysis of information technology, expert systems, ‘knowledge management’, etc. in light of solid post-Enlightenment thinking about the nature of knowledge, and (c) an anticipation of the problems of ‘interpretability’ that were, at least a couple of months ago, an active topic of academic discussion. Or so I hear.

This is the paper’s abstract:

Knowledge Management (KM) is a field that has attracted much attention both in academic and practitioner circles. Most KM projects appear to be primarily concerned with knowledge that can be quantified and can be captured, codified and stored – an approach more deserving of the label Information Management.

Recently there has been recognition that some knowledge cannot be quantified and cannot be captured, codified or stored. However, the predominant approach to the management of this knowledge remains to try to convert it to a form that can be handled using the ‘traditional’ approach.

In this paper, we argue that this approach is flawed and some knowledge simply cannot be captured. A method is needed which recognises that knowledge resides in people: not in machines or documents. We will argue that KM is essentially about people and the earlier technology driven approaches, which failed to consider this, were bound to be limited in their success. One possible way forward is offered by Communities of Practice, which provide an environment for people to develop knowledge through interaction with others in an environment where knowledge is created, nurtured and sustained.

The authors point out that Knowledge Management (KM) is an extension of the earlier program of Artificial Intelligence and depends on a model of knowledge which maintains that knowledge can be explicitly represented and hence stored and transferred; they propose an alternative way of thinking about things based on the Communities of Practice framework.

A lot of their analysis is about the failures of “expert systems”, which is a term that has fallen out of use but means basically the same thing as the contemporary uncomputational scholarly use of ‘algorithm’. An expert system was a computer program designed to make decisions about things. Broadly speaking, a search engine is a kind of expert system. What’s changed are the particular techniques and algorithms that such systems employ, and their relationship with computing and sensing hardware.

Here’s what Hildreth and Kimble have to say about expert systems in 2002:

Viewing knowledge as a duality can help to explain the failure of some KM initiatives. When the harder aspects are abstracted in isolation the representation is incomplete: the softer aspects of knowledge must also be taken into account. Hargadon (1998) gives the example of a server holding past projects, but developers do not look there for solutions. As they put it, ‘the important knowledge is all in people’s heads’, that is the solutions on the server only represent the harder aspects of the knowledge. For a complete picture, the softer aspects are also necessary. Similarly, the expert systems of the 1980s can be seen as failing because they concentrated solely on the harder aspects of knowledge. Ignoring the softer aspects meant the picture was incomplete and the system could not be moved from the environment in which it was developed.

However, even knowledge that is ‘in people’s heads’ is not sufficient – the interactive aspect of Cook and Seely Brown’s (1999) ‘knowing’ must also be taken into account. This is one of the key aspects to the management of the softer side to knowledge.

In 2002, this kind of argument was seen as a valuable critique of artificial intelligence and the practices based on it as a paradigm. But already by 2002 this paradigm was falling away. Statistical computing, reinforcement learning, decision tree bagging, etc. were already in use at this time. These methods are “softer” in that they don’t require the “hard” concrete representations of the earlier artificial intelligence program, which I believe by that time was already referred to as “Good Old Fashioned AI”, or GOFAI, by a number of practitioners.

(I should note–that’s a term I learned while studying AI as an undergraduate in 2005.)
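To make the hard/soft contrast concrete, here is a minimal sketch (the loan scenario, the numbers, and the thresholds are all hypothetical, and this is my gloss rather than Hildreth and Kimble’s): GOFAI writes the knowledge down as an explicit rule, while statistical learning induces the decision from examples without the rule ever being authored.

```python
from sklearn.tree import DecisionTreeClassifier

# "Hard" knowledge, GOFAI-style: a knowledge engineer authors the rule.
# The representation is the knowledge, inspectable by construction.
def expert_system_approve(income: float, debt: float) -> bool:
    return income > 50_000 and debt / income < 0.4

# "Soft" knowledge, statistical-learning-style: the same kind of decision
# induced from past cases, with no rule explicitly written anywhere.
X = [[60_000, 10_000], [30_000, 20_000], [80_000, 50_000], [45_000, 5_000]]
y = [1, 0, 0, 1]  # past outcomes (hypothetical)
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(model.predict([[55_000, 15_000]]))  # the decision emerges from the data
```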

So throughout the 90’s and the 00’s, if not earlier, ‘AI’ transformed into ‘machine learning’ and became the implementation of ‘soft’ forms of knowledge. These systems are built to learn to perform a task optimally, adapting flexibly to feedback from past performance. They are in fact the cybernetic systems imagined by Norbert Wiener.

Perplexing, then, is the contemporary problem that the models created by these machine learning algorithms are opaque to their creators. These models were created using techniques designed precisely to solve the problems that systems based on explicit, communicable knowledge had failed to solve.

If you accept the thesis that contemporary ‘algorithms’-driven systems are well-designed implementations of ‘soft’ knowledge systems, then you get some interesting conclusions.

First, forget about interpreting the learned models of these systems and testing them for things like social discrimination, which is apparently in vogue. The right place to focus attention is on the function being optimized. All these feedback-based systems–whether they are based on evolutionary algorithms, convergence on local maxima, reinforcement learning, or whatever–are designed to optimize some goal function. That goal function is the closest thing you will get to an explicit representation of the purpose of the algorithm. It may change over time, but it should be coded there explicitly.
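Here is a schematic sketch of what I mean (the “ad revenue” objective and every name in it are my own invention): the learned weight vector may be inscrutable, but the goal function it was chosen to maximize sits in the code as an explicit, readable object.

```python
import numpy as np

rng = np.random.default_rng(1)

def goal(weights, features, clicks):
    """The explicit objective: mean (toy) ad revenue per impression."""
    return float((features @ weights * clicks).mean())

features = rng.normal(size=(1_000, 20))
clicks = rng.integers(0, 2, size=1_000)

# A crude black-box optimizer (random search). Whatever weights come out,
# the function they were selected to maximize is written right here.
best = max((rng.normal(size=20) for _ in range(500)),
           key=lambda w: goal(w, features, clicks))
print(goal(best, features, clicks))
```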

Interestingly, this is exactly the sense of ‘purpose’ that Wiener proposed could be applied to physical systems in the landmark 1943 essay he published with Rosenblueth and Bigelow, “Behavior, Purpose and Teleology.” Sly devil.

EDIT: An excellent analysis of how fairness can be represented as an explicit goal function can be found in Dwork et al. 2011.

Second, because what the algorithm is designed to optimize is generally going to be something like ‘maximize ad revenue’ and not anything explicitly pernicious like ‘screw over the disadvantaged people’, this line of inquiry will raise some interesting questions about, for example, the relationship between capitalism and social justice. By “raise some interesting questions”, I mean “reveal some uncomfortable truths everyone is already aware of”. Once it becomes clear that the whole discussion of “algorithms” and their inscrutability is just a way of talking about societal problems and entrenched political interests without talking about them directly, it will probably be tabled due to its political infeasibility.

That is (and I guess this is the third point) unless somebody can figure out how to encode the social justice goals of the activists/advocates explicitly into a goal function that could be implemented by one of these soft-touch expert systems. That would be rad. Whether anybody would be interested in using or investing in such a system is an important open question. Not a wide-open question–the answer is probably “not really”–but just open enough to let some air onto the embers of my idealism.
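For what it’s worth, here is one naive sketch of what that could look like, far cruder than the Dwork et al. formulation and with every name hypothetical: a disparity penalty folded directly into the goal function, so that the social justice aim is exactly as explicit as the revenue aim.

```python
import numpy as np

def revenue(weights, features, value):
    """The business objective, as before (toy)."""
    return float((features @ weights * value).mean())

def group_disparity(weights, features, group):
    """Gap in mean score between group-1 and group-0 members.

    group: a 0/1 numpy array marking group membership per row."""
    scores = features @ weights
    return abs(float(scores[group == 1].mean() - scores[group == 0].mean()))

def goal(weights, features, value, group, lam=1.0):
    # The advocates' aim written into the objective itself: trade revenue
    # against between-group disparity, with lam setting the exchange rate.
    return revenue(weights, features, value) - lam * group_disparity(weights, features, group)
```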

Horkheimer and Wiener

[I began writing this weeks ago and never finished it. I’m posting it here in its unfinished form just because.]

I think I may be condemning myself to irrelevance by reading so many books. But as I make an effort to read up on the foundational literature of today’s major intellectual traditions, I can’t help but be impressed by the richness of their insight. Something has been lost.

I’m currently reading Norbert Wiener’s The Human Use of Human Beings (1950) and Max Horkheimer’s Eclipse of Reason (1947). The former I am reading for the Berkeley School of Information Classics reading group. Norbert Wiener was one of the foundational mathematicians of 20th century information technology, a colleague of Claude Shannon. Out of his own sense of social responsibility, he articulated his predictions for the consequences of the technology he developed in Human Use. This work was the foundation of cybernetics, an influential school of thought in the 20th century. Terrell Bynum, in his Stanford Encyclopedia of Philosophy article on “Computer and Information Ethics”, attributes to Wiener’s cybernetics the foundation of all future computer ethics. (I think the threads go back earlier, at least to Heidegger’s Question Concerning Technology.) It is hard to find a straight answer to the question of what happened to cybernetics. By some reports, the artificial intelligence community cut off cybernetics’ NSF funding in the 60’s.

Horkheimer is one of the major thinkers of the very influential Frankfurt School, the postwar social theorists at the core of intellectual critical theory. Of the Frankfurt School, perhaps the most famous in the United States is Adorno. Adorno is also the most caustic and depressed, and unfortunately much of popular critical theory now takes on his character. Horkheimer is more level-headed. Eclipse of Reason is an argument about the ways that philosophical empiricism and pragmatism became complicit in fascism. Here is an interesting quotation.

It is very interesting to read them side by side. Published only a few years apart, Wiener and Horkheimer are giants of two very different intellectual traditions. There’s little reason to expect they ever communicated (a more thorough historian would know more). But each makes sweeping claims about society, language, and technology and contextualizes them in broader intellectual awareness of religion, history and science.

Horkheimer writes about how the collapse of the Enlightenment project of objective reason has opened the way for a society ruled by subjective reason, which he characterizes as the reason of formal mathematics and scientific thinking that is neutral to its content. It is instrumental thinking in its purest, most rigorous form. His descriptions of it sound like gestures toward what we today call “data science”–a set of mechanical techniques that we can use to analyze and classify anything, perfecting our understanding of technical probabilities toward whatever ends one likes.

I find this a more powerful critique of data science than recent paranoia about “algorithms”. It is frustrating to read something over sixty years old that covers the same ground as we are going over again today but with more composure. Mathematized reasoning about the world is an early 20th century phenomenon and automated computation a mid-20th century phenomenon. The disparities in power that result from the deployment of these tools were thoroughly discussed at the time.

But today, at least in my own intellectual climate, it’s common to hear a mention of “logic” met with the rebuttal “whose logic?”. Multiculturalism and standpoint epistemology, profoundly important for sensitizing researchers to bias, are taken to an extreme that glorifies technical ignorance. If the foundation of knowledge is in one’s lived experience, as these ideologies purport, and one does not understand the technical logic used so effectively by dominant identity groups, then one can dismiss technical logic as merely the cultural logic of an opposing identity group. I experience the technically competent person as the Other and cannot perceive their actions as skill, but only as power, and in particular as power over me. Because my lived experience is my surest guide, what I experience must be so!

It is simply tragic that the education system has promoted this kind of thinking so much that it pervades even mainstream journalism. This is tragic for reasons I’ve expressed in “objectivity is powerful”. One solution is to provide more accessible accounts of the lived experience of technicality through qualitative reporting, which I have attempted in “technical work”.

But the real problem is that the kind of formal logic that is at the foundation of modern scientific thought, including its most recent manifestation, ‘data science’, is at its heart perfectly abstract and so cannot be captured by accounts of observed practices or lived experience. It is reason, or thought. Is it disembodied? Not exactly. But at least according to constructivist accounts of mathematical knowledge, which occupy a fortunate dialectical position in this debate, mathematical insight is built from embodied phenomenological primitives but becomes abstract through their psychological construction. This process makes it possible for people to learn abstract principles such as the mathematical theory of information, on which so much of the contemporary telecommunications and artificial intelligence apparatus depends. These are the abstract principles with which the mathematician Norbert Wiener was so intimately familiar.
