Digifesto

Reason returns to Berkeley

I’ve been struck recently by a subtle shift in messaging at UC Berkeley since Carol T. Christ became the university’s Chancellor. Incidentally, she is the first woman chancellor of the university, and her research background is in Victorian literature. I think both of these things may have something to do with the bold choice she has made in recent announcements: including reason among the University’s core values.

Notably, the word has made its appearance next to three other terms that have had much more prominence in the university in recent years: equity, inclusion, and diversity. For example, in the following statements:

In “Thoughts on Charlottesville”:

We must now come together to oppose what are dangerous threats to the values we hold dear as a democracy and as a nation. Our shared belief in reason, diversity, equity, and inclusion is what animates and supports our campus community and the University’s academic mission. Now, more than ever, those values are under assault; together we must rise to their defense.

And, strikingly, this message on “Free Speech”:

Nonetheless, defending the right of free speech for those whose ideas we find offensive is not easy. It often conflicts with the values we hold as a community—tolerance, inclusion, reason and diversity. Some constitutionally-protected speech attacks the very identity of particular groups of individuals in ways that are deeply hurtful. However, the right response is not the heckler’s veto, or what some call platform denial. Call toxic speech out for what it is, don’t shout it down, for in shouting it down, you collude in the narrative that universities are not open to all speech. Respond to hate speech with more speech.

The above paragraph comes soon after this one, in which Chancellor Christ defends Free Speech on Millian philosophical grounds:

The philosophical justification underlying free speech, most powerfully articulated by John Stuart Mill in his book On Liberty, rests on two basic assumptions. The first is that truth is of such power that it will always ultimately prevail; any abridgement of argument therefore compromises the opportunity of exchanging error for truth. The second is an extreme skepticism about the right of any authority to determine which opinions are noxious or abhorrent. Once you embark on the path to censorship, you make your own speech vulnerable to it.

This slight change in messaging strikes me as fundamentally wise. In the past year, the university has been wracked by extreme passions and conflicting interests, resulting in bad press externally and, I imagine, discomfort internally. But this was not unprecedented; the national political bifurcation could take hold at Berkeley precisely because the university had for years, with every noble intention, emphasized inclusivity and equity without elevating a binding agent that makes diversity meaningful and productive. This was partly due to the influence of late 20th century intellectual trends that burdened “reason” with the historical legacy of the regimes that upheld it as a virtue, which tended to be white and male. There was a time when “reason” was so associated with these powers that the term was used for purposes of exclusion–i.e. with the claim that new entrants to political and intellectual power were being “unreasonable”.

Times have changed precisely because the exclusionary use of “reason” was a corrupt one; reason in its true sense is impersonal and transcends individual situation even as it is immanent in it. This meaning of reason would be familiar to one steeped in an older literature.

Carol Christ’s wording reflects a 21st century theme that gives me profound confidence in Berkeley’s future: the recognition that reason does not oppose inclusion, but rather demands it, just as scientific logic demands properly sampled data. Perhaps the new zeitgeist at Berkeley has something to do with the new Data Science undergraduate curriculum. Given the state of the world, I’m proud to see reason make a comeback.


Notes on Posner’s “The Economics of Privacy” (1981)

Lately my academic research focus has been privacy engineering, the design of information processing systems that preserve their users’ privacy. I have been looking at the problem particularly through the lens of Contextual Integrity, a theory of privacy developed by Helen Nissenbaum (2004, 2009). According to this theory, privacy is defined as appropriate information flow, where “appropriateness” is determined relative to social spheres (such as health, education, finance, etc.) that have evolved norms based on their purpose in society.

To my knowledge, most existing scholarship on Contextual Integrity consists of applications of a heuristic process, associated with the theory, for evaluating the privacy impact of new technology. In this process, one starts by identifying a social sphere (or context, but I will use the term social sphere as I think it’s less ambiguous) and its normative structure. For example, to evaluate a new kind of education technology, one would identify the roles of the education sphere (teachers, students, guardians of students, administrators, etc.), the norms of information flow that hold in the sphere, and the disruptions to those norms the technology is likely to cause.

I’m coming at this from a slightly different direction. I have a background in enterprise software development, data science, and social theory. My concern is with the ways that technology is now part of how social spheres are constituted. If technology is to not just address existing norms but deal adequately with the way it self-referentially changes how new norms develop, we need to focus on the parts of Contextual Integrity that have heretofore been in the background: the rich social and metaethical theory of how social spheres and their normative implications form.

Because the ultimate goal is the engineering of information systems, I am leaning towards mathematical modeling methods that translate well between social scientific inquiry and technical design. Mechanism design, in particular, is a powerful framework from mathematical economics that looks at how different kinds of structures change the outcomes for actors participating in “games” involving strategic action and information flow. Mathematical economic modeling has been heavily critiqued over the years, for example on the grounds that people do not act with the unbounded rationality such models can imply. But these models can be a valuable first step in a technical context, especially because they establish the limits of a system’s manipulability by non-human actors such as AI. That latter standard makes this sort of model more relevant than it has ever been.
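To give a flavor of the kind of modeling I mean, here is a minimal sketch in Python, entirely my own toy example rather than anything from the Contextual Integrity literature: a brute-force check that truthful reporting is a dominant strategy in a second-price auction, the canonical strategy-proof mechanism. Manipulability checks of exactly this kind are what make mechanism design attractive for technical design.

    from itertools import product

    def second_price(bids):
        # Allocate to the highest bidder (ties go to the lowest index);
        # charge the second-highest bid.
        winner = max(range(len(bids)), key=lambda i: bids[i])
        price = sorted(bids, reverse=True)[1]
        return winner, price

    def is_strategy_proof(mechanism, types, n=2):
        # Check that no player, at any profile of valuations, gains by misreporting.
        for profile in product(types, repeat=n):
            for i, value in enumerate(profile):
                winner, price = mechanism(list(profile))
                honest = (value - price) if winner == i else 0.0
                for lie in types:
                    deviation = list(profile)
                    deviation[i] = lie
                    w, p = mechanism(deviation)
                    deviant = (value - p) if w == i else 0.0
                    if deviant > honest + 1e-9:
                        return False
        return True

    print(is_strategy_proof(second_price, types=[0.0, 0.5, 1.0]))  # True

The same brute-force pattern works for any direct mechanism over a small type space, which is what makes it useful as a design-time check on a system’s manipulability.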

This is my roundabout way of beginning to investigate the fascinating field of privacy economics. I am a new entrant. So I found what looks like one of the earliest highly cited articles on the subject written by the prolific and venerable Richard Posner, “The Economics of Privacy”, from 1981.

Richard Posner, from Wikipedia

Wikipedia reminds me that Posner is politically conservative, though apparently he has changed his mind in recent years: he now supports gay marriage and, since the 2008 financial crisis, has distanced himself from the laissez faire rational choice economics that underlies his legal theory. As I have mainly learned about privacy scholarship from more left-wing sources, it was interesting to read an article that comes from a different perspective.

Posner’s opening position is that the most economically interesting aspect of privacy is the concealment of personal information, and that this is interesting mainly because privacy is bad for market efficiency. He raises the examples of employers and employees searching for each other, and of potential spouses doing the same. In these cases, “efficient sorting” is facilitated by perfect information on all sides. Privacy is foremost a way of hiding disqualifying information–such as criminal records–from potential business associates and spouses, leading to a market inefficiency. I do not know why Posner does not cite Akerlof (1970) on the “market for ‘lemons’” in this article, but it seems to me that this is the economic theory most reflective of his argument. The essential question it raises is whether there’s any compelling reason why the market for employees should be any different from the market for used cars.
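To make the connection concrete, here is a minimal sketch of the Akerlof-style unraveling, my own construction rather than anything in Posner’s article: buyers who can only observe average quality set a price that drives the best sellers out of the market, round after round, until only the “lemons” remain.

    def lemons_market(qualities, buyer_premium=1.2, rounds=20):
        # Sellers value a good at its quality; buyers value it at
        # buyer_premium * quality, but can only observe the market average.
        market = sorted(qualities)
        for _ in range(rounds):
            price = buyer_premium * sum(market) / len(market)
            remaining = [q for q in market if q <= price]  # better sellers exit
            if remaining == market:
                break
            market = remaining
        return market

    # Five quality levels unravel down to the single worst lemon.
    print(lemons_market([10, 20, 30, 40, 50]))  # [10]

On Posner’s view, concealment of personal information does to the labor and marriage markets what unobservable quality does to this toy market.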

Posner raises and dismisses each objection he can find. One objection is that employers might heavily weight factors they should not, such as mental illness, gender, or homosexuality. He claims that there’s evidence that people are generally rational about these things, and that there’s no reason to think the market can’t make these decisions efficiently despite fears of bias. I assume this point has been hotly contested from the left since the article was written.

Posner then looks at the objection that privacy provides a kind of social insurance to those with “adverse personal characteristics” who would otherwise not be hired. He doesn’t like this argument because he sees it as allocating the costs of that person’s adverse qualities to a small group that has to work with that person, rather than spreading the cost very widely across society.

Whatever one thinks about whose interests Posner sides with and why, it is refreshing to read an article that at the very least lays out the trade-offs around privacy somewhat clearly. Yes, discrimination of many kinds is economically inefficient. We can expect the best performing companies to have progressive hiring policies, because that would allow them to find the best talent. That’s especially true if large social biases are otherwise unfairly skewing hiring.

On the other hand, the whole idea of “efficient sorting” assumes a policy-making interest that I’m pretty sure logically cannot serve the interests of everyone so sorted. It implies a somewhat brutally Darwinist stratification of personnel, and it is quite possible that this is not healthy for an economy in the long term. That said, in this article Posner seems open to other redistributive measures that would compensate for opportunities lost due to the revelation of personal information.

There’s an empirical part of the paper in which Posner shows that the percentages of black and Hispanic residents in a state are significantly correlated with the existence of state-level privacy statutes relating to credit, arrest, and employment history. He tries to spin this as an explanation of privacy statutes as the result of strongly organized black and Hispanic political organizations successfully lobbying in their interest on top of existing anti-discrimination laws. I would say the article does not provide enough evidence to strongly support this causal theory. The argument would be stronger if the regression had taken into account racial differences in credit, arrest, and employment history state by state, rather than assuming that the connection is so strong it supports this particular interpretation of the data. Still, it is interesting that this variable was more strongly correlated with the existence of privacy statutes than several other variables of interest.

It was probably my own ignorance that kept me from considering how strongly privacy statutes are part of a social justice agenda, broadly speaking. Considering that disparities in credit, arrest, and employment history could well be the result of other unjust biases, privacy winds up mitigating the anti-signal that these injustices send into the employment market. In other words, it’s not hard to get from Posner’s arguments to a pro-privacy position based, of all things, on market efficiency.

It would be nice to model that more explicitly, if it hasn’t been done already.
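As a first pass, here is a toy model of my own, not anything Posner estimates: suppose an adverse record fires on low ability but also, spuriously, on membership in a disadvantaged group. An employer who screens on the record then hires equally able members of the two groups at very different rates; a privacy statute that suppresses the record removes that gap (at the cost, as Posner would stress, of also suppressing the record’s informative component).

    import random

    def hiring_rates(n=100_000, spurious_rate=0.3, seed=0):
        rng = random.Random(seed)
        counts = {False: [0, 0], True: [0, 0]}  # group flag -> [hired, total]
        for _ in range(n):
            group = rng.random() < 0.5         # True = disadvantaged group
            ability = rng.gauss(0.0, 1.0)      # true productivity
            # The record fires on low ability, but also spuriously on group membership.
            record = ability < -1.0 or (group and rng.random() < spurious_rate)
            counts[group][0] += not record     # employer hires iff no record
            counts[group][1] += 1
        return {g: hired / total for g, (hired, total) in counts.items()}

    # With the record visible, hiring rates are roughly 0.84 vs 0.59 for two
    # groups of identical average ability. Under a privacy statute the record
    # is unobservable and, in this toy model, both rates are 1.0.
    print(hiring_rates())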

Posner is quite bullish on privacy tort, thinking that it is generally not so offensive from an economic perspective largely because it’s about preventing misinformation.

Overall, the paper is a valuable starting point for further study in the economics of privacy. Posner’s economic lens swiftly and clearly brings the trade-offs around privacy statutes to light. It’s impressively lucid work that surely bears directly on arguments about privacy and information processing systems today.

References

Akerlof, G. A. (1970). The market for “lemons”: Quality uncertainty and the market mechanism. The Quarterly Journal of Economics, 84(3), 488-500.

Nissenbaum, H. (2004). Privacy as contextual integrity. Wash. L. Rev., 79, 119.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Posner, R. A. (1981). The economics of privacy. The American economic review, 71(2), 405-409. (jstor)

Ulanowicz on thermodynamics as phenomenology

I’ve finally worked my way back to Ulanowicz, whose work so intrigued me when I first encountered it over four years ago. Reading a few of his papers on theoretical ecology gave me the impression that he is both a serious scientist and onto something profound. Now I’m reading Growth and Development: Ecosystems Phenomenology (1986), which looked to be the most straightforwardly mathematical introduction to his theory of ecosystem ascendency, his account of how ecosystems grow and develop over time.

I am eager to get to the hard stuff, where he cashes out the theory in terms of matrix multiplication representing networks of energy flows. I see several parallels to my own work and I’m hoping there are hints in here about how I can best proceed.
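For orientation, here is the central quantity as I understand it so far, sketched in Python; the three-compartment flow matrix is my own made-up illustration, not an example from the book. Ulanowicz’s ascendency weights each energy flow T[i][j] by how far it deviates from what the network’s aggregate inflows and outflows would predict: an average mutual information, scaled by total system throughput.

    import math

    def ascendency(T):
        # T[i][j]: energy flow from compartment i to compartment j.
        total = sum(sum(row) for row in T)                       # total system throughput
        out = [sum(row) for row in T]                            # outflows T_i.
        inn = [sum(row[j] for row in T) for j in range(len(T))]  # inflows T_.j
        A = 0.0
        for i, row in enumerate(T):
            for j, tij in enumerate(row):
                if tij > 0:
                    A += tij * math.log2(tij * total / (out[i] * inn[j]))
        return A

    # A made-up three-compartment flow network.
    T = [[0, 10, 2],
         [0, 0, 8],
         [1, 0, 0]]
    print(ascendency(T))

On this reading, growth shows up as an increase in total throughput and development as an increase in the mutual-information term, which is what makes the matrix formulation attractive.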

Before diving in further, though, I must note a few interesting ways in which Ulanowicz positions his argument.

One important one is that he uses the word “phenomenology” in the title and in the opening argument about the nature of thermodynamics. Thermodynamics, he argues, is unlike many other, more reductionist parts of physics because it draws general statistical laws about macroscopically observed systems, laws consistent with many different configurations of microphenomena. This gives it a kind of empirical weakness compared to the lower-level laws; nevertheless, there is a compelling universality to its descriptive power that informs the application of so many other more specialized sciences.

This resonates with many of the themes I’ve been exploring through my graduate study. Ulanowicz never cites Francisco Varela though the latter is almost a contemporary and similarly interested in combining principles of cybernetics with the life sciences (in Varela’s case, biology). Both Ulanowicz and Varela come to conclusions about the phenomenological nature of the life sciences which are unusual in the hard sciences.

Naturally, the case has been made that the social sciences are phenomenological as well, though generally these claims are made without a hope of making a phenomenological social science as empirically rigorous as ecology, let alone biology. Nevertheless Ulanowicz does hint, as does Varela, at the possibility of extending his models to social systems.

This is of course fascinating given the difficult problem of the “macro-micro link” (see Sawyer). Ecosystem size and the other properties Ulanowicz derives are “emergent” properties of an ecosystem; his theory is, I gather, an attempt at a universal description of how such properties emerge.

Somehow, Ulanowicz manages to take on these problems without ever invoking the murky language of “complex adaptive systems”. This is, I suspect, a huge benefit to his work: he writes strictly as a scientist and does not mystify things with an undefined vocabulary of ‘complexity’.

It is a deeper technical dive than I’ve been used to for some time, but I’m very grateful to be in a more technical academic milieu now than I’ve been in for several years. More soon.

References

Ulanowicz, R. E. (1986). Growth and development: Ecosystems phenomenology. Springer-Verlag.

Differing ethnographic accounts of the effectiveness of technology

I’m curious as I compare two recent papers, one by Christin (2017) and one by Levy (2015), both about the role of technology in society and both backed by ethnographic data.

What interests me is that the two papers both examine the use of algorithms in practice, but they differ in their account of the effectiveness of the algorithms used. Christin emphasizes the way web journalists and legal professionals deliberately undermine the impact of algorithms. Levy discusses how electronic monitoring achieves central organizational control over truckers.

I’m interested in the different framings because, as Christin points out, a central point of contention in the critical scholarship around data and algorithms is the effectiveness of the technology, especially “in practice”. Implicitly if not explicitly, the argument is that if the technology is not as effective as its advocates say it is, then it is overhyped, and this debunking is an accomplishment of the critical and often ethnographic field.

On the other hand, if the technology is effective at control, as Levy’s article argues it is, then it poses a much more real managerialist threat to workers’ autonomy. Identifying that this is occurring is also a serious accomplishment of the ethnographic field.

What must be recognized, however, is that these two positions contradict each other, at least as general perspectives on data collection and algorithmic decision-making. The use of a particular technology in a particular place cannot be both so ineffective as to be overhyped and so effective as to constitute a managerialist threat. The substance of the two critiques is at odds, and they call for different pragmatic responses: the former suggests a rhetorical strategy of further debunking, while the latter demands a material strategy of changing working conditions.

I have seen both strategies used in critical scholarship, sometimes even in the same article, chapter, or book. I have never seen critical scholars attempt to resolve this difference between themselves using their shared assumptions and methods. I’d like to see more resolution in the ethnographic field on this point.

Correction, 8/10/17:

The apparent tension is resolved on a closer reading of Christin (2017). The argument there is that technology (in the managerialist use common to both papers) is ineffective when its intended use is resisted by those being managed by it.

That shifts the ethnographic challenge away from an attack on the technical quality of the work, i.e. on its capacity to accomplish what it is designed to do (which is a non-starter), and toward the uncontroversial proposition that the effectiveness of a technology depends in part on assumptions about how it will be used, and that these assumptions can be violated.

The political question of to what extent these new technologies should be adopted can then be addressed straightforwardly in terms of whether a technology is fully and properly adopted, or only partially and improperly adopted. Using language like this would help bridge the technical and ethnographic fields.

References

Christin, 2017. “Algorithms in practice: Comparing journalism and criminal justice.” (link)

Levy, 2015. “The Contexts of Control: Information, Power, and Truck-Driving Work.” (link)

legitimacy in peace; legitimacy in war

I recently wrote a reflection on the reception of Habermas in the United States and argued that the lack of intellectual uptake of his later work has been a problem for politics here. Here’s what I wrote, admittedly venting a bit:

In my experience, it is very difficult to find support in academia for the view that rational consensus around democratic institutions is a worthwhile thing to study or advocate for. Identity politics and the endless contest of perspectives is much more popular among students and scholars coming out of places like UC Berkeley. In my own department, students were encouraged to read Habermas’s early work in the context of the identity politics critique, but never exposed to the later work that reacted to these critiques constructively to build a theory that was specifically about pluralism, which is what identity politics need in order to unify as a legitimate state. There’s a sense in which the whole idea that one should continue a philosophical argument to the point of constructive agreement, despite the hard work and discipline that this demands, was abandoned in favor of an ideology of intellectual diversity that discouraged scrutiny and rigor across boundaries of identity, even in the narrow sense of professional or disciplinary identity.

Tapan Parikh succinctly made the point that Habermas’s philosophy may be too idealistic to ever work out:

“I still don’t buy it without taking history, race, class and gender into account. The ledger doesn’t start at zero I’m afraid, and some interests are fundamentally antagonistic.”

This objection really is the crux of it all, isn’t it? There is a contradiction between agreement, necessary for a legitimate pluralistic state, and antagonistic interests of different social identities, especially as they are historically and presently unequal. Can there ever be a satisfactory resolution? I don’t know. Perhaps the dialectical method will get us somewhere. (This is a blog after all; we can experiment here).

But first, a note on intellectual history, since part of the fantasy of this argument is that intellectual history matters for actual political outcomes. When discussing the origins of contemporary German political theory, we should acknowledge that post-War Germany has been profoundly interested in peace because it experienced the worst of war. The roots of German theories of peace are in Immanuel Kant’s work on “perpetual peace”, the hypothetical situation in which states are no longer at war. He wrote an essay about it in 1795, which by the way begins with this wonderful preface:

PERPETUAL PEACE

Whether this satirical inscription on a Dutch innkeeper’s sign upon which a burial ground was painted had for its object mankind in general, or the rulers of states in particular, who are insatiable of war, or merely the philosophers who dream this sweet dream, it is not for us to decide. But one condition the author of this essay wishes to lay down. The practical politician assumes the attitude of looking down with great self-satisfaction on the political theorist as a pedant whose empty ideas in no way threaten the security of the state, inasmuch as the state must proceed on empirical principles; so the theorist is allowed to play his game without interference from the worldly-wise statesman. Such being his attitude, the practical politician–and this is the condition I make–should at least act consistently in the case of a conflict and not suspect some danger to the state in the political theorist’s opinions which are ventured and publicly expressed without any ulterior purpose. By this clausula salvatoria the author desires formally and emphatically to deprecate herewith any malevolent interpretation which might be placed on his words.

When the old masters are dismissed as irrelevant or dense, they are denied credit for being very clever.

That said, I haven’t read this essay yet! But I have a somewhat informed hunch that more contemporary work dealing directly with the problems it raises makes good headway on the problem of political unity. For example, Bennington’s (2012) article “Kant’s Open Secret” is good and relevant to discussions of technical design and algorithmic governance. Cederman, who has been discussed here before, builds a computational simulation of peace inspired by Kant.

Here’s what I can sketch out, perhaps ignorantly. What’s at stake is whether antagonistic actors can resolve their differences and maintain peace. The proposed mechanism for this peace is some form of federated democracy. So to paint a picture: what I think Habermas is after is a theory of how governments can be legitimate in peace. What that requires, in his view, is some form of collective deliberation where actors put aside their differences and agree on some rules: the law.

What about when race and class interests are, as Parikh suggests, “fundamentally antagonistic”, and the unequal ledger of history gives cause for grievances?

Well, all too often, these are the conditions for war.

In the context of this discussion, which started with a concern about the legitimacy of states and especially the United States, it struck me that there’s quite a difference between how states legitimize themselves at peace versus how they legitimize themselves while at war.

War, in essence, allows some actors in the state to ignore the interests of other actors. There’s no need for discursive, democratic, cosmopolitan balancing of interests. What’s required is that an alliance of interests maintain the necessary power over rivals to win the war. War legitimizes autocracy and deals with dissent by getting rid of it rather than absorbing and internalizing it. Almost by definition, wars challenge the boundaries of states and the way underlying populations legitimize them.

So to answer Parikh, the alternative to peaceful rule of law is war. And there certainly have been serious race wars and class wars. As an example, last night I went to an art exhibit at the Brooklyn Museum entitled “The Legacy of Lynching: Confronting Racial Terror in America”. The phrase “racial terror” is notable because of how it positions racist lynching as a form of terrorism, which we have been taught to treat as the activity of rogue, non-state actors threatening national security. This is deliberate, as it frames black citizens as in need of national protection from white terrorists who are in a sense at war with them. Compare and contrast this with right-wing calls for “securing our borders” from allegedly dangerous immigrants, and you can see how both “left” and “right” wing political organizations in the United States today are legitimized in part by the rhetoric of war, as opposed to the rhetoric of peace.

To take a cynical view of the current political situation in the United States, which may be the most realistic view, the problem appears to be that we have a two party system in which the two parties are essentially at war, whether rhetorically or in terms of their actions in Congress. The rhetoric of the current president has made this uncomfortable reality explicit, but it is not a new state of affairs. Rather, one of the main talking points in the previous administration and the last election was the insistence by the Democratic leadership that the United States is a democracy that is at peace with itself, and so cooperation across party lines was a sensible position to take. The efforts by the present administration and Republican leadership to dismantle anything of the prior administration’s legacy make the state of war all too apparent.

I don’t mean “war” in the sense of open violence, of course. I mean it in the sense of defection and disregard for the interests of those outside of one’s political alliance. The whole question of whether and how foreign influence in the election should be considered depends in part on whether one sees the contest between political parties in the United States as warfare or not. It is natural for different sides in a war to seek foreign allies, even and almost especially if they are engaged in civil war or regime change. The American Revolution was backed by the French. The Bolshevik Revolution in Russia was backed by Germany. That’s just how these things go.

As I write this, I become convinced that this is really what it comes to in the United States today. There are “two Americas”. To the extent that there is stability, it’s not a state of peace; it’s a state of equilibrium, or gridlock.

The meaning of gridlock in governance

I’ve been so intrigued by this article, “Dems Can Abandon the Center — Because the Center Doesn’t Exist”, by Eric Levitz in NY Mag. The gist of the article is that most policies that we think of as “centrist” are actually very unrepresentative of the U.S. population’s median attitude on any particular subject, and are held only by a small minority that Levitz associates with former Mayor Bloomberg of New York City. It’s a great read and cites much more significant research on the subject.

One cool thing the article provides is this nice graphic showing the current political spectrum in the U.S.:

The U.S. political spectrum, from Levitz, 2017.

In comparison to that, this blog post is your usual ramble of no consequence.

Suppose there’s an organization whose governing body doesn’t accomplish anything, despite being controversial, well-publicized, and apparently not performing satisfactorily. What does that mean?

From an outside position (that of somebody being governed by such a body), what it means is sustained dissatisfaction and the perception that the governing body is dys- or non-functional. This spurs the dissatisfied party to invest resources or take action to change the situation.

However, if the governing body is responsive to the many and conflicting interests of the governed, the stasis of the government could mean one of at least two things.

One thing it could mean is that the mechanism through which the government changes is broken.

Another thing it could mean is that the mechanism through which the government changes is working, and the state of governance reflects the equilibrium of the powers that contest for control of the government.

The latter view is not politically exciting, and indeed it is politically self-defeating for whoever holds it. If we see government as something that responds to the activity of many interests, mediating between them and somehow achieving their collective agenda, then the problem with seeing a gridlocked government as having achieved a “happy” equilibrium, or a “correct” view, is that it discourages partisan or interested engagement. If one side stops participating in the (expensive, exhausting) arm wrestle, the other side gains ground.

On the other hand, the stasis should not in itself be considered cause for alarm, apart from the dissatisfaction that results from one’s particular perspective on the total system.

Another angle on this is that from every point in the political spectrum, and especially the points at the extremes, the procedural mechanisms of government are going to look broken, because they don’t produce satisfying outcomes. (Consider the last election, where both sides argued that the system was rigged when they thought they were losing or had lost.) But of course these mechanisms are always already part of the governance system itself and subject to being governed by it, so pragmatically one will approve of them just insofar as they give one’s own position influence over outcomes. (Here I’m assuming that strict proceduralists are somewhere on the multidimensional political spectrum themselves, motivated by, e.g., the appeal of stability or legitimacy in some sense.)

Habermas seems quaint right now, but shouldn’t

By chance I was looking up Habermas’s later philosophical work today, like Between Facts and Norms (1992), which has been said to be the culmination of the project he began with The Structural Transformation of the Public Sphere in 1962. In it, he argues that the law is what gives pluralistic states their legitimacy, because the law enshrines the consent of the governed. Power cannot legitimize itself; democratic law is the foundation for the legitimate state.

Habermas’s later work is widely respected in the European Union, which by and large has functioning pluralistic democratic states. Habermas emerged from the Frankfurt School to become a theorist of modern liberalism and was good at it. While it is an empirical question how much education in political theory is tied to the legitimacy and stability of the state, anecdotally we can say that Habermas is a successful theorist and the German-led European Union is, presently, a successful government. For the purposes of this post, let’s assume that this is at least in part due to the fact that citizens are convinced, through the education system, of the legitimacy of their form of government.

In the United States, something different happened. Habermas’s earlier work (such as the The Structural Transformation of the Public Sphere) was introduced to United States intellectuals through a critical lens. Craig Calhoun, for example, argued in 1992 that the politics of identity was more relevant or significant than the politics of deliberation and democratic consensus.

That was over 25 years ago, and that moment was influential in the way political thought has unfolded in Europe and the United States. In my experience, it is very difficult to find support in academia for the view that rational consensus around democratic institutions is a worthwhile thing to study or advocate for. Identity politics and the endless contest of perspectives is much more popular among students and scholars coming out of places like UC Berkeley. In my own department, students were encouraged to read Habermas’s early work in the context of the identity politics critique, but never exposed to the later work that reacted to these critiques constructively to build a theory that was specifically about pluralism, which is what political identities need in order to unify as a legitimate state. There’s a sense in which the whole idea that one should continue a philosophical argument to the point of constructive agreement, despite the hard work and discipline that this demands, was abandoned in favor of an ideology of intellectual diversity that discouraged scrutiny and rigor across boundaries of identity, even in the narrow sense of professional or disciplinary identity.

The problem with this approach to intellectualism is that it is fractious and undermines itself. When fractiousness is taken as an intellectual virtue, it is no wonder that boorish overconfidence can take advantage of it in an open contest. And indeed the political class in the United States today has been undermined by its inability to justify its own power and institutions in anything but the fragmented arguments of identity politics.

It is a sad state of affairs. I can’t help but feel my generation is intellectually ill-equipped to respond to the very prominent challenges to the legitimacy of the state that are being leveled at it every day. Not to put too fine a point on it, I blame the intellectual laziness of American critical theory and its inability to absorb the insights of Habermas’s later theoretical work.

Addendum 8/7/17a:

It has come to my attention that this post is receiving a relatively large amount of traffic. This seems to happen when I hit a nerve, specifically when I recommend Habermas over identitarianism in the context of UC Berkeley. Go figure. I respectfully ask for comments from any readers. Some have already helped me further my thinking on this subject. Also, I am aware that a Wikipedia link is not the best way to spread understanding of Habermas’s later political theory. I can recommend this book review (Chriss, 1998) of Between Facts and Norms as well as the Habermas entry in the Stanford Encyclopedia of Philosophy which includes a section specifically on Habermasian cosmopolitanism, which seems relevant to the particular situation today.

Addendum 8/7/17b:

I may have guessed wrong. The recent traffic has come from Reddit. Welcome, Redditors!

 

Propaganda cyberwar: the new normal?

Reuters reports on the Washington Post’s report, citing U.S. intelligence officials, that the UAE arranged for the hacking of Qatari government sites, posting “fiery but false” quotes from Qatar’s emir. These quotes were then used by Saudi Arabia, the UAE, Egypt, and Bahrain to justify cutting diplomatic and transport ties with Qatar.

Qatar says the quotes from the emir are fake, posted by hackers. U.S. intelligence officials now say (to the Post) that they have information about UAE officials discussing the hacks before they occurred.

UAE denies the hacks, saying the reports of them are false, and argues that what is politically relevant is Qatar’s Islamist activities.

What a mess.

One can draw a comparison between these happenings in the Middle East and the U.S.’s Russiagate.

The comparison is difficult because any attempt to summarize what is going on with Russiagate runs into the difficulty of aligning with the narrative of one party or another presently battling for the ascendancy of their interpretation. But for clarity let me say that by Russiagate I mean the complex of allegations and counterclaims including: that the Russian government, or some Russians not associated with the government, or somebody else hacked the DNC and leaked their emails to influence the 2016 election (or its perceived legitimacy); that the Russian government (or maybe somebody else…) propped up alt-right media bots to spread “fake news” to swing voters; that swing voters were identified through the hacking of election records; that some or all of these allegations are false and promoted by politicized media outlets; that if the allegations are true, their impact on the 2016 election was insufficient to change the outcome (hence not delegitimizing it); and the diplomatic spat over recreational compounds used by Russians in the U.S. and by the U.S. in Russia, which now turns on the fact that the outgoing administration wanted to reprimand Russia for alleged hacks that allegedly led to its party’s loss of control of the government…

Propaganda

It is dizzying. In both the Qatari and U.S. cases, without very privileged inside knowledge we are left with vague and uncertain impressions of a new condition:

  • the relentless rate at which “new developments” in these stories are made available, recapitulated, or commented on
  • the weakness with which they are confirmed or denied (because they are due to anonymous officials or unaccountable leaks)
  • our dependence on trusted authorities for our understanding of the problem when that trust is constantly being eroded
  • the variety of positions taken on any particular event, and the accessibility of these diverse views

Is any of this new? Maybe it’s fair to say it’s “increasing”, as the Internet has continuously inflated the speed and variety and scale of everything in the media, or seemed to.

I have no wish to recapitulate the breathless hyperbole about how media is changing “online”; this panting has been going on continuously for fifteen years at least. But recently I did see what seemed like a new insight amid the broader discussion. Once, we were warned against the dangers of filter bubbles, the technologically reinforced perspectives we take on when social media and search engines are trained on our preferences. Buzzfeed admirably tried to design a feature to get people Out of Their Bubble, but it drew an insightful reaction from Rachel Haser:

In my experience, people understand that other opinions exist, and what the opinions are. What people don’t understand is where the opinions come from, and they don’t care to find out for themselves.

In other words: it is not hard for somebody to get out of their own bubble. Somebody else’s opinion is just a click or a search away. Within the narrow dichotomies of the U.S. political field, I’m constantly being told by the left-wing media who the right-wing pundits are, what they are saying, and why they are ridiculous. The right-wing media is constantly reporting on what left-wing people are doing and why they are ridiculous. If I ever want to verify this for myself, I can simply watch a video or read an article from a different point of view.

None of this access to alternative information will change my mind, because my habitus is already set by my life circumstances and offline social and institutional relationships. The semiotic environment does not determine my perspective; the economic environment does. What the semiotic environment provides is, one way or another, an elaborate system of propaganda that reflects the possible practical and political alliances available for the deployment of capital. Most of what is said in “the media” is true; most of what is said in “the media” is spun. For the purposes of this post, and to distinguish it from responsible scientific research or the reporting of “just the facts”, which does happen (!), I will refer to it generically as propaganda.

Propaganda is obviously not new. Propaganda on the Internet is as new as the Internet. As the Internet expands (via smartphones and “things”), so too does propaganda. This is one part of the story here.

The second part of the story is all the hacks.

Hacks

What are hacks? Technically, a hack can be any of many kinds of intervention into a (socio)technical system that creates behavior unexpected by the designer or owner of the system. It is a use or appropriation by somebody (the hacker) of somebody else’s technology, for the former’s advantage. Things hacks can accomplish include: taking otherwise secret data, modifying data, and causing computers or networks to break down.

“CIA”, by Randall Munroe

There are interesting reasons why hacks have special social and political relevance. One important thing about computer hacking is that it requires technical expertise to understand how it works. This puts the analysis of a hack, and especially the attribution of the hack to some actor, in the hands of specialists. In this sense, “solving” a hack is like “solving” a conventional crime. It requires forensic experts, detectives who understand the motivation of potential suspects, and so on.

Another thing about hacks over the Internet is that they can come from “anywhere”, because Internet. This makes it harder to find hackers, and it also makes hacks convenient tools for transnational action. It has been argued that as an integrated global economy raises the costs of physically violent war, the use of cyberwar as a softer alternative will rise.

In the cases described at the beginning of this post, hacks play many different roles:

  • a form of transgression, requiring apology, redress, or retaliation
  • a kind of communication, sending a message (perhaps true, or perhaps false) to an audience
  • the referent of communication, what is being discussed, especially with respect to its attribution (which is necessary for apology, redress, retaliation)

The difficulty with reporting about hacks, at least as far as reporting to the nonexpert public goes, is that every hack raises the specter of uncertainty about where it came from, whether it was as significant as the reporters say, whether the suspects have been framed, and so on.

If a propaganda war is a fire, cyberwar throws gasoline on the flame, because all the political complexity of the media can fracture the narrative around each hack until it too goes up in meaningless postmodern smoke.

Skooling?

I am including, by the way, the use of bots to promote content in social media as a “hack”. I’m blending slightly two senses of “hack”: the more benign “MIT” sense of a creative technical solution to a problem, and the more specific sense of obviating computer security. Since the latter sense has expanded to include social engineering efforts such as phishing, the automated manipulation of social media to present a false or skewed narrative as true seems to fit here as well.

I have to say that this sort of media hacking–creating bots to spread “fake news” and so on–doesn’t have a succinct name yet, so I propose “skooling” or “sk00ling”, since:

  • it’s a phrase that means something similar to “pwning”/”owning”
  • the activity is like “phishing” in the sense that it is automated social engineering, but en masse (i.e. a school of fish)
  • the point of the hack is to “teach” people something (i.e. some news or rumor), so to speak.

It turns out that this sort of media hacking isn’t just the bailiwick of shadowy intelligence agencies and organized cybercriminals. Run-of-the-mill public relations firms like Bell Pottinger can do it. Naturally this is not considered on par with computer security crime, though there is a sense in which it is a kind of computer-mediated fraud.

Putting it all together, we can imagine a sophisticated propaganda cyberwar campaign that goes something like this: an attacker identifies targets vulnerable to persuasion, using hacks and other means of collecting publicly or commercially available personal data, and does its best to cover its tracks for plausible deniability. Then it skools the targets to create the desired effect. The skooling is itself a form of hack, so the source of that attack is also obscured. Propaganda flares around both hacks (the one for data access, and the skooling). But if enough of the targets are affected (maybe they change how they vote in an election, or don’t vote at all), then the conversion rate is good enough and worth the investment.

Economics and Expertise

Of course, it would be simplistic to assume that every part of this value chain is performed by the same vertically integrated organization. Previous research on the spam value chain has shown how spam is an industry with many different required resources. Bot-nets are used to send mass emails; domain names are rented to host target web sites; there are even real pharmaceutical companies producing real knock-off viagra for those who have been coaxed into buying it (see Kanich et al., 2008; Levchenko et al., 2011). Just as in a real industry, these different resources, or parts of the supply chain, need not all be controlled by the same organization. On the contrary, the cybercrime economy is highly segmented into many independent actors with limited knowledge of each other, precisely because this makes it harder to catch them. So, for example, somebody who owns a botnet will rent it out to a spammer, who will then contract with a supplier.

Should we expect the skooling economy to work any differently? This depends a little on the arms race between social media bot creators and social media abuse detection and reporting. This has been a complex matter for some time, particularly because it is not always in a social media company’s interest to reject all bot activity as abuse even when this activity can be detected. Skooling is good for Twitter’s business, arguably.

But it may well be that the expertise needed to set up influential clusters of bots to augment the power of some ideological bloc is available in a more or less mercenary way. A particular cluster of bots in social media may or may not be positioned for a specific form of ideological attack or target; in that case the asset is not as multipurpose as a standard botnet, which can run many different kinds of programs, from spam to denial of service. (These are empirical questions, and at the moment I don’t know the answers.)

The point is that because of the complexity of the supply chain, attribution need not be straightforward at all. Taking as an example the alleged “alt-right” social media bot clusters: these clusters could be paid for (and their agendas influenced) by a succession of different actors (including right-wing Americans, Russians, and whoever else). There is certainly the potential for false flag operations if the point of the attack is to make it appear that somebody else has transgressed.

Naturally these subtleties don’t help the public understand what is happening to them. If people are aware of being skooled, they are lucky. If they can correctly attribute the attack to a party involved, they are even luckier.

But to be realistic, most won’t have any idea this is happening, or happening to them.

Which brings me to my last point: the role of cybersecurity expertise in the propaganda cyberwar. Let me define cybersecurity expertise as the skill set necessary to identify and analyze hacks. Of course this form of expertise isn’t monolithic; there are many different attack vectors for hacks, and understanding different physical and virtual vectors requires different skills. But knowing which skills are relevant in which contexts is, for our purposes, just another part of cybersecurity expertise, one that makes it all the more inscrutable to those who don’t have it. Cybersecurity expertise is also the kind of expertise needed to execute a hack (as defined above), though again this is a different variation of the skill set. I suppose it’s a bit like the Dark Arts in Harry Potter.

Because in the propaganda cyberwar the media through which people craft their sense of shared reality is vulnerable to cyberattacks, this gives both hackers and cybersecurity experts extraordinary new political powers. Both offensive and defensive security experts are likely to be for hire. There’s a marketplace for their first-order expertise, and then there’s a media marketplace for second-order reporting of the outcomes of their forensic judgments. The results of cybersecurity forensics need not be faithfully reported.

Outcomes

I don’t know what the endgame for this is. If I had to guess, I’d say one of two outcomes is likely. The first is that social media becomes more distrusted as a source of information as the amount of skooling increases. This doesn’t mean that people would stop trusting information from on-line sources, but it does mean that they would pick which on-line sources they trust and read those directly, instead of trusting whatever the people they know share. If social media becomes less determinative of people’s discovery of and preferences for media outlets, then they are likely to pick sources that reflect their off-line background instead, which gets us back to the discussion of propaganda at the beginning of this post. In that case, we would expect skooling to continue, but to be relegated to the background, as spamming has been. There will be people who fall prey to it, and that may matter for political outcomes, but it will become, like spam, a normal fact of life and no longer newsworthy. The vulnerability of the population to skooling and other propaganda cyberwarfare will be due to their out-of-band, offline education and culture.

Another possibility is that an independent, trusted, international body of cybersecurity experts becomes involved in analyzing and vetting skooling campaigns and other political hacks. This would have all the challenges of establishing scientific consensus as well as solving politicized high-profile crimes. Of course it would have enemies. But if it were trusted enough, it could become the pillar of political sanity that prevents a downslide into perpetual chaos.

I suppose there are intermediate outcomes as well, where multiple poles of trusted cybersecurity experts weigh in and report on hacks in ways that reflect the capital-rich interests that hire them. Popular opinion follows these authorities, as it has followed authorities for centuries. Nations maintain themselves, and so on.

Is it fair to say that propaganda cyberwar is “the new normal”? It’s perhaps a trite thing to say. For it to be true, just two things must hold. First, it has to be new: it must be happening now, as of recently. I feel I must state this obvious requirement only because I recently saw “the new normal” used to describe a situation that was not in fact occurring at all. I believe the phrase du jour for that sort of writing is “fake news”.

I do believe the propaganda cyberwar is new, or at least newly prominent because of Russiagate. We are sensitized to the political use of hacks now in a way that we haven’t been before.

The second requirement is that the new situation becomes normal, ongoing and unremarkable. Is the propaganda cyberwar going to be normal? I’ve laid out what I think are the potential outcomes. In some of them, indeed it does become normal. I prefer the outcomes that result in trusted scientific institutions partnering with criminal justice investigations in an effort to maintain world peace in a more modernist fashion. I suppose we shall have to see how things go.

References

Kanich, C., Kreibich, C., Levchenko, K., Enright, B., Voelker, G.M., Paxson, V. and Savage, S., 2008, October. Spamalytics: An empirical analysis of spam marketing conversion. In Proceedings of the 15th ACM conference on Computer and communications security (pp. 3-14). ACM.

Levchenko, K., Pitsillidis, A., Chachra, N., Enright, B., Félegyházi, M., Grier, C., Halvorson, T., Kanich, C., Kreibich, C., Liu, H. and McCoy, D., 2011, May. Click trajectories: End-to-end analysis of the spam value chain. In Security and Privacy (SP), 2011 IEEE Symposium on (pp. 431-446). IEEE.