Reuters reports on the Washington Post’s story, citing U.S. intelligence officials, that the UAE arranged for the hacking of Qatari government sites to post “fiery but false” quotes attributed to Qatar’s emir. These quotes were then used by Saudi Arabia, the UAE, Egypt, and Bahrain to justify cutting diplomatic and transport ties with Qatar.
Qatar says the quotes from the emir are fake, posted by hackers. U.S. intelligence officials now say (to the Post) that they have information about the UAE discussing the hacks before they occurred.
The UAE denies the hacks, saying the reports of them are false, and argues that what is politically relevant is Qatar’s Islamist activities.
What a mess.
One can draw a comparison between these happenings in the Middle East and the U.S.’s Russiagate.
The comparison is difficult because any attempt to summarize what is going on with Russiagate runs into the difficulty of aligning with the narrative of one party or another presently battling for the ascendancy of its interpretation. But for clarity let me say that by Russiagate I mean the complex of allegations and counterclaims including: that the Russian government, or some Russians who were not associated with the government, or somebody else hacked the DNC and leaked its emails to influence the 2016 election (or its perceived legitimacy); that the Russian government (or maybe somebody else…) propped up alt-right media bots to spread “fake news” to swing voters; that swing voters were identified through the hacking of election records; that some or all of these allegations are false and promoted by politicized media outlets; that if the allegations are true, their impact on the 2016 election was insufficient to have changed the outcome (hence not delegitimizing it); the diplomatic spat over recreational compounds used by Russians in the U.S. and by the U.S. in Russia, which is now based on the fact that the outgoing administration wanted to reprimand Russia for alleged hacks that allegedly led to its party’s loss of control of the government….
It is dizzying. In both the Qatari and U.S. cases, without very privileged inside knowledge we are left with vague and uncertain impressions of a new condition:
- the relentless rate at which “new developments” in these stories are made available, recapitulated, or commented on
- the weakness with which they are confirmed or denied (because they are due to anonymous officials or unaccountable leaks)
- our dependence on trusted authorities for our understanding of the problem when that trust is constantly being eroded
- the variety of positions taken on any particular event, and the accessibility of these diverse views
Is any of this new? Maybe it’s fair to say it’s “increasing”, as the Internet has continuously inflated the speed and variety and scale of everything in the media, or seemed to.
I have no wish to recapitulate the breathless hyperbole about how media is changing “online”; this panting has been going on continuously for fifteen years at least. But recently I did see what seemed like a new insight in the broader discussion. Once, we were warned against the dangers of filter bubbles, the technologically reinforced perspectives we take on when social media and search engines are trained on our preferences. BuzzFeed admirably tried to design a feature to get people Out of Their Bubble, but that got an insightful reaction from Rachel Haser:
In my experience, people understand that other opinions exist, and what the opinions are. What people don’t understand is where the opinions come from, and they don’t care to find out for themselves.
In other words: it is not hard for somebody to get out of their own bubble. Somebody else’s opinion is just a click or a search away. Among the narrow dichotomies of the U.S.’s political field, I’m constantly being told by the left-wing media who the right-wing pundits are, what they are saying, and why they are ridiculous. The right-wing media is constantly reporting on what left-wing people are doing and why they are ridiculous. If I ever want to verify for myself, I can simply watch a video or read an article from a different point of view.
None of this access to alternative information will change my mind because my habitus is already set by my life circumstances and offline social and institutional relationships. The semiotic environment does not determine my perspective; the economic environment does. What the semiotic environment provides is, one way or another, an elaborate system of propaganda which reflects the possible practical and political alliances that are available for the deployment of capital. Most of what is said in “the media” is true; most of what is said in “the media” is spun; for the purpose of this post and to distinguish it from responsible scientific research or reporting of “just the facts”, which does happen (!), I will refer to it generically as propaganda.
Propaganda is obviously not new. Propaganda on the Internet is as new as the Internet. As the Internet expands (via smartphones and “things”), so too does propaganda. This is one part of the story here.
The second part of the story is all the hacks.
What are hacks? Technically, a hack can be any of many different kinds of interventions into a (socio)technical system that create behavior unexpected by the designer or owner of the system. It is a use or appropriation by somebody (the hacker) of somebody else’s technology, for the former’s advantage. Hacks can accomplish, for example: taking otherwise secret data, modifying data, and causing computers or networks to break down.
“CIA”, by Randall Munroe
There are interesting reasons why hacks have special social and political relevance. One important thing about computer hacking is that it requires technical expertise to understand how it works. This puts the analysis of a hack, and especially the attribution of the hack to some actor, in the hands of specialists. In this sense, “solving” a hack is like “solving” a conventional crime. It requires forensic experts, detectives who understand the motivation of potential suspects, and so on.
Another thing about hacks over the Internet is that they can come from “anywhere”, because Internet. This makes it harder to find hackers and also makes hacks convenient tools for transnational action. It has been argued that as the costs of physical violent war increase with an integrated global economy, the use of cyberwar as a softer alternative will rise.
In the cases described at the beginning of this post, hacks play many different roles:
- a form of transgression, requiring apology, redress, or retaliation
- a kind of communication, sending a message (perhaps true, or perhaps false) to an audience
- the referent of communication, what is being discussed, especially with respect to its attribution (which is necessary for apology, redress, retaliation)
The difficulty with reporting about hacks, at least as far as reporting to the nonexpert public goes, is that every hack raises the specter of uncertainty about where it came from, whether it was as significant as the reporters say, whether the suspects have been framed, and so on.
If a propaganda war is a fire, cyberwar throws gasoline on the flame, because all the political complexity of the media can fracture the narrative around each hack until it too goes up in meaningless postmodern smoke.
I am including, by the way, the use of bots to promote content in social media as a “hack”. I’m blending slightly two meanings of “hack”: the more benign “MIT” sense of a hack as a creative technical solution to a problem, and the more specific sense of circumventing computer security. Since the latter sense of “hack” has expanded to include social engineering efforts such as phishing, the automated influence of social media to present a false or skewed narrative as true seems to also fit here.
I have to say that this sort of media hacking–creating bots to spread “fake news” and so on–doesn’t have a succinct name yet. I propose “skooling” or “sk00ling”, since
- it’s a phrase that means something similar to “pwning”/”owning”
- the activity is like “phishing” in the sense that it is automated social engineering, but en masse (i.e. a school of fish)
- the point of the hack is to “teach” people something (i.e. some news or rumor), so to speak.
It turns out that this sort of media hacking isn’t just the bailiwick of shadowy intelligence agencies and organized cybercriminals. Run-of-the-mill public relations firms like Bell Pottinger can do it. Naturally this is not considered on par with computer security crime, though there is a sense in which it is a kind of computer-mediated fraud.
Putting it all together, we can imagine a sophisticated propaganda cyberwar campaign that goes something like this: an attacker identifies targets vulnerable to persuasion, via hacks and other ways of collecting publicly or commercially available personal data. The attacker does its best to cover its tracks, to gain plausible deniability. Then it skools the targets to create the desired effect. The skooling is itself a form of hack, and so the source of that attack is also obscured. Propaganda flares about both hacks (the one for data access, and the skooling). But if enough of the targets are affected (maybe they change how they vote in an election, or don’t vote at all), then the conversion rate is good enough and the campaign is worth the investment.
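The economic logic of such a campaign can be sketched with a back-of-the-envelope expected-value calculation. All the numbers below are hypothetical illustrations, not estimates of any real operation; the point is only that, as with spam, a tiny conversion rate can pay off at scale.

```python
# Toy expected-value sketch of a hypothetical "skooling" campaign.
# Every parameter here is made up for illustration.

def campaign_value(num_targets, conversion_rate, value_per_conversion, cost):
    """Expected net payoff: conversions times their value, minus campaign cost."""
    expected_conversions = num_targets * conversion_rate
    return expected_conversions * value_per_conversion - cost

# Even swaying 1 in 10,000 targets can justify the investment at scale.
net = campaign_value(num_targets=10_000_000,
                     conversion_rate=0.0001,   # hypothetical: 1 in 10,000 swayed
                     value_per_conversion=50,  # notional value of one swayed target
                     cost=20_000)              # notional campaign cost
print(net)  # 30000.0
```

This mirrors the spam economics measured by Kanich et al. (2008), where minuscule conversion rates still cleared the (low) cost of mass distribution.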
Economics and Expertise
Of course, it would be simplistic to assume that every part of this value chain is performed by the same vertically integrated organization. Previous research on the spam value chain has shown how spam is an industry with many different required resources. Botnets are used to send mass emails; domain names are rented to host target web sites; there are even real pharmaceutical companies producing real knock-off Viagra for those who have been coaxed into buying it (see Kanich et al. 2008; Levchenko et al. 2011). Just like in a real industry, these different resources or parts of the supply chain need not all be controlled by the same organization. On the contrary, the cybercrime economy is highly segmented into many different independent actors with limited knowledge of each other, precisely because this makes it harder to catch them. So, for example, somebody who owns a botnet will rent it out to a spammer, who will then contract with a supplier.
Should we expect the skooling economy to work any differently? This depends a little on the arms race between social media bot creators and social media abuse detection and reporting. This has been a complex matter for some time, particularly because it is not always in a social media company’s interest to reject all bot activity as abuse even when this activity can be detected. Skooling is good for Twitter’s business, arguably.
But it may well be the case that the expertise in setting up influential clusters of bots to augment the power of some ideological bloc is available in a more or less mercenary way. A particular cluster of bots in social media may or may not be positioned for a specific form of ideological attack or target; in that case the asset is not as multipurpose as a standard botnet, which can run many different kinds of programs from spam to denial of service. (These are empirical questions and at the moment I don’t know the answers.)
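One side of the detection arms race mentioned above can be illustrated with a toy coordination signal: many distinct accounts posting near-identical text in a short window. This is a deliberately simplistic sketch, not any platform’s actual abuse-detection pipeline, and real bot operators vary their text precisely to evade checks like this one.

```python
# Toy heuristic for flagging coordinated posting: the same (normalized) text
# pushed verbatim by several distinct accounts. Purely illustrative.
from collections import defaultdict

def flag_coordinated(posts, min_cluster=3):
    """posts: list of (account, text) pairs. Returns the set of normalized
    texts posted by at least `min_cluster` distinct accounts."""
    accounts_by_text = defaultdict(set)
    for account, text in posts:
        accounts_by_text[text.strip().lower()].add(account)
    return {t for t, accs in accounts_by_text.items() if len(accs) >= min_cluster}

posts = [
    ("bot1", "Candidate X is a criminal!"),
    ("bot2", "Candidate X is a criminal!"),
    ("bot3", "candidate x is a criminal!"),
    ("human", "Interesting debate last night."),
]
print(flag_coordinated(posts))  # {'candidate x is a criminal!'}
```

The arms race consists of attackers paraphrasing and staggering their posts while detectors move to richer features (timing, follower graphs, account age), which is part of why detection stays in the hands of specialists.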
The point is that because of the complexity of the supply chain, attribution need not be straightforward at all. Taking for example the alleged “alt-right” social media bot clusters, these clusters could be paid for (and their agendas influenced) by a succession of different actors (including right wing Americans, Russians, and whoever else.) There is certainly the potential for false flag operations if the point of the attack is to make it appear that somebody else has transgressed.
Naturally these subtleties don’t help the public understand what is happening to them. If they are even aware of being skooled, they are lucky. If they can correctly attribute it to one of the parties involved, they are even luckier.
But to be realistic, most won’t have any idea this is happening, or happening to them.
Which brings me to my last point about this, which is the role of cybersecurity expertise in the propaganda cyberwar. Let me define cybersecurity expertise as the skill set necessary to identify and analyze hacks. Of course this form of expertise isn’t monolithic as there are many different attack vectors for hacks and understanding different physical and virtual vectors requires different skills. But knowing which skills are relevant in which contexts is for our purposes just another part of cybersecurity expertise which makes it more inscrutable to those that don’t have it. Cybersecurity expertise is also the kind of expertise you need to execute a hack (as defined above), though again this is a different variation of the skill set. I suppose it’s a bit like the Dark Arts in Harry Potter.
Because in the propaganda cyberwar the media through which people craft their sense of shared reality is vulnerable to cyberattacks, this gives both hackers and cybersecurity experts extraordinary new political powers. Both offensive and defensive security experts are likely to be for hire. There’s a marketplace for their first-order expertise, and then there’s a media marketplace for second-order reporting of the outcomes of their forensic judgments. The results of cybersecurity forensics need not be faithfully reported.
I don’t know what the endgame for this is. If I had to guess, I’d say one of two outcomes is likely. The first is that social media becomes less trusted as a source of information as the amount of skooling increases. This doesn’t mean that people would stop trusting information from online sources, but it does mean that they would pick which online sources they trust and read them specifically, instead of trusting what people they know share generally. If social media becomes less determinative of people’s discovery of and preferences for media outlets, then they are likely to pick sources that reflect their offline background instead. This gets us back to the discussion of propaganda at the beginning of this post. In this case, we would expect skooling to continue, but be relegated to the background like spamming has been. There will be people who fall prey to it, and that may be relevant for political outcomes, but it will become, like spam, a normal fact of life and no longer newsworthy. The vulnerability of the population to skooling and other propaganda cyberwarfare will be due to their out-of-band, offline education and culture.
Another possibility is that an independent, trusted, international body of cybersecurity experts becomes involved in analyzing and vetting skooling campaigns and other political hacks. This would have all the challenges of establishing scientific consensus as well as solving politicized high-profile crimes. Of course it would have enemies. But if it were trusted enough, it could become the pillar of political sanity that prevents a downslide into perpetual chaos.
I suppose there are intermediary outcomes as well where multiple poles of trusted cybersecurity experts weigh in and report on hacks in ways that reflect the capital-rich interests that hire them. Popular opinion follows these authorities as they have done for centuries. Nations maintain themselves, and so on.
Is it fair to say that propaganda cyberwar is “the new normal”? It’s perhaps a trite thing to say. For it to be true, just two things must hold. First, it has to be new: it must be happening now, as of recently. I feel I must say this obvious fact only because I recently saw “the new normal” used to describe a situation that in fact was not occurring at all. I believe the phrase du jour for that sort of writing is “fake news”.
I do believe the propaganda cyberwar is new, or at least newly prominent because of Russiagate. We are sensitized to the political use of hacks now in a way that we haven’t been before.
The second requirement is that the new situation becomes normal, ongoing and unremarkable. Is the propaganda cyberwar going to be normal? I’ve laid out what I think are the potential outcomes. In some of them, indeed it does become normal. I prefer the outcomes that result in trusted scientific institutions partnering with criminal justice investigations in an effort to maintain world peace in a more modernist fashion. I suppose we shall have to see how things go.
Kanich, C., Kreibich, C., Levchenko, K., Enright, B., Voelker, G.M., Paxson, V. and Savage, S., 2008, October. Spamalytics: An empirical analysis of spam marketing conversion. In Proceedings of the 15th ACM conference on Computer and communications security (pp. 3-14). ACM.
Levchenko, K., Pitsillidis, A., Chachra, N., Enright, B., Félegyházi, M., Grier, C., Halvorson, T., Kanich, C., Kreibich, C., Liu, H. and McCoy, D., 2011, May. Click trajectories: End-to-end analysis of the spam value chain. In Security and Privacy (SP), 2011 IEEE Symposium on (pp. 431-446). IEEE.