Digifesto

Category: cybersecurity

Autonomy as link between privacy and cybersecurity

A key aspect of the European approach to privacy and data protection regulation is that it’s rooted in the idea of individual autonomy. Unlike the American view of privacy, which suggests that privacy matters only when its violation implies some kind of substantive harm, such as reputational loss or discrimination, European law understands that personal data matters because of its relevance to a person’s self-control.

Autonomy etymologically is “self-law”. It is traditionally associated with the concept of rationality and the ability to commit oneself to duty. My colleague Jake Goldenfein argues that autonomy is the principle that one has the power to express one’s own narrative about oneself, and for that narrative to have power. Uninterpretable and unaccountable surveillance, “nudging”, manipulation, profiling, social sorting, and so on are all in a sense an attack on autonomy. They interfere with the individual’s capacity to self-rule.

It is less common to connect the idea of autonomy to cybersecurity, though here too the etymology of the words weighs in favor of it. Cyber- has its root in the Greek kybernetes: steersman, pilot, governor. To be secure means to be free from threat. So cybersecurity for a person or organization is the freedom of their self-governance from external threat. Cybersecurity is the condition of being free to control oneself, to be autonomous.

Understood in this way, privacy is just one kind of cybersecurity: the cybersecurity of the individual person. We can speak additionally of the cybersecurity of an infrastructure, such as a power grid, or of an organization, such as a bank, or of a device, such as a smartphone. What both the privacy and cybersecurity discussions implicate are questions of the ontology of the entities involved and their ability to control themselves and control each other.

A few brief notes towards “Procuring Cybersecurity”

I’m shifting research focus a bit and wanted to jot down a few notes. The context for the shift is that I have the pleasure of organizing a roundtable discussion for NYU’s Center for Cybersecurity and Information Law Institute, working closely with Thomas Streinz of NYU’s Guarini Global Law and Tech.

The context for the workshop is the steady feed of news about global technology supply chains and how they are not just relevant to “cybersecurity”, but in some respects are constitutive of cyberinfrastructure and hence the field of its security.

I’m using “global technology supply chains” rather loosely here, but this includes:

  • Transborder personal data flows as used in e-commerce
  • Software-as-a-Service (and Infrastructure-as-a-Service) being marketed internationally (including, for example, Google services used abroad)
  • Enterprise software import/export
  • Electronics manufacturing and distribution.

Many concerns about cybersecurity as a global phenomenon circulate around the imagined or actual supply chain. These are sometimes national security concerns that result in real policy, as when Australia recently banned Huawei and ZTE from supplying 5G network equipment for fear that they would provide a vector for interference by the Chinese government.

But the nationalist framing is certainly not the whole story. I’ve heard anecdotally that after the Snowden revelations, Microsoft internally began to see the U.S. government as a cybersecurity “adversary”. Corporate tech vendors naturally don’t want to be known as vectors for national surveillance, as this cuts down on their global market share.

Governments and corporations have different cybersecurity incentives and threat models. These models intersect and themselves create the dynamic cybersecurity field. For example, the Chinese government has viewed foreign software vendors as cybersecurity threats and has responded by mandating source code disclosure. But as this is a vector of potential IP theft, foreign vendors have balked, seeing the mandate as a threat itself (Ahmed and Weber, 2018). Complicating things further, a defensive “cybersecurity” measure can also serve the goal of protecting domestic technology innovation, which can be framed as providing a nationalist “cybersecurity” edge in the long run.

What, if anything, prevents a total cyberwar of all against all? One answer is trade agreements that level the playing field, or at least establish rules for the game. Another is open technology and standards, which provide an alternative field driven by the benefits of interoperability rather than proprietary interest and secrecy. Is it possible to capture any of this in an accurate model or theory?

I love having the opportunity to explore these questions, as they are at the intersection of my empirical work on software supply chains (Benthall et al., 2016; Benthall, 2017) and the theoretical work on data economics in my dissertation. My hunch for some time has been that there’s a dearth of solid economic theory for the contemporary digital economy, and this is one way of getting at that.

References

Ahmed, S., & Weber, S. (2018). China’s long game in techno-nationalism. First Monday, 23(5). 

Benthall, S., Pinney, T., Herz, J. C., Plummer, K., & Rostrup, S. (2016). An ecological approach to software supply chain risk management. In Proceedings of the 15th Python in Science Conference.

Benthall, S. (2017, September). Assessing software supply chain risk using public data. In 2017 IEEE 28th Annual Software Technology Conference (STC) (pp. 1-5). IEEE.

the make-or-buy decision (TCE) in software and cybersecurity

The paradigmatic case of transaction cost economics (TCE) is the make-or-buy decision. A firm, F, needs something, C. Do they make it in-house or do they buy it from somewhere else?

If the firm makes it in-house, they will incur some bureaucratic overhead costs in addition to the costs of production. But they will also be able to specialize C for their purposes. They can institute their own internal quality controls. And so on.

If the firm buys it on the open market from some other firm, say, G, they don’t pay the overhead costs. They do lose the benefits of specialization, and the quality controls are only those based on economic competitive pressure on suppliers.

There is an intermediate option, which is a contract between F and G which establishes an ongoing relationship between the two firms. This contract creates a field in which C can be specialized for F, and there can be assurances of quality, while the overhead is distributed efficiently between F and G.

This situation is both extremely common in business practice and not well handled by neoclassical, orthodox economics. It is the kind of case that TCE is tremendously preoccupied with.
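
To make the comparison concrete, here is a minimal sketch in Python of the three governance modes as TCE frames them. All of the cost figures, parameters, and function names are illustrative assumptions of my own, not an established model: the point is only that the in-house option pays bureaucratic overhead, the spot-market option pays hazards that grow with asset specificity, and the hybrid contract trades a contracting cost for mitigating those hazards.

    # A toy comparison of governance modes for obtaining component C,
    # loosely following the TCE framing. All cost figures are
    # illustrative assumptions, not empirical estimates.

    def make_in_house(production_cost, bureaucratic_overhead):
        """Firm F builds C itself: pays overhead, keeps full control."""
        return production_cost + bureaucratic_overhead

    def buy_on_spot_market(market_price, asset_specificity, hazard_rate):
        """Firm F buys C on the open market: no overhead, but contracting
        hazards (quality, hold-up) grow with how specialized C must be."""
        return market_price + asset_specificity * hazard_rate

    def hybrid_contract(market_price, asset_specificity, contracting_cost):
        """F and G write a relational contract: a contracting cost is paid,
        and most (here, assumed 80%) of the specificity hazard is mitigated."""
        return market_price + contracting_cost + 0.2 * asset_specificity

    options = {
        "make": make_in_house(production_cost=100, bureaucratic_overhead=40),
        "buy": buy_on_spot_market(market_price=90, asset_specificity=60, hazard_rate=1.0),
        "contract": hybrid_contract(market_price=90, asset_specificity=60, contracting_cost=20),
    }
    for mode, cost in sorted(options.items(), key=lambda kv: kv[1]):
        print(f"{mode:8s}: expected cost {cost}")

On these made-up numbers the contract comes out cheapest, which is exactly the region of the parameter space that TCE wants to explain.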


My background and research is in the software industry, which is rife with cases like these.

Developers are constantly faced with a decision to make-or-buy software components. In principle, they can develop any component themselves. In practice, this is rarely cost-effective.

In software, open source components are a prevalent solution to this problem. This can be thought of as a very strange market where all the prices are zero. The most popular open source libraries are very generic, having little “asset specificity” in TCE terms.

The lack of contract between developers and open source components/communities is sometimes seen as a source of hazard in using open source components. The recent event-stream hack, where an upstream component was injected with malicious code by a developer who had taken over maintaining the package, illustrates the problems of outsourcing technical dependencies without a contract. In this case, the quality problem is manifest as a supply chain cybersecurity problem.
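
One place this hazard becomes visible is in how dependencies are declared. The event-stream incident involved npm, but for illustration here is a minimal Python sketch (a toy of my own, not a real audit tool) that flags entries in a requirements.txt that are not pinned to an exact version, since an unpinned dependency lets an upstream maintainer, benign or otherwise, silently change what you ship.

    # Minimal supply-chain hygiene check: flag dependencies in a
    # requirements.txt that are not pinned to an exact version.
    # Illustrative only; real projects should also verify hashes
    # and audit transitive dependencies.
    import re
    import sys

    PINNED = re.compile(r"^[A-Za-z0-9._-]+\s*==\s*[\w.]+$")

    def unpinned(requirements_path):
        flagged = []
        with open(requirements_path) as f:
            for raw in f:
                line = raw.split("#")[0].strip()  # drop comments and whitespace
                if not line:
                    continue
                if not PINNED.match(line):
                    flagged.append(line)
        return flagged

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"
        for entry in unpinned(path):
            print(f"not pinned to an exact version: {entry}")

Pinning is of course only the shallowest mitigation; the deeper point is that without a contract, this kind of unilateral hygiene is most of what a downstream firm has.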

In Williamson’s analysis, these kinds of hazards are what drive firms away from purchasing on spot markets and towards contracting or in-house development. In practice, open source support companies fill the role of a responsible entity G with which firm F can build a relationship.

Is competition good for cybersecurity?

A question that keeps coming up in various forms, for example in response to recent events around the ‘trade war’ between the U.S. and China and its impact on technology companies, is whether market competition is good or bad for cybersecurity.

Here is a simple argument for why competition could be good for cybersecurity: the security of technical products is a positive quality of them, something that consumers want. Market competition is what gets producers to make higher quality products at lower cost. Therefore, competition is good for security.

Here is an argument for why competition could be bad for cybersecurity: security is a hard thing for any consumer to evaluate; since most won’t, we have an information asymmetry and therefore a ‘market for lemons’ kind of market failure. Therefore, competition is bad for security. It would be better to have a well-regulated monopoly.
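
The lemons logic can be made concrete with a toy simulation in the spirit of Akerlof’s original model. The parameters below are arbitrary assumptions: sellers know their product’s security quality, buyers value quality more than sellers do but can only pay for the average quality of what is actually offered, so the most secure products get withdrawn and the market unravels.

    # Toy Akerlof-style unraveling: buyers value security quality more
    # than sellers do, but cannot observe it, so they will only pay for
    # the *average* quality of what is actually offered. High-quality
    # sellers drop out, the average falls, and the market unravels.
    # All parameters are arbitrary assumptions.
    import random

    random.seed(0)
    qualities = [random.uniform(0, 1) for _ in range(10_000)]  # each seller knows its own quality
    BUYER_PREMIUM = 1.4  # buyers value a product 1.4x what its seller does

    price = 1.0  # buyers start out willing to pay as if quality were best-case
    for step in range(10):
        offered = [q for q in qualities if q <= price]  # a seller only sells if the price covers its value
        avg_quality = sum(offered) / len(offered)
        price = BUYER_PREMIUM * avg_quality  # buyers will only pay for expected quality
        print(f"step {step}: {len(offered)} products offered, "
              f"avg quality {avg_quality:.2f}, price falls to {price:.2f}")

Each round, the price buyers will pay falls toward zero and only the least secure products remain on offer, which is the market failure the second argument points to.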

The second argument echoes, though it doesn’t exactly parallel, some of the arguments in Pasquale’s work on Hamiltonians and Jeffersonians in technology platform regulation.

“the privatization of public functions”

An emerging theme from the conference on Trade Secrets and Algorithmic Systems was that legal scholars have become concerned about the privatization of public functions. For example, the use of proprietary risk assessment tools instead of the discretion of judges who are supposed to be publicly accountable is a problem. More generally, use of “trade secrecy” in court settings to prevent inquiry into software systems is bogus and moves more societal control into the realm of private ordering.

Many remedies were proposed. Most involved some kind of disclosure and audit to experts. The most extreme form of disclosure is making the software and, where it’s a matter of public record, training data publicly available.

It is striking to me to be encountering the call for government use of open source systems because…this is not a new issue. The conversation about federal use of open source software was alive and well over five years ago. Then, the arguments were about vendor lock-in; now, they are about accountability of AI. But the essential problem of whether core governing logic should be available to public scrutiny, and what the effects of its privatization are, has been the same.

If we are concerned with the reliability of a closed and large-scale decision-making process of any kind, we are dealing with problems of credibility, opacity, and complexity. The prospects of an efficient market for these kinds of systems are dim. These market conditions are also the conditions that shape the sustainability of open source infrastructure. Failures in sustainability are manifest as software vulnerabilities, which are one of the key reasons why governments are warned against OSS now, though measuring and comparing OSS vulnerabilities against proprietary ones is methodologically fraught.

What proportion of data protection violations are due to “dark data” flows?

“Data protection” refers to the aspect of privacy that is concerned with the use and misuse of personal data by those that process it. Though widely debated, scholars continue to converge (e.g.) on an ideal of data protection consisting of alignment between the purposes for which the data processor will use the data and the expectations of the user, along with collection limitations that reduce exposure to misuse. Through its extraterritorial enforcement mechanism, the GDPR has threatened to make these standards global.
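
As a minimal sketch of what that convergence amounts to in mechanical terms, consider a toy purpose-alignment check. The data model here is invented for illustration, not drawn from any statute: a processing event violates the ideal if its purpose falls outside what the user expected, or if it uses more data than that purpose requires.

    # Minimal sketch of the purpose-alignment ideal in data protection:
    # a processing event is acceptable only if its purpose is one the
    # user expected, and the data used is limited to what that purpose
    # needs. The data model is invented for illustration, not drawn
    # from any statute.
    from dataclasses import dataclass

    @dataclass
    class ProcessingEvent:
        attributes: set   # personal data fields actually used
        purpose: str      # stated purpose of this processing

    # What the user agreed to, and the minimum data each purpose needs.
    user_expected_purposes = {"order_fulfillment", "fraud_prevention"}
    minimal_attributes = {
        "order_fulfillment": {"name", "address"},
        "fraud_prevention": {"payment_card", "ip_address"},
    }

    def violates_data_protection(event):
        if event.purpose not in user_expected_purposes:
            return True   # purpose is misaligned with user expectations
        if not event.attributes <= minimal_attributes[event.purpose]:
            return True   # collection exceeds what the purpose requires
        return False

    print(violates_data_protection(
        ProcessingEvent({"name", "address"}, "order_fulfillment")))      # False
    print(violates_data_protection(
        ProcessingEvent({"name", "browsing_history"}, "ad_targeting")))  # True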

The implication of these trends is that there will be a global field of data flows regulated by these kinds of rules. Many of the large and important actors that process user data can be held accountable to the law. Privacy violations by these actors will be due to a failure to act within the bounds of the law that applies to them.

On the other hand, there is also cybercrime, an economy of data theft and information flows that exists “outside the law”.

I wonder what proportion of data protection violations are due to dark data flows–flows of personal data that are handled by organizations operating outside of any effective regulation.

I’m trying to draw an analogy to a global phenomenon that I know little about but which strikes me as perhaps more pressing than data protection: the interrelated problems of money laundering, off-shore finance, and dark money contributions to election campaigns. While surely oversimplifying the issue, my impression is that the network of financial flows can be divided into those that are more and less regulated by effective global law. Wealth seeks out these opportunities in the dark corners.

How much personal data flows in these dark networks? And how much is it responsible for privacy violations around the world? Versus how much is data protection effectively in the domain of accountable organizations (that may just make mistakes here and there)? Or is the dichotomy false, with truly no firm boundary between licit and illicit data flow networks?

Note on Austin’s “Cyber Policy in China”: on the emphasis on ‘ethics’

Greg Austin’s “Cyber Policy in China” (2014) has been recommended to me as a good, recent work. I am not sure what I was expecting–something about facts and numbers, how companies are being regulated, etc. Just looking at the preface, it looks like this book is about something else.

The preface frames the book in the discourse, beginning in the 20th century, about the “information society”. It explicitly mentions the UN’s World Summit on the Information Society (WSIS) as a touchstone of international consensus about what the information society is: a society “where everyone can create, access, utilise and share information and knowledge” to “achieve their full potential” in “improving their quality of life”. It is “people-centered”.

In Chinese, the word for information society is xinxi shehui. (Please forgive me: I’ve got little to no understanding of the Chinese language, and that includes not knowing how to put the appropriate diacritics into transliterations of Chinese terms.) It is related to the term “informatization” (xinxihua), which is compared to industrialization. It means “the historical process by which information technology is fully used, information resources are developed and utilized, the exchange of information and knowledge sharing are promoted, the quality of economic growth is improved, and the transformation of economic and social development is promoted”. Austin’s interesting point is that this is “less people-centered than the UN vision and more in the mould of the materialist and technocratic traditions that Chinese Communists have preferred.”

This is an interesting statement on the difference between policy articulations by the United Nations and the CCP. It does not come as a surprise.

What did come as a surprise is how Austin chooses to orient his book.

On the assumption that outcomes in the information society are ethically determined, the analytical framework used in the book revolves around ideal policy values for achieving an advanced information society. This framework is derived from a study of ethics. Thus, the analysis is not presented as a work of social science (be that political science, industry policy or strategic studies). It is more an effort to situate the values of China’s leaders within an ethical framework implied by their acceptance of the ambition to become an advanced information society.

This comes as a surprise to me because what I expected from a book titled “Cyber Policy in China” was really something more like industry policy or strategic studies. I was not ready for, and am frankly a bit disappointed by, the idea that this is really a work of applied philosophy.

Why? I do love philosophy as a discipline and have studied it carefully for many years. I’ve written and published about ethics and technological design. But my conclusion after so much study is that “the assumption that outcomes in the information society are ethically determined” is totally incorrect. I have been situated for some time in discussions of “technology ethics” and my main conclusions from them are that (a) “ethics” in this space is more often than not an attempt to universalize what are more narrow political and economic interests, and that (b) “ethics” is constantly getting compromised by economic motivations as well as the mundane difficulty of getting information technology to work as it is intended to in a narrow, functionally defined way. The real world is much bigger and more complex than any particular ethical lens can take in. Attempts to define technological change in terms of “ethics” are almost always a political maneuver of some kind, for good or for ill, that reduces the real complexity of technological development into a soundbite. A true ethical analysis of cyber policy would need to address industrial policy and strategic aspects, as this is what drives the “cyber” part of it.

The irony is that there is something terribly un-emic about this approach. By Austin’s own admission, the CCP’s cyber policy is motivated by material concerns about the distribution of technology and economic growth. Austin could have approached China’s cyber policy in the technocratic terms in which its leaders see themselves. But instead Austin’s approach is “human-centered”, with a focus on leaders and their values. I already doubt the research on anthropological grounds because of the distance between the researcher and the subjects.

So I’m not sure what to do about this book. The preface makes it sound like it belongs to a genre of scholarship that reads well, and maybe does important ideological translation work, but does not provide something like scientific knowledge of China’s cyber policy, which is what I’m most interested in. Perhaps I should move on, or take other recommendations for reading on this topic.

search engines and authoritarian threats

I’ve been intrigued by Daniel Griffin’s tweets lately, which have been situating some upcoming work of his and Deirdre Mulligan’s regarding the experience of using search engines. There is a lively discussion lately about the experience of those searching for information and the way they respond to misinformation or extremism that they discover through organic use of search engines and media recommendation systems. This is apparently how the concern around “fake news” has developed in the HCI and STS world since it became an issue shortly after the 2016 election.

I do not have much to add to this discussion directly. Consumer misuse of search engines is, to me, analogous to consumer misuse of print media. I would assume the best solution to it is education in the complete sense, and the problems with the U.S. education system are, despite all good intentions, not HCI problems.

Wearing my privacy researcher hat, however, I have become interested in a different aspect of search engines and the politics around them that is less obvious to the consumer and therefore less popularly discussed, but I fear is more pernicious precisely because it is not part of the general imaginary around search. This is the aspect concerning the tracking of search engine activity, and what it means for this activity to be in the hands not just of benevolent organizations such as Google, but also of malevolent organizations such as Bizarro World Google*.

Here is the scenario, so to speak: for whatever reason, we begin to see ourselves in a more adversarial relationship with search engines. I mean “search engine” here in the broad sense, including Siri, Alexa, Google News, YouTube, Bing, Baidu, Yandex, and all the more minor search engines embedded in web services and appliances that do something more focused than crawl the whole web. By “search engine” I mean the entire UX paradigm of the query into the vast unknown of semantic and semiotic space that contemporary information access depends on. In all these cases, the user is at a systematic disadvantage in the sense that their query is a data point among many others. The task of the search engine is to predict the desired response to the query and provide it. In return, the search engine gets the query, tied to the identity of the user. That is one piece of a larger mosaic; to be a search engine is to have a picture of a population and their interests and the mandate to categorize and understand those people.

In Western neoliberal political systems the central function of the search engine is realized as a commercial transaction facilitating other commercial transactions. My “search” is a consumer service; I “pay” for this search by giving my query to the adjoined advertising function, which allows other commercial providers to “search” for me, indirectly, through the ad auction platform. It is a market with more than just two sides. There’s the consumer who wants information and may be tempted by other information. There are the primary content providers, who satisfy consumer content demand directly. And there are secondary content providers who want to intrude on consumer attention in a systematic and successful way. The commercial, ad-enabled search engine reduces transaction costs for the consumer’s search and sells a fraction of that attentional surplus to the advertisers. Striking the right balance, the consumer is happy enough with the trade.
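
The attention-selling side of this market is implemented as an auction over the query. As a simplified sketch, setting aside the quality scores and multiple slots of real generalized second-price auctions, here is a single-slot second-price auction in Python; the bidder names and amounts are made up.

    # Single-slot second-price (Vickrey) auction: the highest bidder wins
    # the ad placement next to a query but pays the second-highest bid.
    # A simplified sketch; real search-ad auctions rank multiple slots
    # and weight bids by quality scores. Bidders and amounts are made up.

    def second_price_auction(bids):
        """bids: dict mapping advertiser -> bid. Returns (winner, price paid)."""
        if len(bids) < 2:
            raise ValueError("need at least two bidders")
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1]  # winner pays the runner-up's bid
        return winner, price

    bids_for_query = {"mattress_vendor": 2.50, "pillow_vendor": 1.75, "blog_network": 0.10}
    winner, price = second_price_auction(bids_for_query)
    print(f"query: 'best mattress' -> ad shown: {winner}, price paid: ${price:.2f}")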

Part of the success of commercial search engines is the promise of privacy in the sense that the consumer’s queries are entrusted secretly with the engine, and this data is not leaked or sold. Wise people know not to write into email things that they would not want in the worst case exposed to the public. Unwise people are more common than wise people, and ill-considered emails are written all the time. Most unwise people do not come to harm because of this because privacy in email is a de facto standard; it is the very security of email that makes the possibility of its being leaked alarming.

So too with search engine queries. “Ask me anything,” suggests the search engine, “I won’t tell”. “Well, I will reveal your data in an aggregate way; I’ll expose you to selective advertising. But I’m a trusted intermediary. You won’t come to any harms besides exposure to a few ads.”

That is all a safe assumption until it isn’t, at which point we must reconsider the role of the search engine. Suppose that, instead of living in a neoliberal democracy where the free search for information was sanctioned as necessary for the operation of a free market, we lived in an authoritarian country organized around the principle that disloyalty to the state should be crushed.

Under these conditions, the transition of a society into one that depends for its access to information on search engines is quite troubling. The act of looking for information is a political signal. Suppose you are looking for information about an extremist, subversive ideology. To do so is to flag yourself as a potential threat to the state. Suppose that you are looking for information about a morally dubious activity. To do so is to make yourself vulnerable to kompromat.

Under an authoritarian regime, curiosity and free thought are a problem, and a problem that is readily identified by one’s search queries. Further, an authoritarian regime benefits if the risks of searching for the ‘wrong’ thing are widely known, since that suppresses inquiry. Hence, the very vaguely announced and, in fact, implausible to implement Social Credit System in China does not need to exist to be effective; people need only believe it exists for it to have a chilling and organizing effect on behavior. That is the lesson of the Foucauldian panopticon: it doesn’t need a guard sitting in it to function.

Do we have a word for this function of search engines in an authoritarian system? We haven’t needed one in our liberal democracy, which perhaps we take for granted. “Censorship” does not apply, because what’s at stake is not speech but the ability to listen and learn. “Surveillance” is too general. It doesn’t capture the specific constraints on acquiring information, on being curious. What is the right term for this threat? What is the term for the corresponding liberty?

I’ll conclude with a chilling thought: when at war, all states are authoritarian, to somebody. Every state has an extremist, subversive ideology that it watches out for and tries in one way or another to suppress. Our search queries are always of strategic or tactical interest to somebody. Search engine policies are always an issue of national security, in one way or another.

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics” <– My dissertation

In the last two weeks, I’ve completed, presented, and filed my dissertation, and commenced as a doctor of philosophy. In a word, I’ve PhinisheD!

The title of my dissertation is attention-grabbing, inviting, provocative, and impressive:

“Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics”

If you’re reading this, you are probably wondering, “How can I drop everything and start reading that hot dissertation right now?”

Look no further: here is a link to the PDF.

You can also check out this slide deck from my “defense”. It covers the highlights.

I’ll be blogging about this material as I break it out into more digestible forms over time. For now, I’m obviously honored by any interest anybody takes in this work and happy to answer questions about it.

Propaganda cyberwar: the new normal?

Reuters reports on the Washington Post’s report, citing U.S. intelligence officials, that the UAE arranged for the hacking of Qatari government sites to post “fiery but false” quotes attributed to Qatar’s emir. These quotes were then used by Saudi Arabia, the UAE, Egypt, and Bahrain to justify cutting diplomatic and transport ties with Qatar.

Qatar says the quotes from the emir are fake, posted by hackers. U.S. intelligence officials now say (to the Post) that they have information about the UAE discussing the hacks before they occurred.

The UAE denies the hacks, saying the reports of them are false, and argues that what is politically relevant is Qatar’s Islamist activities.

What a mess.

One can draw a comparison between these happenings in the Middle East and the U.S.’s Russiagate.

The comparison is difficult because any attempt to summarize what is going on with Russiagate runs into the difficulty of aligning with the narrative of one party or another who is presently battling for the ascendancy of their interpretation. But for clarity let me say that by Russiagate I mean the complex of allegations and counterclaims including: that the Russian government, or some Russians who were not associated with the government, or somebody else hacked the DNC and leaked their emails to influence the 2016 election (or its perceived legitimacy); that the Russian government (or maybe somebody else…) propped up alt-right media bots to spread “fake news” to swing voters; that swing voters were identified through the hacking of election records; that some or all of these allegations are false and promoted by politicized media outlets; that if the allegations are true, their impact on the 2016 election was insufficient to have changed the outcome (hence not delegitimizing it); the diplomatic spat over recreational compounds used by Russians in the U.S. and by the U.S. in Russia, now bound up with the fact that the outgoing administration wanted to reprimand Russia for alleged hacks that allegedly led to its party’s loss of control of the government….

Propaganda

It is dizzying. In both the Qatari and U.S. cases, without very privileged inside knowledge we are left with vague and uncertain impressions of a new condition:

  • the relentless rate at which “new developments” in these stories are made available, recapitulated, or commented on
  • the weakness with which they are confirmed or denied (because they are due to anonymous officials or unaccountable leaks)
  • our dependence on trusted authorities for our understanding of the problem when that trust is constantly being eroded
  • the variety of positions taken on any particular event, and the accessibility of these diverse views

Is any of this new? Maybe it’s fair to say it’s “increasing”, as the Internet has continuously inflated the speed and variety and scale of everything in the media, or seemed to.

I have no wish to recapitulate the breathless hyperbole about how media is changing “online”; this panting has been going on continuously for fifteen years at least. But recently I did see what seemed like a new insight among the broader discussion. Once, we were warned against the dangers of filter bubbles, the technologically reinforced perspectives we take when social media and search engines are trained on our preferences. Buzzfeed admirably tried to design a feature to get people Out of Their Bubble, but that got an insightful reaction from Rachel Haser:

In my experience, people understand that other opinions exist, and what the opinions are. What people don’t understand is where the opinions come from, and they don’t care to find out for themselves.

In other words: it is not hard for somebody to get out of their own bubble. Somebody else’s opinion is just a click or a search away. Among the narrow dichotomies of the U.S.’s political field, I’m constantly being told by the left-wing media who the right-wing pundits are and what they are saying, and why they are ridiculous. The right-wing media is constantly reporting on what left-wing people are doing and why they are ridiculous. If I ever want to verify for myself I can simply watch a video or read an article from a different point of view.

None of this access to alternative information will change my mind because my habitus is already set by my life circumstances and offline social and institutional relationships. The semiotic environment does not determine my perspective; the economic environment does. What the semiotic environment provides is, one way or another, an elaborate system of propaganda which reflects the possible practical and political alliances that are available for the deployment of capital. Most of what is said in “the media” is true; most of what is said in “the media” is spun; for the purpose of this post and to distinguish it from responsible scientific research or reporting of “just the facts”, which does happen (!), I will refer to it generically as propaganda.

Propaganda is obviously not new. Propaganda on the Internet is as new as the Internet. As the Internet expands (via smartphones and “things”), so too does propaganda. This is one part of the story here.

The second part of the story is all the hacks.

Hacks

What are hacks? Technically, a hack can be any of many different kinds of interventions into a (socio)technical system that create behavior unexpected by the designer or owner of the system. It is a use or appropriation by somebody (the hacker) of somebody else’s technology, for the former’s advantage. Some example things that hacks can accomplish include: taking otherwise secret data, modifying data, and causing computers or networks to break down.

“CIA”, by Randall Munroe

There are interesting reasons why hacks have special social and political relevance. One important thing about computer hacking is that it requires technical expertise to understand how it works. This puts the analysis of a hack, and especially the attribution of the hack to some actor, in the hands of specialists. In this sense, “solving” a hack is like “solving” a conventional crime. It requires forensic experts, detectives who understand the motivation of potential suspects, and so on.

Another thing about hacks over the Internet is that they can come from “anywhere”, because Internet. This makes it harder to find hackers and also makes hacks convenient tools for transnational action. It has been argued that as the costs of physical violent war increase with an integrated global economy, the use of cyberwar as a softer alternative will rise.

In the cases described at the beginning of this post, hacks play many different roles:

  • a form of transgression, requiring apology, redress, or retaliation
  • a kind of communication, sending a message (perhaps true, or perhaps false) to an audience
  • the referent of communication, what is being discussed, especially with respect to its attribution (which is necessary for apology, redress, retaliation)

The difficulty with reporting about hacks, at least as far as reporting to the nonexpert public goes, is that every hack raises the specter of uncertainty about where it came from, whether it was as significant as the reporters say, whether the suspects have been framed, and so on.

If a propaganda war is a fire, cyberwar throws gasoline on the flame, because all the political complexity of the media can fracture the narrative around each hack until it too goes up in meaningless postmodern smoke.

Skooling?

I am including, by the way, the use of bots to promote content in social media as a “hack”. I’m blending slightly two meanings of “hack”: the more benign “MIT” sense of a hack as a creative technical solution to a problem, and the more specific sense of one who circumvents computer security. Since the latter sense of “hack” has expanded to include social engineering efforts such as phishing, the automated influence of social media to present a false or skewed narrative as true seems to also fit here.

I have to say that this sort of media hacking–creating bots to spread “fake news” and so on–doesn’t have a succinct name yet, so I propose “skooling” or “sk00ling”, since

  • it’s a phrase that means something similar to “pwning”/”owning”
  • the activity is like “phishing” in the sense that it is automated social engineering, but en masse (i.e. a school of fish)
  • the point of the hack is to “teach” people something (i.e. some news or rumor), so to speak.

It turns out that this sort of media hacking isn’t just the bailiwick of shadowy intelligence agencies and organized cybercriminals. Run-of-the-mill public relations firms like Bell Pottinger can do it. Naturally this is not considered on par with computer security crime, though there is a sense in which it is a kind of computer-mediated fraud.

Putting it all together, we can imagine a sophisticated propaganda cyberwar campaign that goes something like this: an attacker identifies targets vulnerable to persuasion, via hacks and other ways of collecting publicly or commercially available personal data. The attacker does its best to cover its tracks to maintain plausible deniability. Then they skool the targets to create the desired effect. The skooling is itself a form of hack, and so the source of that attack is also obscured. Propaganda flares about both hacks (the one for data access, and the skooling). But if enough of the targets are affected (maybe they change how they vote in an election, or don’t vote at all) then the conversion rate is good enough and worth the investment.

Economics and Expertise

Of course, it would be simplistic to assume that every part of this value chain is performed by the same vertically integrated organization. Previous research on the spam value chain has shown how spam is an industry with many different required resources. Bot-nets are used to send mass emails; domain names are rented to host target web sites; there are even real pharmaceutical companies producing real knock-off viagra for those who have been coaxed into buying it (see Kanich et al. 2008; Levchenko et al. 2011). Just as in a real industry, these different resources, or parts of the supply chain, need not all be controlled by the same organization. On the contrary, the cybercrime economy is highly segmented into many different independent actors with limited knowledge of each other precisely because this makes it harder to catch them. So, for example, somebody who owns a botnet will rent out that botnet to a spammer, who will then contract with a supplier.

Should we expect the skooling economy to work any differently? This depends a little on the arms race between social media bot creators and social media abuse detection and reporting. This has been a complex matter for some time, particularly because it is not always in a social media company’s interest to reject all bot activity as abuse even when this activity can be detected. Skooling is good for Twitter’s business, arguably.

But it may well be the case that the expertise in setting up influential clusters of bots to augment the power of some ideological bloc may be available in a more or less mercenary way. A particular cluster of bots in social media may or may not be positioned for a specific form of ideological attack or target; in that case the asset is not as multipurpose as a standard botnet, which can run many different kinds of programs from spam to denial of service. (These are empirical questions and at the moment I don’t know the answers.)

The point is that because of the complexity of the supply chain, attribution need not be straightforward at all. Taking for example the alleged “alt-right” social media bot clusters, these clusters could be paid for (and their agendas influenced) by a succession of different actors (including right wing Americans, Russians, and whoever else.) There is certainly the potential for false flag operations if the point of the attack is to make it appear that somebody else has transgressed.

Naturally these subtleties don’t help the public understand what is happening to them. If they are aware of being skooled, that would be lucky. If they can correctly attribute it to one of the parties involved, that is even luckier.

But to be realistic, most won’t have any idea this is happening, or happening to them.

Which brings me to my last point about this, which is the role of cybersecurity expertise in the propaganda cyberwar. Let me define cybersecurity expertise as the skill set necessary to identify and analyze hacks. Of course this form of expertise isn’t monolithic as there are many different attack vectors for hacks and understanding different physical and virtual vectors requires different skills. But knowing which skills are relevant in which contexts is for our purposes just another part of cybersecurity expertise which makes it more inscrutable to those that don’t have it. Cybersecurity expertise is also the kind of expertise you need to execute a hack (as defined above), though again this is a different variation of the skill set. I suppose it’s a bit like the Dark Arts in Harry Potter.

Because in the propaganda cyberwar the media through which people craft their sense of shared reality is vulnerable to cyberattacks, this gives both hackers and cybersecurity experts extraordinary new political powers. Both offensive and defensive security experts are likely to be for hire. There’s a marketplace for their first-order expertise, and then there’s a media marketplace for second-order reporting of the outcomes of their forensic judgments. The results of cybersecurity forensics need not be faithfully reported.

Outcomes

I don’t know what the endgame for this is. If I had to guess, I’d say one of two outcomes is likely. The first is that social media becomes more untrusted as a source of information as the amount of skooling increases. This doesn’t mean that people would stop trusting information from on-line sources, but it does mean that they would pick which on-line sources they trust and read them specifically instead of trusting what people they know share generally. If social media gets less determinative of people’s discovery and preferences for media outlets, then they are likely to pick sources that reflect their off-line background instead. This gets us back into the discussion of propaganda in the beginning of this post. In this case, we would expect skooling to continue, but be relegated to the background like spamming has been. There will be people who fall prey to it and that may be relevant for political outcomes, but it will become, like spam, a normal fact of life and no longer newsworthy. The vulnerability of the population to skooling and other propaganda cyberwarfare will be due to their out-of-band, offline education and culture.

Another possibility is that an independent, trusted, international body of cybersecurity experts becomes involved in analyzing and vetting skooling campaigns and other political hacks. This would have all the challenges of establishing scientific consensus as well as solving politicized high-profile crimes. Of course it would have enemies. But if it were trusted enough, it could become the pillar of political sanity that prevents a downslide into perpetual chaos.

I suppose there are intermediary outcomes as well where multiple poles of trusted cybersecurity experts weigh in and report on hacks in ways that reflect the capital-rich interests that hire them. Popular opinion follows these authorities as they have done for centuries. Nations maintain themselves, and so on.

Is it fair to say that propaganda cyberwar is “the new normal”? It’s perhaps a trite thing to say. For it to be true, just two things must be true. First, it has to be new: it must be happening now, as of recently. I feel I must say this obvious fact only because I recently saw “the new normal” used to describe a situation that in fact was not occurring at all. I believe the phrase du jour for that sort of writing is “fake news”.

I do believe the propaganda cyberwar is new, or at least newly prominent because of Russiagate. We are sensitized to the political use of hacks now in a way that we haven’t been before.

The second requirement is that the new situation becomes normal, ongoing and unremarkable. Is the propaganda cyberwar going to be normal? I’ve laid out what I think are the potential outcomes. In some of them, indeed it does become normal. I prefer the outcomes that result in trusted scientific institutions partnering with criminal justice investigations in an effort to maintain world peace in a more modernist fashion. I suppose we shall have to see how things go.

References

Kanich, C., Kreibich, C., Levchenko, K., Enright, B., Voelker, G.M., Paxson, V. and Savage, S., 2008, October. Spamalytics: An empirical analysis of spam marketing conversion. In Proceedings of the 15th ACM conference on Computer and communications security (pp. 3-14). ACM.

Levchenko, K., Pitsillidis, A., Chachra, N., Enright, B., Félegyházi, M., Grier, C., Halvorson, T., Kanich, C., Kreibich, C., Liu, H. and McCoy, D., 2011, May. Click trajectories: End-to-end analysis of the spam value chain. In 2011 IEEE Symposium on Security and Privacy (pp. 431-446). IEEE.