Digifesto

Category: cybersecurity

WannaCry as an example of the insecurity of legacy systems

CLTC’s Steve Weber and Betsy Cooper have written an Op-Ed about the recent WannaCry epidemic. The purpose of the article is clear: to argue that a possible future scenario CLTC developed in 2015, in which digital technologies become generally distrusted rather than trusted, is relevant and prescient. They then go on to elaborate on this scenario.

The problem with the Op-Ed is that the connection between WannaCry and this scenario is spurious. Here’s how they make the connection:

The latest widespread ransomware attack, which has locked up computers in nearly 150 countries, has rightfully captured the world’s attention. But the focus shouldn’t be on the scale of the attack and the immediate harm it is causing, or even on the source of the software code that enabled it (a previous attack against the National Security Agency). What’s most important is that British doctors have reverted to pen and paper in the wake of the attacks. They’ve given up on insecure digital technologies in favor of secure but inconvenient analog ones.

This “back to analog” moment isn’t just a knee-jerk, stopgap reaction to a short-term problem. It’s a rational response to our increasingly insecure internet, and we are going to see more of it ahead.

If you look at the article that they link to from The Register, which is the only empirical evidence they use to make their case, it does indeed reference the use of pen and paper by doctors.

Doctors have been reduced to using pen and paper, and closing A&E to non-critical patients, amid the tech blackout. Ambulances have been redirected to other hospitals, and operations canceled.

There is a disconnect between what the article says and what Weber and Cooper are telling us. The article is quite clear that doctors are using pen and paper amid the tech blackout. Which is to say: doctors are using pen and paper because their computers are currently locked up by ransomware.

Does that mean they have “given up on insecure digital technologies in favor of secure but inconvenient analog ones”? No. It means that while they wait to be able to use their computers again, they have no recourse but pen and paper. Does the evidence warrant the claim that this “back to analog” moment “isn’t just a knee-jerk, stopgap reaction to a short-term problem” but rather “a rational response to our increasingly insecure internet”? No, not at all.

In their eagerness to show the relevance of their scenario, Weber and Cooper rush to say where the focus should be (on CLTC’s future scenario planning) and ignore the specifics of WannaCry, most of which do not help their case. For example, the vulnerability exploited by WannaCry had been publicly known for two months before the attack, and Microsoft had already published a patch for it. The systems that were still vulnerable either had not applied the software update or were running an unsupported older version of Windows.

This paints a totally different picture of the problem than Weber and Cooper provide. It’s not that “new” internet infrastructure is insecure and “old” technologies are proven. Much of computing and the internet is already “old”. But there’s a life cycle to technology. “New” systems are more resilient (able to adapt to an attack or a discovered vulnerability) and are smaller targets. Older legacy systems with a large installed base, like Windows 7, become more globally vulnerable when their weaknesses are discovered and not addressed. And if they are in widespread use, that presents a bigger target.

This isn’t just a problem for Windows. In this research paper, we show how similar principles are at work in the Python ecosystem. The riskiest projects are precisely those that are old, assumed to be secure, but no longer being actively maintained while the technical environment changes around them. The evidence of the WannaCry case further supports this view.
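To make that concrete, here is a minimal sketch (not code from the paper; the two-year threshold and the package names are illustrative assumptions) of how one might flag potentially unmaintained dependencies by asking PyPI for the date of each package’s latest release:

```python
# A sketch only: flag dependencies whose most recent PyPI release is old,
# on the theory that stale but widely-installed packages carry the most risk.
# The two-year threshold and the package list are illustrative assumptions.
import json
import urllib.request
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=2 * 365)

def latest_release(package):
    """Return the most recent upload time for a package on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        releases = json.load(resp)["releases"]
    uploads = [
        datetime.fromisoformat(f["upload_time"])
        for files in releases.values()
        for f in files
    ]
    return max(uploads) if uploads else None

for pkg in ["requests", "flask"]:  # replace with your actual dependency list
    last = latest_release(pkg)
    if last and datetime.utcnow() - last > STALE_AFTER:
        print(f"{pkg}: no release since {last:%Y-%m-%d}; review maintenance status")
```

Release recency is of course only a proxy for maintenance, but it is exactly the kind of signal that tends to go unchecked while a dependency is “assumed to be secure”.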

Loving Tetlock’s Superforecasting: The Art and Science of Prediction

I was a big fan of Philip Tetlock’s Expert Political Judgment (EPJ). I read it thoroughly; in fact a book review of it was my first academic publication. It was very influential on me.

EPJ is a book that is troubling to many political experts because it basically says that most so-called political expertise is bogus, and that what isn’t bogus is fairly limited. It makes this argument with far more meticulous data collection and argumentation than I can do justice to here. I found it completely persuasive and inspiring. It wasn’t until I got to Berkeley that I met people who had vivid negative emotional reactions to this work. They seem mainly to have been political experts who do not like having their expertise assessed in terms of its predictive power.

Superforecasting: The Art and Science of Prediction (2016) is a much more accessible book that summarizes the main points from EPJ and then discusses the results of Tetlock’s Good Judgment Project, which was his answer to an IARPA challenge in forecasting political events.

Much of the book is an interesting history of the United States Intelligence Community (IC) and the way its attitudes towards political forecasting have evolved. In particular, the shock of the failed predictions about Weapons of Mass Destruction that led to the Iraq War was a direct cause of IARPA’s interest in forecasting and its funding of the Good Judgment Project, despite the possibility that the project’s results would be politically challenging. IARPA comes out looking like a very interesting and intellectually honest organization solving real problems for the people of the United States.

Reading this has been timely for me because: (a) I’m now doing what could be broadly construed as “cybersecurity” work, professionally, (b) my funding is coming from U.S. military and intelligence organizations, and (c) the relationship between U.S. intelligence organizations and cybersecurity has been in the news a lot lately in a very politicized way because of the DNC hacking aftermath.

Since so much of Tetlock’s work is really just about applying mathematical statistics to the psychological and sociological problem of developing teams of forecasters, I see the root of it as the same mathematical theory one would use for any scientific inference. Cybersecurity research, to the extent that it uses sound scientific principles (which it must, since it’s all about the interaction between society, scientifically designed technology, and risk), is grounded in these same principles. And at its best the U.S. intelligence community lives up to this logic in its public service.
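To give a sense of how simple that statistical core is: the Good Judgment Project scores forecasters with the Brier score, the mean squared difference between probability forecasts and what actually happened. A minimal sketch:

```python
# A sketch of the Brier score, the scoring rule Tetlock uses to evaluate
# probabilistic forecasters. Lower is better: 0.0 is a perfect forecaster,
# perpetually guessing 50/50 on yes/no questions scores 0.5, and 2.0 is
# the worst possible score.
def brier_score(forecasts, outcomes):
    """forecasts: one probability vector per question (entries sum to 1).
    outcomes: for each question, the index of the outcome that occurred."""
    total = 0.0
    for probs, actual in zip(forecasts, outcomes):
        total += sum(
            (p - (1.0 if i == actual else 0.0)) ** 2
            for i, p in enumerate(probs)
        )
    return total / len(forecasts)

# 90/10 on a question that resolved "yes", 30/70 on one that resolved "no":
print(brier_score([[0.9, 0.1], [0.3, 0.7]], [0, 1]))  # ~0.1
```

The metric itself is elementary; Tetlock’s contribution is the empirical study of which people, habits of mind, and team structures reliably achieve low scores.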

The needs of the intelligence community with respect to cybersecurity can be summed up in one word: rationality. Tetlock’s work is a wonderful empirical study in rationality that’s a must-read for anybody interested in cybersecurity policy today.

Intersecting agencies and cybersecurity #RSAC

A recurring theme in my recent reading (Beniger‘s The Control Revolution, Horkheimer‘s Eclipse of Reason, and Norbert Wiener’s work on cybernetics) is the problem of reconciling two kinds of explanation of how-things-came-to-be:

  • Natural selection. Here a number of autonomous, uncoordinated agents with some exogenously given variability encounter obstacles that limit their reproduction or survival. The fittest survive. Adaptation is due to random exploration at the level of the exogenous specification of the agent, if at all. In unconstrained cases, randomness rules and there is no logic to reality.
  • Purpose. Here there is a teleological explanation based on a goal some agent has “in mind”. The goal is coupled with a controlling mechanism that influences or steers outcomes towards that goal. Adaptation is part of the endogenous process of agency itself.

Reconciling these two kinds of description is not easy. A point Beniger makes is that differences between social theories in the 20th century can be read as differences in the divisions of where one demarcates agents within a larger system.


This week at the RSA Conference, Amit Yoran, President of RSA, gave a keynote speech about the change in mindset of security professionals. Just the day before I had attended a talk on “Security Basics” to reacquaint myself with the field. In it, there was a lot of discussion of how a security professional needs to establish “the perimeter” of their organization’s network. In this framing, a network is like the nervous system of the macro-agent that is an organization. The security professional’s role is to preserve the integrity of the organization’s information systems. Even in this talk on “the basics”, the speaker acknowledged that a determined attacker will always get into your network because of the limitations of the affordances of defense, the economic incentives of attackers, and the constantly “evolving” nature of the technology. I was struck in particular by this speaker’s detachment from the arms race of cybersecurity. The goal-driven adversariality of the agents involved in cybersecurity was taken as a given; as a consequence, the system evolves through a process of natural selection. The role of the security professional is to adapt to an exogenously-given ecosystem of threats in a purposeful way.

Amit Yoran’s proposed escape from the “Dark Ages” of cybersecurity got away from this framing in at least one way. For Yoran, thinking about the perimeter is obsolete. Because the attacker will always be able to infiltrate, the emphasis must be on monitoring normal behavior within your organization–say, which resources are accessed and how often–and detecting deviance through pervasive surveillance and fast computing. Yoran’s vision replaces the “perimeter” with an all-seeing eye. The organization that one can protect is the organization that one can survey as if it was exogenously given, so that changes within it can be detected and audited.
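As a toy illustration of the monitoring posture Yoran is advocating (the data, resource names, and threshold below are entirely hypothetical, not anything he presented), the logic reduces to establishing a per-resource baseline and flagging sharp deviations from it:

```python
# A toy sketch of baseline-and-deviation monitoring: learn what "normal"
# access frequency looks like per resource, then flag large deviations.
# The data, resource names, and z-score threshold are all hypothetical.
from statistics import mean, stdev

def build_baseline(history):
    """history: {resource: [daily access counts]} -> {resource: (mu, sigma)}"""
    return {r: (mean(c), stdev(c)) for r, c in history.items()}

def flag_anomalies(baseline, today, z_threshold=3.0):
    """Return resources whose count today deviates beyond z_threshold sigmas."""
    return [
        r for r, count in today.items()
        if baseline[r][1] > 0
        and abs(count - baseline[r][0]) / baseline[r][1] > z_threshold
    ]

history = {"payroll-db": [12, 9, 11, 10, 13], "hr-wiki": [40, 38, 42, 41, 39]}
today = {"payroll-db": 240, "hr-wiki": 40}  # payroll-db access spikes
print(flag_anomalies(build_baseline(history), today))  # ['payroll-db']
```

The hard part, of course, is doing this at the scale and dimensionality of a real organization, which is where Yoran’s pervasive surveillance and fast computing come in.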

We can speculate about how an organization’s members will feel about such pervasive monitoring and auditing of activity. The interests of the individual members of a (sociotechnical) organization, the interests of the organization as a whole, and the interests of sub-organizations within an organization can be either in accord or in conflict. An “adversary” within an organization can be conceived of as an agent within a supervening organization that acts against the latter’s interests. Like a cancer.

But viewing organizations purely hierarchically like this leaves something out. Just as human beings are capable of more complex, high-dimensional, and conflicted motivations than any one of the organs or cells in our bodies, so too should we expect the interests of organizations to be wide and perhaps beyond the understanding of anyone within it. That includes the executives or the security professionals, which RSA Conference blogger Tony Kontzer suggests should be increasingly one and the same. (What security professional would disagree?)

What if the evolution of cybersecurity results in the evolution of a new kind of agency?

As we start to think of new strategies for information-sharing between cybersecurity-interested organizations, we have to consider how agents supervene on other agents in possibly surprising ways. An evolutionary mechanism may itself be part of the machinery of purposive control used by a super-agent. For example, an executive might run two competing security teams and reward them separately. A nation might have an enormous ecosystem of security companies within its perimeter (…) that it plays off of each other to improve the robustness of its internal economy, providing for it the way kombucha drinkers foster their own vibrant ecosystem of gut fauna.

Still stranger, we might discover ways that purposive agents intersect at the neuronal level, like Siamese twins. Indeed, this is what happens when two companies share generic networking infrastructure. Such mereological complexity is sure to affect the incentives of everyone involved.

Here’s the rub: every seam in the topology of agency, at every level of abstraction, is another potential vector of attack. If our understanding of the organizational agent becomes more complex as we abandon the idea of the organizational perimeter, that complexity provides new ways to infiltrate. Or, to put it in the Enlightened terms more aligned with Yoran’s vision, the complexity of the system, with its multitudinous and intersecting purposive agents, will become harder and harder to watch for infiltrators.

If a security-driven agent is driven by its need to predict and audit activity within itself, then those agents will tolerate a level of complexity within themselves that is bounded by their own capacity to compute. This point was driven home clearly by Dana Wolf’s excellent talk on Monday, “Security Enforcement (re)Explained”. She outlined several ways that computationally difficult cybersecurity functions–such as anti-virus and firewall technology–are being moved to the Cloud, where the elasticity of compute resources theoretically makes it easier to cope with these resource demands. I’m left wondering: does the end-game of cybersecurity come down to the market dynamics of computational asymmetry?

This blog post has been written for research purposes associated with the Center for Long-Term Cybersecurity.