Digifesto

intersecting agencies and cybersecurity #RSAC

A recurring theme in my reading lately (such as Beniger’s The Control Revolution, Horkheimer’s Eclipse of Reason, and Norbert Wiener’s work on cybernetics) is the problem of reconciling two kinds of explanation of how things came to be:

  • Natural selection. Here a number of autonomous, uncoordinated agents with some exogenously given variability encounter obstacles that limit their reproduction or survival. The fittest survive. Adaptation, if it occurs at all, is due to random exploration at the level of the agent’s exogenous specification. In unconstrained cases, randomness rules and there is no logic to reality.
  • Purpose. Here there is a teleological explanation based on a goal some agent has “in mind”. The goal is coupled with a controlling mechanism that influences or steers outcomes towards that goal. Adaptation is part of the endogenous process of agency itself.

Reconciling these two kinds of description is not easy. A point Beniger makes is that the differences between twentieth-century social theories can be read as differences in where one draws the boundaries of agents within a larger system.


This week at the RSA Conference, Amit Yoran, President of RSA, gave a keynote speech about the change in mindset required of security professionals. Just the day before, I had attended a talk on “Security Basics” to reacquaint myself with the field. In it there was a lot of discussion of how a security professional needs to establish “the perimeter” of their organization’s network. In this framing, a network is like the nervous system of the macro-agent that is an organization, and the security professional’s role is to preserve the integrity of the organization’s information systems. Even in this talk on “the basics”, the speaker acknowledged that a determined attacker will always get into your network, given the limited affordances of defense, the economic incentives of attackers, and the constantly “evolving” nature of the technology. I was struck in particular by this speaker’s detachment from the arms race of cybersecurity. The goal-driven adversariality of the agents involved was taken as a given; as a consequence, the system evolves through a process of natural selection. The role of the security professional is to adapt purposefully to an exogenously given ecosystem of threats.

Amit Yoran’s proposed escape from the “Dark Ages” of cybersecurity departs from this framing in at least one way. For Yoran, thinking about the perimeter is obsolete. Because the attacker will always be able to infiltrate, the emphasis must be on monitoring normal behavior within your organization–say, which resources are accessed and how often–and detecting deviance through pervasive surveillance and fast computing. Yoran’s vision replaces the “perimeter” with an all-seeing eye. The organization one can protect is the organization one can survey as if it were exogenously given, so that changes within it can be detected and audited.
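As a concrete (if toy) illustration of the baseline-and-deviation approach Yoran describes: the sketch below builds a frequency baseline of which users access which resources and flags departures from it. The event format, the counts, and the z-score threshold are all my own illustrative assumptions, not anything from the keynote.

```python
from collections import Counter
from statistics import stdev

# Hypothetical access log: (user, resource) pairs observed during a baseline period.
baseline_events = [
    ("alice", "payroll_db"), ("alice", "payroll_db"), ("alice", "wiki"),
    ("bob", "wiki"), ("bob", "code_repo"), ("bob", "code_repo"),
]

def build_baseline(events):
    """Count how often each (user, resource) pair appears in normal traffic."""
    return Counter(events)

def deviations(baseline, window_events, threshold=3.0):
    """Flag pairs whose activity in a new window departs sharply from baseline.

    Uses a crude z-score over the baseline counts; a real system would model
    time, seasonality, and peer groups rather than raw frequencies.
    """
    counts = list(baseline.values())
    sigma = stdev(counts) if len(counts) > 1 else 1.0
    window = Counter(window_events)
    flagged = []
    for pair, observed in window.items():
        expected = baseline.get(pair, 0)
        z = (observed - expected) / (sigma or 1.0)
        if pair not in baseline or abs(z) > threshold:
            flagged.append((pair, observed, expected))
    return flagged

baseline = build_baseline(baseline_events)
# A new observation window: bob suddenly hammers the payroll database.
window = [("bob", "payroll_db")] * 10
for (user, resource), observed, expected in deviations(baseline, window):
    print(f"deviation: {user} -> {resource} (saw {observed}, expected ~{expected})")
```

Even the toy version makes Yoran’s tradeoff visible: what counts as deviance depends entirely on how much normal behavior the organization can afford to record and model.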

We can speculate about how an organization’s members will feel about such pervasive monitoring and auditing of activity. The interests of the individual members of a (sociotechnical) organization, the interests of the organization as a whole, and the interests of sub-organizations within an organization can be either in accord or in conflict. An “adversary” within an organization can be conceived of as an agent within a supervening organization that acts against the latter’s interests. Like a cancer.

But viewing organizations purely hierarchically like this leaves something out. Just as human beings are capable of more complex, high-dimensional, and conflicted motivations than any one of the organs or cells in our bodies, so too should we expect the interests of organizations to be broad and perhaps beyond the understanding of anyone within them. That includes the executives and the security professionals, who RSA Conference blogger Tony Kontzer suggests should increasingly be one and the same. (What security professional would disagree?)

What if the evolution of cybersecurity results in the evolution of a new kind of agency?

As we start to think of new strategies for information-sharing between cybersecurity-interested organizations, we have to consider how agents supervene on other agents in possibly surprising ways. An evolutionary mechanism may be part of the very mechanism of purposive control used by a super-agent. For example, an executive might run two competing security teams and reward them separately. A nation might maintain an enormous ecosystem of security companies within its perimeter (…) that it plays off against each other to improve the robustness of its internal economy, providing for it the way kombucha drinkers foster their own vibrant ecosystem of gut fauna.

Still stranger, we might discover ways that purposive agents intersect at the neuronal level, like Siamese twins. Indeed, this is what happens when two companies share generic networking infrastructure. Such mereological complexity is sure to affect the incentives of everyone involved.

Here’s the rub: every seam in the topology of agency, at every level of abstraction, is another potential vector of attack. If our understanding of the organizational agent becomes more complex as we abandon the idea of the organizational perimeter, that complexity provides new ways to infiltrate. Or, to put it in the Enlightened terms more aligned with Yoran’s vision, the complexity of the system, with its multitudinous and intersecting purposive agents, will become harder and harder to watch for infiltrators.

If a security-driven agent is driven by its need to predict and audit activity within itself, then such agents will permit within themselves only a level of complexity bounded by their own capacity to compute. This point was driven home clearly by Dana Wolf’s excellent talk on Monday, “Security Enforcement (re)Explained”. She outlined several ways that computationally demanding cybersecurity functions–such as anti-virus and firewall technology–are being moved to the Cloud, where the elasticity of compute resources theoretically makes it easier to cope with these resource demands. I’m left wondering: does the endgame of cybersecurity come down to the market dynamics of computational asymmetry?
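A back-of-envelope way to see that asymmetry (my own framing, not Wolf’s): suppose every pairwise seam between sub-agents must be watched. Then monitoring cost grows roughly quadratically with organizational complexity, while an attacker needs only one unwatched seam. The seam model and the budgets below are illustrative assumptions.

```python
# Back-of-envelope: how organizational complexity outruns a fixed audit budget.
# The quadratic seam count and the per-seam cost are illustrative assumptions.

def seams(n_agents: int) -> int:
    """Number of pairwise seams between sub-agents: n choose 2."""
    return n_agents * (n_agents - 1) // 2

def max_auditable_agents(budget: int, cost_per_seam: int = 1) -> int:
    """Largest organization whose every seam fits within the audit budget."""
    n = 1
    while seams(n + 1) * cost_per_seam <= budget:
        n += 1
    return n

for budget in (10, 1_000, 100_000):
    print(f"budget {budget:>7}: can fully audit {max_auditable_agents(budget)} sub-agents")
```

Elastic cloud resources raise the budget, but they do not change the shape of the curve; hence the suspicion that the endgame is a market contest over compute.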

This blog post has been written for research purposes associated with the Center for Long-Term Cybersecurity.

Beniger on anomie and technophobia

The School of Information Classics group has moved on to a new book: James Beniger’s 1986 The Control Revolution: Technological and Economic Origins of the Information Society. I’m just a few chapters in, but already it is a lucid and compelling account of how the societal transformations due to information technology that are announced bewilderingly every decade are an extension of a process that began in the Industrial Revolution and simply has not stopped.

It’s a dense book with a lot of interesting material in it. One early section discusses Durkheim’s ideas about the division of labor and its effect on society.

In a nutshell, the argument is that with industrialization, barriers to transportation and communication break down and local markets merge into national and global ones. This induces cycles of market disruption: because producers and consumers cannot communicate directly, producers must “trust to chance” by embracing a potentially limitless market. This creates an unregulated economy prone to crisis. It sounds a little like venture-capital-fueled Silicon Valley.

The consequence of greater specialization and division of labor is a greater need for communication between the specialized components of society. This is the problem of integration, and it affects both the material and the social. Specifically, the magnitude and complexity of material flows result in a sharpening division of labor. When properly integrated, the different ‘organs’ of society gain in social solidarity. But if communication between the organs is insufficient, the result is a pathological breakdown of norms and of the sense of social purpose: anomie.

The state of anomie is impossible wherever solidary organs are sufficiently in contact or sufficiently prolonged. In effect, being contiguous, they are quickly warned, in each circumstance, of the need which they have of one another, and, consequently, they have a lively and continuous sentiment of their mutual dependence… But, on the contrary, if some opaque environment is interposed, then only stimuli of a certain intensity can be communicated from one organ to another. Relations, being rare, are not repeated enough to be determined; each time there ensues new groping. The lines of passage taken by the streams of movement cannot deepen because the streams themselves are too intermittent. If some rules do come to constitute them, they are, however, general and vague.

An interesting question is to what extent Beniger’s thinking about the control revolution extends to today and the future. An interesting sub-question is to what extent Durkheim’s thinking is relevant today or will be in the future. I’ll hazard a guess that is informed partly by Adam Elkus’s interesting thoughts about pervasive information asymmetry.

An issue of increasing significance as communication technology improves is that the bottlenecks to communication become less technological and more a matter of our limitations as human beings to sense, process, and emit information. These cognitive limits are overwhelmed by technologically enabled access to information. Meanwhile, there is a division of labor between those who do the intellectually demanding work of creating and maintaining technology and those who do the intellectually demanding work of creating and maintaining cultural artifacts. Because intellectual work demands the specialization of limited cognitive resources, the result is conflicts of professional identity driven by anomie.

Long story short: Anomie is why academic politics are so bad. It’s also why conferences specializing in different intellectual functions can harbor a kind of latent animosity towards each other.