Digifesto

Tag: teleology

scientific contexts

Recall:

  • For Helen Nissenbaum (contextual integrity theory):
    • a context is a social domain that is best characterized by its purpose. For example, a hospital’s purpose is to cure the sick and wounded.
    • a context also has certain historically given norms of information flow.
    • a violation of a norm of information flow in a given context is a potentially unethical privacy violation. This is an essentially conservative notion of privacy, which is balanced by the following consideration…
    • Whether or not a norm of information flow should change (given, say, a new technological affordance to do things in a very different way) can be evaluated by how well it serves the context’s purpose.
  • For Fred Dretske (Knowledge and the Flow of Information, 1981):
    • The appropriate definition of information is (roughly) just what it takes to know something. (More specifically: M carries information about X if it reliably transmits what it takes for a suitably equipped but otherwise ignorant observer to learn about X.)
  • Combining Nissenbaum and Dretske, we see that with an epistemic and naturalized understanding of information, contextual norms of information flow are inclusive of epistemic norms.
  • Consider scientific contexts. I want to use ‘science’ in the broadest possible (though archaic) sense: the intellectual and practical activity of study, or of coming to knowledge of any kind. “Science” from the Latin “scire”–to know. Or “Science” (capitalized) as a translation of the 19th-century German Wissenschaft.
    • A scientific context is one whose purpose is knowledge.
    • Specific issues of whose knowledge, knowledge about what, and to what end the knowledge is used will vary depending on the context.
    • As information flow is necessary for knowledge, and knowledge is the purpose of science, the integrity of a scientific context will be especially sensitive to its norms of information flow, both within the context and across its boundary.
  • An insight I owe to my colleague Michael Tschantz, in conversation, is that there are several open problems within contextual integrity theory:
    • How does one know what context one is in? Who decides that?
    • What happens at the boundary between contexts, for example when one context is embedded in another?
    • Are there ways for the purpose of a context to change (not just the norms within it)?
  • Proposal: One way of discovering what a science is would be to trace its norms of information flow and to identify its purpose. A contrast between the norms and purposes of, for example, data science and ethnography would be illustrative of both. One approach to this problem could be the kind of qualitative research done by Edwin Hutchins on distributed cognition, which accepts a naturalized view of information (necessary for this framing) and then discovers information flows in a context through qualitative observation. (A toy formalization of this framing is sketched after this list.)
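
To make the framing concrete, here is a minimal sketch of how a context’s purpose and flow norms might be represented and checked. It is in the spirit of formalizations of contextual integrity, not an implementation of Nissenbaum’s theory; all roles and attributes are hypothetical.

```python
# A minimal sketch: contexts carry a purpose and a set of information-flow
# norms; a concrete flow either matches some norm or violates the context's
# integrity. All names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    sender: str      # role of the party transmitting the information
    recipient: str   # role of the party receiving it
    subject: str     # whom the information is about
    attribute: str   # what kind of information flows

@dataclass
class Context:
    purpose: str
    norms: set

    def permits(self, sender, recipient, subject, attribute):
        """A flow preserves contextual integrity iff it matches some norm."""
        return Norm(sender, recipient, subject, attribute) in self.norms

hospital = Context(
    purpose="cure the sick and wounded",
    norms={Norm("patient", "physician", "patient", "medical history")},
)

# A patient telling their physician their medical history is normal...
assert hospital.permits("patient", "physician", "patient", "medical history")
# ...while the physician passing it to an advertiser violates a flow norm.
assert not hospital.permits("physician", "advertiser", "patient", "medical history")
```

Evaluating whether a norm *should* change would then amount to asking whether adding or removing a Norm serves the context’s stated purpose, which is exactly the part the data structure cannot settle on its own.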

intersecting agencies and cybersecurity #RSAC

A recurring theme in my reading lately (such as Beniger’s The Control Revolution, Horkheimer’s Eclipse of Reason, and Norbert Wiener’s work on cybernetics) is the problem of reconciling two ways of explaining how things came to be:

  • Natural selection. Here a number of autonomous, uncoordinated agents with some exogenously given variability encounter obstacles that limit their reproduction or survival. The fittest survive. Adaptation is due to random exploration at the level of the exogenous specification of the agent, if at all. In unconstrained cases, randomness rules and there is no logic to reality.
  • Purpose. Here there is a teleological explanation based on a goal some agent has “in mind”. The goal is coupled with a controlling mechanism that influences or steers outcomes towards that goal. Adaptation is part of the endogenous process of agency itself. (A toy sketch contrasting these two modes follows this list.)
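
Here is a toy sketch of the contrast, under invented dynamics: a population adapted by exogenous random variation and selection, next to a single agent steered endogenously toward a goal by feedback. Neither is meant as a model of anything in particular.

```python
# Toy contrast of the two explanatory modes. GOAL, the dynamics, and all
# parameters are invented for illustration.
import random

GOAL = 10.0

def fitness(x):
    return -abs(x - GOAL)  # the environment happens to reward proximity to GOAL

def natural_selection(pop_size=50, generations=100):
    population = [random.uniform(-50, 50) for _ in range(pop_size)]
    for _ in range(generations):
        # random, uncoordinated variation in the agents' specifications...
        offspring = [x + random.gauss(0, 1) for x in population]
        # ...pruned by obstacles to survival: only the fittest remain
        population = sorted(population + offspring, key=fitness)[-pop_size:]
    return max(population, key=fitness)

def purposive_control(steps=100, gain=0.5):
    state = random.uniform(-50, 50)
    for _ in range(steps):
        error = GOAL - state   # the agent measures its distance from the goal
        state += gain * error  # and steers itself to reduce it
    return state

print(natural_selection())  # adaptation with no goal "in mind"
print(purposive_control())  # adaptation as endogenous steering toward a goal
```

Both runs end up near the same value; the difference lies entirely in where the explanation locates the goal.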

Reconciling these two kinds of description is not easy. A point Beniger makes is that differences between social theories in the 20th century can be read as differences in where one draws the boundaries of agents within a larger system.


This week at the RSA Conference, Amit Yoran, President of RSA, gave a keynote speech about the change in mindset of security professionals. Just the day before, I had attended a talk on “Security Basics” to reacquaint myself with the field. In it, there was a lot of discussion of how a security professional needs to establish “the perimeter” of their organization’s network. In this framing, a network is like the nervous system of the macro-agent that is an organization, and the security professional’s role is to preserve the integrity of the organization’s information systems. Even in this talk on “the basics”, the speaker acknowledged that a determined attacker will always get into your network, because of the limited affordances of defense, the economic incentives of attackers, and the constantly “evolving” nature of the technology. I was struck in particular by this speaker’s detachment from the arms race of cybersecurity. The goal-driven adversariality of the agents involved was taken as a given; as a consequence, the system evolves through a process of natural selection. The role of the security professional is to adapt to an exogenously given ecosystem of threats in a purposeful way.

Amit Yoran’s proposed escape from the “Dark Ages” of cybersecurity got away from this framing in at least one way. For Yoran, thinking about the perimeter is obsolete. Because the attacker will always be able to infiltrate, the emphasis must be on monitoring normal behavior within your organization–say, which resources are accessed and how often–and detecting deviance through pervasive surveillance and fast computing. Yoran’s vision replaces the “perimeter” with an all-seeing eye. The organization that one can protect is the organization that one can survey as if it were exogenously given, so that changes within it can be detected and audited.
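
As a deliberately naive illustration of what monitoring normal behavior rather than defending a perimeter might mean, here is a sketch that learns a baseline of per-actor resource-access counts and flags large deviations. The events, names, and 3x threshold are hypothetical; this is not RSA’s method.

```python
# Learn a baseline of normal (actor, resource) access counts, then flag
# observations that deviate sharply from it. All data is invented.
from collections import Counter

def baseline(history):
    """history: iterable of (actor, resource) access events recorded
    during a period taken to be normal."""
    return Counter(history)

def flag_deviance(normal, observed, factor=3):
    """Flag any (actor, resource) pair accessed far more often than its
    baseline allows, including pairs never seen before (baseline zero)."""
    return [pair for pair, n in observed.items()
            if n > factor * normal.get(pair, 0)]

normal = baseline([("alice", "payroll")] * 30 + [("bob", "wiki")] * 50)
today = Counter({("alice", "payroll"): 2, ("bob", "payroll"): 40})
print(flag_deviance(normal, today))  # -> [('bob', 'payroll')]
```

Note that the scheme presupposes exactly what Yoran’s vision presupposes: that “normal” can be recorded from a vantage point that sees everything.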

We can speculate about how an organization’s members will feel about such pervasive monitoring and auditing of activity. The interests of the individual members of a (sociotechnical) organization, the interests of the organization as a whole, and the interests of sub-organizations within an organization can be either in accord or in conflict. An “adversary” within an organization can be conceived of as an agent within a supervening organization that acts against the latter’s interests. Like a cancer.

But viewing organizations purely hierarchically like this leaves something out. Just as human beings are capable of more complex, high-dimensional, and conflicted motivations than any one of the organs or cells in our bodies, so too should we expect the interests of an organization to be broad and perhaps beyond the understanding of anyone within it. That includes the executives and the security professionals, who RSA Conference blogger Tony Kontzer suggests should increasingly be one and the same. (What security professional would disagree?)

What if the evolution of cybersecurity results in the evolution of a new kind of agency?

As we start to think of new strategies for information-sharing between cybersecurity-interested organizations, we have to consider how agents supervene on other agents in possibly surprising ways. An evolutionary mechanism may be a part of the very mechanism of purposive control used by a super-agent. For example, an executive might have two competing security teams and reward them separately. A nation might have an enormous ecosystem of security companies within its perimeter (…) that it plays off of each other to improve the robustness of its internal economy, providing for it the way kombucha drinkers foster their own vibrant ecosystem of gut fauna.

Still stranger, we might discover ways that purposive agents intersect at the neuronal level, like Siamese twins. Indeed, this is what happens when two companies share generic networking infrastructure. Such mereological complexity is sure to affect the incentives of everyone involved.

Here’s the rub: every seam in the topology of agency, at every level of abstraction, is another potential vector of attack. If our understanding of the organizational agent becomes more complex as we abandon the idea of the organizational perimeter, that complexity provides new ways to infiltrate. Or, to put it in the Enlightened terms more aligned with Yoran’s vision, a system with multitudinous and intersecting purposive agents will become harder and harder to watch for infiltrators.

If a security-driven agent is driven by its need to predict and audit activity within itself, then such agents will tolerate a level of complexity within themselves that is bounded by their own capacity to compute. This point was driven home clearly by Dana Wolf’s excellent talk on Monday, “Security Enforcement (re)Explained”. She outlined several ways that computationally difficult cybersecurity functions–such as anti-virus and firewall technology–are being moved to the Cloud, where the elasticity of compute resources theoretically makes it easier to cope with these demands. I’m left wondering: does the end-game of cybersecurity come down to the market dynamics of computational asymmetry?
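
The compute bound can be put as a back-of-envelope calculation. Suppose, under an invented quadratic cost model, that self-monitoring must cover every pairwise interaction among n internal agents; then a fixed compute budget caps the complexity an organization can tolerate within itself. All numbers below are made up.

```python
# Back-of-envelope: n*(n-1)/2 pairwise checks ~ n^2/2, so a budget B and a
# per-check cost c bound the auditable organization at roughly sqrt(2B/c).
from math import floor, sqrt

def max_auditable_agents(budget_ops_per_day, ops_per_pair_check):
    return floor(sqrt(2 * budget_ops_per_day / ops_per_pair_check))

# With 1e15 monitoring ops/day and 1e6 ops per pairwise check:
print(max_auditable_agents(1e15, 1e6))  # -> 44721 agents
```

Under such a model, whoever can rent the most elastic compute can afford the most internal complexity, which is one way of cashing out the question of computational asymmetry.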

This blog post has been written for research purposes associated with the Center for Long-Term Cybersecurity.