Digifesto

Tag: networks

causal inference in networks is hard

I am trying to make statistically valid inferences about the mechanisms underlying observational networked data and it is really hard.

Here’s what I’m up against:

  • Even though my data set is a complete ecologically valid data set representing a lot of real human communication over time, it (tautologically) leaves out everything that it leaves out. I can’t even count all the latent variables.
  • The best methods for detecting causal mechanisms, the potential outcomes framework of the Rubin causal model, depend on the assumption that different members of the sample don’t interfere with each other (the stable unit treatment value assumption). But I’m working with networked data. Everything interferes with everything else, at least indirectly. That’s why it’s a network.
  • Did I mention that I’m working with communications data? What’s interesting about human communication is that it’s not really generated at random at all. It’s very deliberately created by people acting more or less intelligently all the time. If the phenomenon I’m studying is not more complex than the models I’m using to study it, then there is something seriously wrong with the people I’m studying.

I think I can deal with the first point here by gracefully ignoring it. It may be true that any apparent causal effect in my data is spurious and due to a common latent cause upstream. It may be true that the variance in the data is largely due to exogenous factors. Fine. That’s noise. I’m looking for a reliable endogenous signal. If there isn’t one, that would suggest that my entire data set is epiphenomenal. But I know it’s not. So there’s got to be something there.

For the second point, there are apparently sophisticated methods for extending the potential outcomes framework to handle peer effects. These are gnarly, and though I figure I could work with them, I don’t think they are going to be what I need, because I’m not really looking for a causal relationship in the sense of a statistical relationship between treatment and outcome. I’m not after, in the first instance, what might be called type causation. I’m rather trying to demonstrate cases of token causation, where causation is literally the transfer of information from one object to another. And then I’m trying to show regularity in this underlying kind of causation at a layer of abstraction over it.

The best angle I can come up with on this so far is to use emergent properties of the network, like degree assortativity, to sort through potential mathematically defined graph generation algorithms. These algorithms can act as alternative hypotheses, and the observed emergent properties can in principle be used to compute the likelihood of the observed data given each generation method. Then all I need is a prior over graph generation methods! It’s perfectly Bayesian! I wonder if it is at all feasible to execute on. I will try.
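
To make this concrete, here is a minimal sketch of the kind of computation I have in mind, which amounts to approximate Bayesian computation with degree assortativity as the summary statistic. It assumes the observed graph is available as a networkx Graph; the candidate generators, simulation count, tolerance, and uniform prior are all placeholders rather than settled choices:

```python
import networkx as nx
import numpy as np

def approx_posterior(observed, generators, prior, n_sims=200, tol=0.05):
    """Score each candidate generator by how often it reproduces the
    observed degree assortativity, a crude ABC-style likelihood."""
    obs = nx.degree_assortativity_coefficient(observed)
    likelihoods = []
    for gen in generators:
        stats = np.array([nx.degree_assortativity_coefficient(gen())
                          for _ in range(n_sims)])
        likelihoods.append(np.mean(np.abs(stats - obs) < tol))
    post = np.asarray(prior, dtype=float) * np.array(likelihoods)
    return post / post.sum() if post.sum() > 0 else post

# Candidate generation methods, sized to match the observed graph.
n, m = 500, 1500
candidates = [
    lambda: nx.gnm_random_graph(n, m),            # Erdos-Renyi
    lambda: nx.barabasi_albert_graph(n, m // n),  # preferential attachment
]
# posterior = approx_posterior(G_observed, candidates, prior=[0.5, 0.5])
```

A single summary statistic will not discriminate between generators very well; richer statistics (clustering, the full degree distribution) would sharpen the comparison.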

It’s not 100% clear how you can take an algorithmically defined process and turn that into a hypothesis about causal mechanisms. Theoretically, as long as a causal network has computable conditional dependencies, it can be represented by an algorithm. I believe that any algorithm (in the Church/Turing sense) can be represented as a causal network. Can this be done elegantly, so that the corresponding causal network represents something like what we’d expect from the scientific theory on the matter? This is unclear because, again, Pearl’s causal networks are great at representing type causation but not as expressive of token causation among a large population of uniquely positioned, generatively produced stuff. Pearl is not good at modeling life, I think.
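
One direction of this correspondence is at least easy to illustrate: a structural causal model in Pearl’s sense, written out, just is an algorithm, with each structural equation a step. A toy example (the variable names and equations here are invented purely for illustration):

```python
import random

def sample_scm():
    """Draw one unit from a toy structural causal model. Each assignment
    is a structural equation, child := f(parents, noise), so the model
    is literally an algorithm."""
    u1, u2 = random.gauss(0, 1), random.random()  # exogenous noise terms
    sends_message = u1 > 0                        # X := f(U1)
    gets_reply = sends_message and u2 < 0.6       # Y := f(X, U2)
    return {"sends_message": sends_message, "gets_reply": gets_reply}
```

It is the other direction, compiling an arbitrary Church/Turing algorithm into a causal network that reads like a scientific theory, that I find unclear.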

The strategic activity of the actors is a modeling challenge but I think this is actually where there is substantive potential in this kind of research. If effective strategic actors are working in a way that is observably different from naive actors in some way that’s measurable in aggregate behavior, that’s a solid empirical result! I have some hypotheses around this that I think are worth checking. For example, probably the success of an open source community depends in part on whether members of the community act in ways that successfully bring new members in. Strategies that cultivate new members are going to look different from strategies that exclude newcomers or try to maintain a superior status. Based on some preliminary results, it looks like this difference between successful open source projects and most other social networks is observable in the data.
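
To give a sense of what operationalizing such a hypothesis might look like, here is one hypothetical measure: the share of replies from established members that are directed at newcomers. The graph representation and the thirty-day newcomer window are my assumptions, not features of any particular data set:

```python
import networkx as nx

def newcomer_reply_share(G, first_seen, t_now, window=30):
    """Fraction of replies from established members directed at newcomers.

    G: directed reply graph (edge u -> v means u replied to v).
    first_seen: maps each node to the time of its first activity.
    window: how recently arrived (in days) counts as a newcomer."""
    newcomers = {n for n in G if t_now - first_seen[n] <= window}
    veterans = set(G) - newcomers
    vet_replies = [(u, v) for u, v in G.edges() if u in veterans]
    if not vet_replies:
        return 0.0
    return sum(v in newcomers for _, v in vet_replies) / len(vet_replies)
```

The hypothesis would then be that this share is systematically higher in communities that succeed at growing.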

turns out network backbone markets in the US are competitive after all

I’ve been depressed about the oligopolistic control of telecommunications for a while now. There’s the Web We’ve Lost; there are the Snowden leaks; there’s the end of net neutrality. I’ll admit a lot of my moodiness about this has been just that: moodiness. But it was moodiness tied to a particular narrative.

In this narrative, power is transmitted via flows of information. Media is, if not determinative of public opinion, determinative of how that opinion is acted upon. Surveillance is also an information flow. Broadly, mid-20th century telecommunications enabled mass culture due to the uniformity of media. The Internet’s protocols allowed it to support a different kind of culture, a more participatory one. But monetization and consolidation of the infrastructure have resulted in a society that’s fragmented but more tightly controlled.

There is still hope of counteracting that trend at the software/application layer, which is part of the reason why I’m doing research on open source software production. One of my colleagues, Nick Doty, studies the governance of Internet Standards, which is another piece of the puzzle.

But if the networking infrastructure itself is centrally controlled, then all bets are off. Democracy, in the sense of decentralized power with checks and balances, would be undermined.

Yesterday I learned something new from Ashwin Mathew, another colleague who studies Internet governance at the level of network administration. The man is deep in the process of finishing up his dissertation, but he looked up from his laptop for long enough to tell me that the network backbone market is in fact highly competitive at the moment. Apparently, a lot of fiberoptic cable was laid during the first dot-com boom and left unlit (“dark fiber”, meaning no light is going through it); it has been lying fallow and getting bought up by many different companies. Since there are many routes from A to B and excess capacity, this market is highly competitive.

Phew! So why the perception of oligopolistic control of networks? Because the consumer-facing telecom end-points ARE an oligopoly. Here there’s the last-mile problem. When wire has to be laid to every house, the economies of scale are such that it’s hard to have competitive markets. Enter Comcast etc.

I can rest easier now, because I think this means there are various engineering solutions to this (like AirJaldi networks? though I think those still aren’t last mile…; mesh networks?) as well as political solutions (like a local government running its last-mile network as a public utility).

Ascendency and overhead in networked ecosystems

Ulanowicz (2000) proposes, in information-theoretic terms, several metrics for ecosystem health, where one models an ecosystem as, for example, a trophic network. Principal among them is ascendency, a measure of the extent to which energy flows in the system are predictably structured, weighted by the total energy of the system. He believes that systems tend towards greater ascendency in expectation, and that this is predictive of ecological ‘succession’ (and to some extent ecological fitness). On the other hand, overhead, which is unpredictability (perhaps, inefficiency) in energy flows (“free energy”?), is important for the system’s resiliency towards external shocks.
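
For reference, here is the information-theoretic formulation as I understand it, in code. With T[i][j] the flow from compartment i to j and T.. the total system throughput, ascendency is A = Σ T_ij · log(T_ij · T.. / (T_i. · T._j)), development capacity is C = −Σ T_ij · log(T_ij / T..), and overhead is C − A. The function name, the matrix convention, and the base-2 logarithm are my choices; treat this as a sketch rather than a vetted implementation:

```python
import numpy as np

def ascendency_overhead(T):
    """Ascendency A and overhead (C - A) for a flow matrix T,
    where T[i, j] is the flow from compartment i to j."""
    T = np.asarray(T, dtype=float)
    tst = T.sum()                        # total system throughput
    out = T.sum(axis=1, keepdims=True)   # total outflow per compartment
    inn = T.sum(axis=0, keepdims=True)   # total inflow per compartment
    denom = out * inn                    # broadcasts to an n-by-n matrix
    mask = T > 0                         # only nonzero flows contribute
    A = np.sum(T[mask] * np.log2(T[mask] * tst / denom[mask]))
    C = -np.sum(T[mask] * np.log2(T[mask] / tst))
    return A, C - A

# Toy three-compartment chain with a small recycling loop.
T = [[0, 10, 0],
     [0, 0, 8],
     [1, 0, 0]]
A, overhead = ascendency_overhead(T)
```
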
[figure: ascendency]
At least in the papers I’ve read so far, Ulanowicz is not mathematically specific about the mechanism that leads to greater ascendency, though he sketches some explanations. Autocatalytic cycles within the network reinforce their own positive perturbations and mutations, drawing in resources from external sources, crowding out and competing with them. These cycles become agents in themselves, exerting what Ulanowicz suggests is Aristotelian final or formal causal power on the lower-level components. In this way, freely floating energy is drawn into structures of increasing magnificence and complexity.

I’m reminded of Bataille’s The Accursed Share, in which he attempts to account for societal differences and the arc of human history through how societies expend their excess energy. “The sexual act is in time what the tiger is in space,” he says, insightfully. The tiger, as an apex predator, is a flame that clings brilliantly to the less glamorous ecosystem that supports it. That is why we adore it. And yet, its existence is fragile, as it depends on both the efficiency and stability of the rest of its network. When its environment is disturbed, it is the first to suffer.
[figure: space tiger]
Ulanowicz cites himself suggesting that a similar framework could be used to analyze computer networks. I have not read his account yet, though I anticipate several difficulties. He suggests that data flows in a computer network are analogous to energy flows within an ecosystem. That has intuitive appeal, but it obscures the fact that some data is more valuable than other data. A better analogy might be money as a substitute for energy. Or maybe there is a way to reduce both to a common currency, at least for modeling purposes.

Econophysics has been gaining steam, albeit controversially. Without knowing anything about it, but based just on statistical hunches, I suspect that this comes down to using more complex models on the super duper complex phenomenon of the economy, and demonstrating their success there. In other words, I’m just guessing that the success of econophysics modeling is due to the greater degrees of freedom in the physics models compared to non-dynamic, structural equilibrium models. However, since ecology models the evolutionary dynamics of multiple competing agents (and systems of those agents), it’s possible that those models could capture quite a bit of what’s really going on and even be a source of strategic insight.

Indeed, economics already has a sense of stable versus unstable equilibria that resonates with the idea of the stability of ecological succession. These ideas translate into game-theoretic analysis as well. As we do more work with Strategic Bayesian Networks or other constructs to model equilibrium strategies in a networked, multi-agent system, I wonder if we can reproduce Ulanowicz’s results and use his ideas about ascendency (which, I’ve got to say, are extraordinary and profound) to provide insight into the information economy.

I think that will require translating the ecosystem modeling into Judea Pearl’s framework for causal reasoning. Having been indoctrinated in Pearl’s framework in much of my training, I believe that it is general enough to subsume Ulanowicz’s results. But I have some doubt. In some of his later writings Ulanowicz refers explicitly to a “Hegelian dialectic” between order and disorder as a consequence of some of his theories, and between that and his insistence on his departure from mechanistic thinking over the course of his long career, I am worried that he may have transcended what it’s possible to do even with the modeling power of Bayesian networks. The question is: what then? It may be that once one’s work sublimates beyond our ability to model explicitly and intervene strategically, it becomes irrelevant. (I get the sense that in academia, Ulanowicz’s scientific philosophizing is a privilege reserved for someone tenured who, late in their career, is free to make their peace with the world in their own way.) But reading his papers is so exhilarating to me. I’d had no exposure to ecology before this, so his papers are packed with fresh ideas. So while I don’t know how to justify it to any of my mentors or colleagues, I think I just have to keep diving into it when I can, on the side.