Digifesto

prediction and computational complexity

To the extent that an agent is predictable, it must:

  • be observable, and
  • have a knowable internal structure

The first implies that the predictor has collected data emitted by the agent.

The second implies that the agent has internal structure and that the predictor has the capacity to represent that structure.

In general, we can say that people do not have the capacity to explicitly represent other people very well. People are unpredictable to each other. This is what makes us free. When somebody is utterly predictable to us, their rigidity is a sign of weakness or stupidity. They are following a simple algorithm.
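To make the point about representational capacity concrete, here is a toy sketch (a hypothetical example of my own, not something from the argument above): an agent whose rule depends on its last two outputs, and a predictor whose model can only condition on the last one. The predictor’s accuracy is capped because it cannot represent the agent’s internal structure.

```python
from collections import Counter

# Hypothetical toy example: the agent's rule uses its last two outputs,
# but the predictor can only represent rules over the last single output.

def agent_rule(history):
    """Agent's true rule: the XOR of its last two outputs."""
    return history[-1] ^ history[-2]

# Generate a trace of the agent's behavior (the predictor's observations).
trace = [0, 1]
for _ in range(999):
    trace.append(agent_rule(trace))

# Predictor with a one-step model: majority vote for "next" given "previous".
ones, totals = Counter(), Counter()
for prev, nxt in zip(trace, trace[1:]):
    totals[prev] += 1
    ones[prev] += nxt

def predict(prev):
    return 1 if 2 * ones[prev] >= totals[prev] else 0

correct = sum(predict(prev) == nxt for prev, nxt in zip(trace, trace[1:]))
print("one-step predictor accuracy:", round(correct / (len(trace) - 1), 3))
# ~0.667: a two-step model would predict the agent exactly, but this predictor
# lacks the capacity to represent the agent's internal structure.
```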

We are able to model the internal structure of worms with available computing power.

As we build more and more powerful predictive systems, we can ask: is our internal structure in principle knowable by this powerful machine?

This is different from the question of whether or not the predictive machine has data from which to draw inferences. Though of course the questions are related in their implications.

I’ve tried to make progress on modeling this with limited success. Spiros has just told me about binary decision diagrams, which are a promising lead.
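To note why binary decision diagrams seem promising, here is a minimal sketch (my own toy code, not anything Spiros showed me): it builds a reduced ordered BDD for a small hypothetical boolean "decision rule" standing in for an agent's behavior, via memoized Shannon expansion. The number of BDD nodes is one crude measure of how much internal structure a predictor must represent to model the agent exactly.

```python
# Toy sketch (hypothetical): reduced ordered BDD for a small boolean rule.

def agent_decision(bits):
    """Hypothetical agent: acts iff (x0 and x1) or (not x2)."""
    x0, x1, x2 = bits
    return (x0 and x1) or (not x2)

def build_bdd(f, num_vars):
    """Return (root id, node table) of a reduced ordered BDD for f."""
    unique = {}                      # (var, low, high) -> node id (sharing)
    nodes = {0: "FALSE", 1: "TRUE"}  # ids 0 and 1 are the terminals

    def mk(var, low, high):
        if low == high:              # redundant test: skip the node
            return low
        key = (var, low, high)
        if key not in unique:
            unique[key] = len(nodes)
            nodes[unique[key]] = key
        return unique[key]

    def build(var, assignment):
        if var == num_vars:
            return 1 if f(assignment) else 0
        low = build(var + 1, assignment + (False,))
        high = build(var + 1, assignment + (True,))
        return mk(var, low, high)

    return build(0, ()), nodes

root, nodes = build_bdd(agent_decision, 3)
print("internal BDD nodes:", len(nodes) - 2)  # 3 here, versus 2**3 = 8 truth-table rows
```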

The link between computational asymmetry and openness

I want to jot something down while it is on my mind. It’s rather speculative, but may wind up being the theme of my thesis work.

I’ve written here about computational asymmetry in the economy. The idea is that when different agents are endowed with different capacities to compute (or are boundedly rational to different degrees), those differences can grow into extreme inequality (power-law distributed, as income is) as computational power is stockpiled as a kind of capital accumulation.

Whereas a solution to unequal income is redistribution, and a solution to unequal physical power is regulation against violence, for computational asymmetry there is a simpler solution: “openness” in the products of computation. In particular, high-quality data goods, meaning data that is computationally rich (that has more logical depth, and so embodies computational work its recipients would otherwise have to redo), can be made available as public goods.
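For reference, the notion of logical depth I have in mind is (roughly) Bennett’s: how long it takes to run a near-shortest program for the data. A rough statement of the standard definition, not something from the post itself:

```latex
% Bennett's logical depth of a string x at significance level s, roughly:
% the least running time T(p) of any program p for x on a universal machine U
% whose length is within s bits of x's Kolmogorov complexity K(x).
\[
  \mathrm{depth}_s(x) \;=\; \min \{\, T(p) \;:\; U(p) = x,\ |p| \le K(x) + s \,\}
\]
```

Deep data, in this sense, is data whose value comes from the computation already sunk into producing it.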

There are several challenges to this idea. One is the problem of funding. How do you encourage the production of costly public goods? The classic answer is state funding. Today we have another viable option, crowdfunding.

Another involves questions of security and privacy. Can a policy of ‘openness’ lead to problematic invasions of privacy? Framing the problem in terms of computational asymmetry clarifies this dynamic. Privacy should be a privilege of the disempowered, openness a requirement of the powerful.

In an ideal economy, agents are rewarded for their contribution to social welfare. For high-quality data goods, openness leads to the maximum social welfare. So in theory, agents should adopt an open policy of their own volition. What has prevented them in the past are transaction costs and the problem of incurred risk. As institutions that reduce transaction costs and absorb risk improve, the remaining problems will be ones of regulating noncompetitive practices.

Computational Asymmetry

I’ve written a paper with John Chuang, “Computational Asymmetry in Strategic Bayes Networks,” to open a conversation about an economic and social issue: computational asymmetry. By this I mean the problem that some agents (people, corporations, nations) have access to more computational power than others.

We know that computational power is a scarce resource. Computing costs money, whether we buy our own hardware or rent it on the cloud. Should we be concerned with how this resource gets distributed in society?

One could argue that the market will lead to an efficient distribution of computing power, just like it leads to an efficient distribution of brown shoes or butter. But that argument only makes sense if computational power is not associated with externalities that would cause systematic market failure.

That assumption is unlikely to hold. We know that information asymmetry can wreak havoc on market efficiency. Arguably, computational asymmetry is another form of information asymmetry: it allows some parties to get important information faster. Or perhaps a better way to put it is that with more computing power, you can get more knowledge out of the information you already have.

In the paper linked above, we show that in some game-theoretic situations with complex problems, more computationally powerful players can beat their opponents using only their superior silicon. Suppose that organizations use computing power to gain an economic advantage, and then invest their winnings in more computing power. You can see how this cycle would lead to massive inequality.
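To illustrate the cycle, here is a toy simulation (my own sketch, not the model from the paper): each round is a winner-take-all contest whose outcome favors the player with more compute (how steeply is set by the exponent k, an arbitrary assumption), and the winner reinvests the payoff in more compute.

```python
import random

# Toy simulation (hypothetical, not the paper's model): wins buy compute,
# and compute buys wins.

def simulate(rounds=200, compute=(1.0, 1.1), payoff=1.0, k=3, seed=0):
    rng = random.Random(seed)
    c = list(compute)
    for _ in range(rounds):
        p0 = c[0] ** k / (c[0] ** k + c[1] ** k)  # player 0's chance of winning
        winner = 0 if rng.random() < p0 else 1
        c[winner] += payoff                       # winnings stockpiled as compute
    return c

c0, c1 = simulate()
print("final compute endowments:", round(c0, 1), "vs", round(c1, 1))
print("leader's share of total compute:", round(max(c0, c1) / (c0 + c1), 2))
# The split typically ends up highly lopsided: a small early edge snowballs.
```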

I don’t think this situation is far-fetched. In fact, we may already be living it. Consider that computing power is carried not just by hardware availability, but by software and human capital. What are the most powerful forces in United States politics today? Is it Wall Street, with its bright minds and high-frequency traders? Or Silicon Valley, crunching data and rolling out code? Or technocratic elites in government? President Obama has a large team of software developers available to build whatever data mining tools he needs. Does Mexico have the same skills and tools at its disposal? Does Nigeria? There is asymmetry here. How will this power imbalance manifest itself in twenty years? Fifty years?

Henry Farrell (George Washington University) and Cosma Rohilla Shalizi (Carnegie-Mellon/The Santa Fe Institute) have recently put out a great paper about Cognitive Democracy, a political theory that grapples with society’s ability to solve complex problems. In contrast to Hayek, who maintains that the market will efficiently solve complex economic problems, and to Thaler and Sunstein, who believe that a paternalistic hierarchy can solve problems in a disinterested way, Farrell and Shalizi argue that a radical democracy can solve problems in a way that diffuses unequal power through people’s confrontation with other viewpoints. This requires that open argumentation and deliberation be an effective information-processing mechanism. They advocate for greater experimentation with democratic structures over the Internet, with the goal of eventually re-designing democratic institutions.

I love the concept of cognitive democracy and their approach. However, if their background assumptions are correct then computational asymmetry poses a problem. Politics is the negotiation of adversarial interests. If argumentation is a computational process (which I believe it is), then even a system of governance based on free speech and collective intelligence could be manipulated or overpowered by a computational titan. In such a system, whoever holds the greatest gigahertz gets a bigger piece of the derived social truth. As we plunge into a more computationally directed world, that should give us pause.