Computational Asymmetry
by Sebastian Benthall
I’ve written a paper with John Chuang, “Computational Asymmetry in Strategic Bayes Networks,” to open a conversation about an economic and social issue: computational asymmetry. By this I mean the problem that some agents (people, corporations, nations) have access to more computational power than others.
We know that computational power is a scarce resource. Computing costs money, whether we buy our own hardware or rent it on the cloud. Should we be concerned with how this resource gets distributed in society?
One could argue that the market will lead to an efficient distribution of computing power, just like it leads to an efficient distribution of brown shoes or butter. But that argument only makes sense if computational power is not associated with externalities that would cause systematic market failure.
This isn’t likely. We know that information asymmetry can wreak havoc on market efficiency. Arguably, computational asymmetry is another form of information asymmetry: it allows some parties to get important information, faster. Or perhaps a better way to put it is that with more computing power, you can get more knowledge out of the information you already have.
In the paper linked above, we show that in some game-theoretic situations with complex problems, more computationally powerful players can beat their opponents using only their superior silicon. What if organizations use computing power to gain an economic advantage, and then reinvest their winnings in more computing power? You can see how this cycle could lead to massive inequality.
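To make that intuition concrete, here is a minimal toy sketch in Python. It is not the strategic Bayes network model from the paper, and every number in it is invented; it just shows the bare mechanism. Two players see the same information (two actions with noisy payoffs), but the player who can afford more Monte Carlo samples extracts a more reliable estimate from that information and so picks the better action far more often.

```python
# Toy sketch only: two players face the same noisy decision, but one can
# afford far more Monte Carlo samples when estimating expected payoffs.
import random

def choose_action(true_means, n_samples, noise=1.0):
    """Pick the action with the highest sample-mean payoff, given a sampling budget."""
    estimates = []
    for mu in true_means:
        samples = [random.gauss(mu, noise) for _ in range(n_samples)]
        estimates.append(sum(samples) / n_samples)
    return max(range(len(true_means)), key=lambda i: estimates[i])

def accuracy(n_samples, trials=2000):
    """How often a player with this budget identifies the truly better action."""
    true_means = [0.0, 0.1]  # action 1 is only slightly better, so the noise matters
    hits = sum(choose_action(true_means, n_samples) == 1 for _ in range(trials))
    return hits / trials

print("poor player (5 samples):   ", accuracy(5))     # barely better than a coin flip
print("rich player (2000 samples):", accuracy(2000))  # nearly always right
```

Same information, same game; the only difference is the sampling budget.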
I don’t think this situation is far-fetched. In fact, we may already be living it. Consider that computing power is determined not just by hardware availability, but also by software and human capital. What are the most powerful forces in United States politics today? Is it Wall Street, with its bright minds and high-frequency traders? Or Silicon Valley, crunching data and rolling out code? Or technocratic elites in government? President Obama has a large team of software developers available to build whatever data mining tools he needs. Does Mexico have the same skills and tools at its disposal? Does Nigeria? There is asymmetry here. How will this power imbalance manifest itself in twenty years? Fifty years?
Henry Farrell (George Washington University) and Cosma Rohilla Shalizi (Carnegie Mellon/The Santa Fe Institute) have recently put out a great paper about Cognitive Democracy, a political theory that grapples with society’s ability to solve complex problems. Whereas Hayek maintains that the market will efficiently solve complex economic problems, and Thaler and Sunstein believe that a paternalistic hierarchy can solve problems in a disinterested way, Farrell and Shalizi argue that a radical democracy can solve problems in a way that diffuses unequal power through people’s confrontation with other viewpoints. This requires that open argumentation and deliberation be an effective information-processing mechanism. They advocate for greater experimentation with democratic structure over the Internet, with the goal of eventually re-designing democratic institutions.
I love the concept of cognitive democracy and their approach. However, if their background assumptions are correct, then computational asymmetry poses a problem. Politics is the negotiation of adversarial interests. If argumentation is a computational process (which I believe it is), then even a system of governance based on free speech and collective intelligence could be manipulated or overpowered by a computational titan. In such a system, whoever holds the most gigahertz gets a bigger piece of the derived social truth. As we plunge into a more computationally directed world, that should give us pause.
Is argumentation a computational process? Others would argue no. Of course, effective argumentation is as much an emotional process as a rational one. Are emotions computations? Can computers manipulate our souls?
Great post. I look forward to reading the paper.
Thanks for pressing on this point. That question elicits new information in this discussion that’s pertinent to your skepticism. It’s a pleasure to bring it forward.
I would agree that argumentation is often emotional. [We can in the future use this proposition as an axiomatic ground for further discussion, unless a third party challenges it; that is itself part of the computational process of argument.] But I disagree with the implied distinction between the “emotional” and the “rational”. See Antonio Damasio on this point.
Are emotions computations? Though I’m not sure, I’d say they probably are. As a hypothesis: emotions are instantiated through neural computation, and so are computation.
But, more importantly, expressing emotions conveys valuable information to others in an argument. Your gracious doubtfulness signals that you don’t believe what I’m saying but are not averse to receiving more information. That’s very important for both the content of the argument and the channel used to convey it. I’d argue that this process of jointly reconciling the information available to us is still a computational one, in the same way that our process of communication is an informative one.
Computers manipulate our souls all the time! What are we doing right now if not mutual soul manipulation! Sometimes, I think you’re taunting me :)
I haven’t read Damasio (I will skim him), but I’d cite Heidegger, Kahneman, and others for the claim that the rational and the emotional are at least distinct physiological processes within the brain. If emotions are computation, they operate with a Darwinian clock rate.^1
Emotions are also certainly inputs to argumentation. That’s not up for debate. But this discussion is about outputs, not inputs.
Is the important outcome of an argumentation the resulting rational beliefs of the participants, or their emotional states? Do we decide based on rational beliefs, or on our emotions? I (and tons of research) assert the latter, which even if emotions are the results of a computation, are computing across generations, and not rounds within a discourse.
1: For this to be actually true, we might need the Church-Turing-Deutsch principle, which may require quantum computing, as the Turing machine only covers the computable real numbers.
This is very interesting.
> Is the important outcome of an argumentation the resulting rational beliefs of the participants, or their emotional states? Do we decide based on rational beliefs, or on our emotions? I (and tons of research) assert the latter
Isn’t this a bit of a false dichotomy? I agree that an important outcome of argumentation is emotional resolution. But part of that process of emotional resolution can be the “rational” reevaluation of beliefs or values. Often what causes an argument is a disagreement over, say, how to accomplish a task. Certain shared background conditions (the desire to reach an actionable, communicable consensus, intellectual integrity) will make it unsatisfying not to arrive at a rational agreement.
So, while emotions may drive argumentation by motivating it and determining when it stops, a rational exchange can be the means by which those emotions are resolved.
Maybe argumentation is a process of solving complex, multivariate emotional optimization problems.
> which even if emotions are the results of a computation, are computing across generations, and not rounds within a discourse.
I don’t understand this “across generations” comment. My emotions fluctuate rather dramatically over the course of a day, mostly as a function of appetite, but sometimes as a function of argumentation. What do you mean when you say that emotions may compute across generations?
> Maybe argumentation is a process of solving complex, multivariate emotional optimization problems.
This is already a much more believable characterization than what you (and Jenkins) started with. It also reminds me of this piece by Engelbart: http://www.dougengelbart.org/pubs/augment-133319.html (I wish I could find the transcript of the whole talk; it’s much better, although also a little hard to follow).
> I don’t understand this “across generations” comment. My emotions fluctuate rather dramatically over the course of a day, mostly as a function of appetite, but sometimes as a function of argumentation. What do you mean when you say that emotions may compute across generations?
I mean that emotions are “computed” as survival responses to external stimuli that develop as a result of natural selection.
> Maybe argumentation is a process of solving complex, multivariate emotional optimization problems.
Also, with this characterization, an interesting sub-problem is how much “emotional optimization” can really be conducted online.
> This is already a much more believable characterization than what you (and Jenkins) started with. It also reminds me of this piece by Engelbart: http://www.dougengelbart.org/pubs/augment-133319.html (I wish I could find the transcript of the whole talk; it’s much better, although also a little hard to follow).
That is neat. Very interested in his proposed solution of ‘Dynamic Knowledge Repositories’. It may be the missing link I was looking for to tie this line of thought in with the NLP project I was telling you about earlier. Thanks!
> I mean that emotions are “computed” as survival responses to external stimuli that develop as a result of natural selection.
I see. Here’s an alternative take: whatever their evolutionary history, our emotions are most importantly how we as agents evaluate the world. We get upset when there’s something wrong with ourselves or the world around us, and happy when we progress to something better. So, emotional engagement is part of the (possibly computational) day-to-day feedback mechanism that drives us.
Douglas Engelbart is still alive and lives in the South Bay. Last time I checked, he was still interested in pursuing these ideas. I’ve also heard that he was an early friend of the I-School.
> For this to be actually true, we might need the Church-Turing-Deutsch principle, which may require quantum computing, as the Turing machine only covers the computable real numbers.
Hmm…I think Church-Turing-Deutsch is stronger than you need for emotions to be computational. If emotions reduce to discretized events (such as neural firing, or ‘external’ events that are detectable through neural computation) then you don’t need the full expressive power of physics to represent emotion.
Well, I believe there are also different types of neurotransmitters, etc. Anyway, we are clearly speculating outside of our disciplinary sweet spot now, so let’s let this particular thread languish…
OK. This used to be my jam, though (theories of functional mind-body reduction), so it’s a good thing for me to revisit. I’ll look around for it.
OK. I was pre-med/biochem in undergrad, so I’ll try to keep up. ;-)
Also, I’m not taunting you… The soul bit was a rhetorical flourish. ;-)
I read the Jenkins piece on the plane yesterday. Honestly, I found it pretty weak and, more importantly, essentially a rehash of what Helen Landemore and others have been saying for several years now. I find that work unconvincing too, but it at least presents a more formal analytic frame (see, for example, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1449594).
Complex problem solving as optimization over “rugged” topologies? For one thing, smart people generally not only have “more organized” topologies (with homogeneous coverage); they are also faster climbers.
More importantly, complex problems are usually multivariate optimizations. For example, in Jenkins’s cited example of global warming, we are trying to balance economic growth with environmental impact. Moreover, different parties value and stand to benefit differently from each of the variables (e.g., the developing world vs. the developed world).
In short, by ignoring the complex political realities of the problems they hope to solve, the resulting model is hopelessly naive. You can do better, Seb! Keep at it!
> I read the Jenkins piece on the plane yesterday. Honestly, I found it pretty weak and, more importantly, essentially a rehash of what Helen Landemore and others have been saying for several years now. I find that work unconvincing too, but it at least presents a more formal analytic frame (see, for example, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1449594).
Thanks for the Landemore reference. I find it interesting that both Jenkins and Landemore reference the mathematical work of Hong and Page, which is probably the source to go to.
While I agree that it’s not a particularly original idea, I think it’s a bit uncharitable to say that the Jenkins piece is “just a rehash” of Landemore. The arguments re: Hayek, Sunstein, and the proposed research program for democratic decision-making structures all go beyond Landemore’s sketch.
> Complex problem solving as optimization over “rugged” topologies? For one thing, smart people generally not only have “more organized” topologies (with homogeneous coverage); they are also faster climbers.
What’s wrong with the “rugged” topology metaphor? That’s a pretty common metaphor for optimization problems, I think.
Agreed about faster climbers. I think that’s what I was trying to get at with the computational asymmetry angle.
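For what it’s worth, here’s a toy Python sketch of that intuition (the landscape and all the parameters are invented, and this is not Hong and Page’s model): two greedy climbers search the same rugged landscape from the same starting points, but one gets many more candidate evaluations per step, i.e., more compute.

```python
# Toy sketch: greedy local search on a made-up rugged landscape. The "fast"
# climber can evaluate many more candidate moves per step than the "slow" one.
import math
import random

def rugged(x):
    """A bumpy one-dimensional fitness landscape with many local optima."""
    return math.sin(3 * x) + 0.5 * math.sin(17 * x) + 0.1 * x

def climb(start, evals_per_step, steps=50, reach=0.5):
    """At each step, sample nearby candidate moves and keep the best one found."""
    x = start
    for _ in range(steps):
        candidates = [x + random.uniform(-reach, reach) for _ in range(evals_per_step)]
        x = max(candidates + [x], key=rugged)
    return rugged(x)

random.seed(0)
starts = [random.uniform(-10, 10) for _ in range(200)]
slow = sum(climb(s, evals_per_step=2) for s in starts) / len(starts)
fast = sum(climb(s, evals_per_step=50) for s in starts) / len(starts)
print("average height reached, slow climber:", round(slow, 3))
print("average height reached, fast climber:", round(fast, 3))
```

The fast climber ends up higher on average purely because of its larger per-step search budget, which is roughly the asymmetry I have in mind.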
> More importantly, complex problems are usually multivariate optimizations. For example, in Jenkins’s cited example of global warming, we are trying to balance economic growth with environmental impact. Moreover, different parties value and stand to benefit differently from each of the variables (e.g., the developing world vs. the developed world).
Ah, that’s a good point. To rehash:
* There may be multiple, incommensurable goal functions in play.
* Not all participants in a democratic process are acting in genuine collective interest. Actors may try to game the system for their own individual goals and valuations.
I had to google it just now; I think ‘multivariate optimization’ may mean something more basic than what you’re using it for, but I see what you mean by it.
I guess the right thing to do would be to go back to the Hong and Page work and see to what extent their thesis can or cannot be extended in these ways.
This is a place where algorithmic game theory could really come in handy. Cool!
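Here’s a crude Python sketch of the second bullet above (all the valuation numbers are invented): a group picks between two options by summing reported valuations, and a single strategic agent flips the collective outcome just by exaggerating its report. Nothing deep, but it shows why a deliberative mechanism can’t take truthful inputs for granted, which is exactly where algorithmic game theory and mechanism design would come in.

```python
# Toy sketch: naive "sum the reported valuations" aggregation is easy to game.

def collective_choice(reports):
    """Pick the option (0 or 1) with the highest total reported value."""
    totals = [sum(r[i] for r in reports) for i in range(2)]
    return max(range(2), key=lambda i: totals[i])

# True valuations for options A (index 0) and B (index 1).
true_values = [
    [3, 5],   # agent 0 prefers B
    [4, 2],   # agent 1 prefers A
    [4, 3],   # agent 2 prefers A
]
honest = collective_choice(true_values)  # A wins on true totals (11 vs. 10)

# Agent 0 misreports, wildly exaggerating its preference for B.
strategic = [row[:] for row in true_values]
strategic[0] = [0, 100]
gamed = collective_choice(strategic)     # B wins once agent 0 exaggerates

print("honest outcome:    option", "AB"[honest])
print("strategic outcome: option", "AB"[gamed])
```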
Oh, man, this is sweet. Thanks for commenting, Tap!
Glad I could help. I found the Sunstein and Hayek references a bit hand-wavy, but maybe that’s just me. Also see some of Landemore’s follow-up work, which may hint at more of a practical agenda. Yup, I got the conventional usage of ‘multivariate optimization’ wrong, but glad you got the intent.
You might also like this talk, which is at least tangentially related:
http://www.iacr.org/conferences/crypto2009/videos/08_Edward_Felten_-_Alice_and_Bob_go_to_Washington.html
This is super.