Digifesto

Category: economics

Follow the money back to the media

I once cared passionately about the impact of money in politics. I’ve blogged about it here a lot. Long ago I campaigned for fair elections. I went to work at a place where I thought I could work on tools to promote government transparency and electoral reform. This presidential election, I got excited about Rootstrikers. I vocally supported Buddy Roemer. Of course, the impact of any of these groups is totally marginal, and my impact within them even more so. Over the summer, I volunteered at a Super PAC, partly to see if there was any way the system could be improved from the inside. I found nothing.

I give up. I don’t believe there’s a way to change the system. I’m going to stop complaining about it and just accept the fact that democracy is a means of balancing different streams of money and power, full stop.

There is a silver lining to this cloud. The tools for tracking where campaign donations come from are getting better and better. MapLight, for example, seems to do great work. So now we can know which interests are represented in politics. We can sympathize with some and condemn others. We can cheer for our team. Great.

But something that’s often omitted in analysis of money in politics is: where does it go?

So far the most thorough report I’ve been able to find on this (read: first viable Google hit) is this PBS NewsHour segment. It breaks the spending down pretty much as you would expect. The money goes to:

  • Television ads. Since airtime is limited, political ads start airing very early in the campaign season.
  • Political consultants who specialize in election tactics.
  • Paid canvassers, going door-to-door or making phone calls to engage voters.

Interesting that so much of the money flows to media outlets, who presumably raise prices for advertising when candidates are competing for it with deep pockets. So… the mainstream media benefits hugely from boundless campaign spending.

Come to think of it, it must be that the media benefits much more than politicians or donors from the current financing system. Why is that? A campaign is a zero-sum game. Financially backing a candidate is taking a risk on their loss, and in a tight race one is likely to face fierce competition from other donors. But the outlets that candidates compete over for airtime and the consultants who have “mastered” the political system get to absorb all that funding without needing any particular stake in the outcome of the election. (Once in office, can a politician afford to upset the media?)

Who else benefits from campaign spending? Maybe the telecom industry, since all the political messaging has to run over it.

Maybe this analysis has something to do with why generating political momentum around campaign finance reform is such a grueling uphill battle: the more centralized and powerful a media outlet, the more it has to gain from expensive campaign battles. It can play gatekeeper and sell passage to the highest bidder.

Taking it one step further: since the media, through its selection of news items, can heavily influence voters’ perception of candidates, it is in their power to calibrate their coverage in a way that necessitates further spending by candidates.

Suppose a candidate is popular enough to win an election by a landslide. It would be in the interests of media outlets to start portraying that candidate badly, highlighting their gaffes or declaring them to be weak or whatever else, to force the candidate to spend money on advertising to reshape the public perception of them.

What a racket.

Why federally funded software should be open source

Recently, open access to government funded research has gained attention and traction. Britain and Europe have both announced that they will make the research they fund open access. In the United States, a community-driven effort has pushed a White House petition to the Obama administration for a similar policy. We may be experiencing a sea change.

Perhaps on the coattails of this movement, Open Source for America has launched a petition asking for a similar policy regarding federally funded software development: share all government-developed software under an open source license.

This is a really good idea.

Unfortunately, software development and government IT procurement are so poorly understood that this is not likely to excite those who aren’t somehow directly affected by the issue. That is too bad, because every American stands to benefit from this sort of change. That makes it important for those of us who do understand to act.

I’ll try to illustrate why this is important with a story, or really a template of a story. This is a story told in countless cases of government software procurement:

ACRNM, a federal agency, has realized that its database management system and its user interface have not been updated since the late 90’s, because building them the last time was such a headache. The system never really worked the way they wanted it to, and the vendor who built it for them has since vanished off the face of the earth. Desperate and beleaguered, ACRNM finally gets the budget together to build a new system and puts out a bid.

Vendors that have navigated the prerequisite bureaucratic maze flock to this bid, knowing victory will be lucrative. Among them is FUBAR Enterprise Solutions. They know that whatever they build, they have a revenue stream for life. Not only does ACRNM have an enormous internal incentive to declare the new system a success to justify their budget, but they also have nobody to turn to for help with their software when it inevitably fails but FUBAR. FUBAR can continue extorting ACRNM for cash until ACRNM gives up, and the cycle continues.

What is wrong with this picture? Let’s count the problems:

  • FUBAR has ACRNM by the (pardon me, there’s really no other way to put this) balls. The term is vendor lock-in. The moment ACRNM installs the system, FUBAR becomes a parasite on the government, leeching taxpayer money. This is because the software is proprietary. No other company is legally allowed to fix or modify FUBAR’s proprietary system, so FUBAR faces no competition and can charge whatever it likes. If the software were open source, ACRNM could turn to other contractors to repair the system, lowering total costs.
  • ACRNM has to do its work with worse software. Remember, this is a government agency whose services we pay for with our taxes. With so much government activity boiling down to bureaucratic information processing, so much innovation in software engineering and design, and so much budgetary pressure, you would think that the federal government would leap at technological innovation. But proprietary contracting causes the government to cripple itself instead.
  • Today, government agencies like ACRNM are wising up and turning to open source solutions. But it’s a slow, slow process. This is partly because FUBAR and its buddy companies, after so many years of this relationship with government, are now an entrenched lobby that will sow Fear, Uncertainty, and Doubt about open source alternatives if they can get away with it. In recent years, as open source has become more mainstream, these companies have begun admitting the viability of open source compatibility and mixed solutions. They see the writing on the wall. They will of course fight an open source purchasing mandate with everything they have.
  • Few governmental problems are unique. If ACRNM is paying for a new custom software solution, there are likely many other agencies–at the federal, state, or local level–with a similar problem. Civic Commons has already jumped on this opportunity by trying to facilitate technology reuse across city governments. If ACRNM invests in an open source solution, then other agencies can seek out that solution and adapt it to their needs, reducing government IT costs overall.
  • As we’ve discussed, open source software creates a competitive market for services. That makes an open source mandate a job creation program. Every new open technology is an opportunity for several small businesses to open. These are businesses that share the fixed costs of market entry and add value through technology consulting and custom development. Jobs customizing existing open source solutions can be well paid even with an entry-level programming skill set, and are a good way to build a lasting career in the technology sector. Federal investment in open source software builds our national supply of technology skill faster than proprietary investment does.
  • Lastly, but certainly not least, is the possible reuse of open source technology by the private sector. Just as federally funded research contributes to growth in America’s scientific industry, federal investment in software provides a foundation for stronger tech companies. Openness in both cases expands the impact of the funding.

So, to recap: if this sort of policy passes, the winners are government employees, taxpayers, entry-level workers with a minimum of technical skills, and the tech industry in general. The losers (in the short term) are those existing companies that have the federal government locked into custom proprietary software contracts.

I want to make a point clear: I am talking specifically about new software development in this post. Purchasing licenses for existing proprietary software is a different story.

Brian Carver, professor at UC Berkeley School of Information, has offered this clarification of what an open source mandate could look like:

  1. An unambiguous policy and awareness that all software created by federal employees as part of their job duties is not subject to copyright at all and is born in the public domain, and therefore not subject to any license terms at all, including a FOSS license.
  2. Given 1, the federal government should either just use github/bitbucket or set up a similar repository to share all such federal government software that is in the public domain.
  3. When the federal government contracts with developers for software, there should be an unambiguous policy that all such software must be licensed under a FOSS license unless subject to a specifically-requested exemption (national security, military, etc.)

A central election issue is the size and role of government in the economy. Politicians on the right advocate for smaller government and a strong private sector with competitive markets. Politicians on the left advocate for government’s active investment in the economy.

Proprietary government-developed software is the worst of both worlds: inefficient government spending to create parasitic, uncompetitive companies that don’t invest their technology back into the economy. An open source mandate would give us the best of both worlds: efficient government spending that shrinks government (by easing overhead) while investing in new technology and competitive businesses.

The movement for open access to government funded research is strong and winning victories around the world. Maybe we can do the same for government funded software development.

The Shame or Shine Lotto

Consider the following Massively Multiplayer Online Game:

  • The game is strictly opt in. Nobody is forced to play the game.
  • Upon joining, some set of personal details is tracked and saved by the game. Purchasing data, tax records… hell, legal records, personal messages?
  • Once per day, N players are selected at random and the data available on them are released into the public domain.
  • Members can look up to see whether others are playing the game. In addition to identifying information, they can see what information a player has agreed to have tracked.

It’s the Shame or Shine Lotto! Every day, there is a chance you will be roasted or toasted for the information you’ve agreed to put at risk.
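
For concreteness, here is a minimal sketch of the daily draw in Python. The player names, the tracked data categories, and the value of N are all made up for illustration; the only real mechanic is the uniform random selection.

```python
import random

# Hypothetical roster of opted-in players and the data they have agreed to expose.
players = {
    "alice": ["purchasing data", "tax records"],
    "bob": ["personal messages"],
    "carol": ["purchasing data", "legal records", "personal messages"],
}

N = 1  # number of players exposed per day (illustrative)

def daily_draw(players, n):
    """Select n players uniformly at random and 'release' their tracked data."""
    unlucky = random.sample(list(players), k=min(n, len(players)))
    for name in unlucky:
        print(f"{name}'s data is now public: {players[name]}")
    return unlucky

daily_draw(players, N)
```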

Would you play this game?

Computational Asymmetry

I’ve written a paper with John Chuang about “Computational Asymmetry in Strategic Bayes Networks” to open a conversation about an economic and social issue: computational asymmetry. By this I mean the problem that some agents–people, corporations, nations–have access to more computational power than others.

We know that computational power is a scarce resource. Computing costs money, whether we buy our own hardware or rent it on the cloud. Should we be concerned with how this resource gets distributed in society?

One could argue that the market will lead to an efficient distribution of computing power, just like it leads to an efficient distribution of brown shoes or butter. But that argument only makes sense if computational power is not associated with externalities that would cause systematic market failure.

This isn’t likely. We know that information asymmetry can wreak havoc on market efficiency. Arguably, computational asymmetry is another form of information asymmetry: it allows some parties to get important information faster. Or perhaps a better way to put it is that with more computing power, you can get more knowledge out of the information you already have.

In the paper linked above, we show that in some game-theoretic situations with complex problems, more computationally powerful players can beat their opponents using only their superior silicon. What if organizations use computing power to gain an economic advantage, and then invest their winnings in still more computing power? You can see how this cycle would lead to massive inequality.
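
This is not the Bayes-network model from the paper, just a toy sketch of the first half of that cycle: two gambles with close expected payoffs, and players who estimate those payoffs by Monte Carlo sampling. The payoff and noise parameters are invented; the point is only that a bigger sample budget buys better decisions.

```python
import random

def looks_better(mean_x, mean_y, noise, budget):
    """Estimate both payoffs from `budget` noisy samples each and pick the larger."""
    est_x = sum(random.gauss(mean_x, noise) for _ in range(budget)) / budget
    est_y = sum(random.gauss(mean_y, noise) for _ in range(budget)) / budget
    return est_x > est_y

def accuracy(budget, rounds=2000):
    """How often a player with this sample budget identifies the truly better gamble."""
    good, bad, noise = 1.0, 0.0, 5.0   # invented payoff parameters
    hits = sum(looks_better(good, bad, noise, budget) for _ in range(rounds))
    return hits / rounds

# Players differ only in how many samples they can afford to draw.
for budget in (1, 10, 100, 1000):
    print(f"budget={budget:5d}  correct-choice rate ~ {accuracy(budget):.2f}")
```

If the winner of each round could reinvest its winnings in a larger sample budget, the gap in decision quality would compound over time.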

I don’t think this situation is far fetched. In fact, we may already be living it. Consider that computing power is carried not just by hardware availability, but by software and human capital. What are the most powerful forces in United States politics today? Is it Wall Street, with its bright minds and high-frequency traders? Or Silicon Valley, crunching data and rolling out code? Or technocratic elites in government? President Obama has a large team of software developers available to build whatever data mining tools he needs. Does Mexico have the same skills and tools at its disposal? Does Nigeria? There is asymmetry here. How will this power imbalance manifest itself in twenty years? Fifty years?

Henry Farrell (George Washington University) and Cosma Rohilla Shalizi (Carnegie Mellon/The Santa Fe Institute) have recently put out a great paper about Cognitive Democracy, a political theory that grapples with society’s ability to solve complex problems. Where Hayek maintains that the market will efficiently solve complex economic problems, and Thaler and Sunstein believe that a paternalistic hierarchy can solve problems in a disinterested way, Farrell and Shalizi argue that a radical democracy can solve problems in a way that diffuses unequal power through people’s confrontation with other viewpoints. This requires that open argumentation and deliberation be an effective information-processing mechanism. They advocate for greater experimentation with democratic structures over the Internet, with the goal of eventually re-designing democratic institutions.

I love the concept of cognitive democracy and their approach. However, if their background assumptions are correct then computational asymmetry poses a problem. Politics is the negotiation of adversarial interests. If argumentation is a computational process (which I believe it is), then even a system of governance based on free speech and collective intelligence could be manipulated or overpowered by a computational titan. In such a system, whoever holds the greatest gigahertz gets a bigger piece of the derived social truth. As we plunge into a more computationally directed world, that should give us pause.

Don’t use Venn diagrams like this

Today I saw this whitepaper by Esri about their use of open source software. It’s old, but it still held my attention.

There are several reasons why this paper is interesting. One is that it reflects the trend of companies that once used FUD tactics against open source software now singing a soothing song of compatibilism. It makes an admirable effort to explain the differences between open source software, proprietary software, and open standards to its enterprise client audience. That is the good news.

The bad news is that since this new compatibilism is just bending to market pressure after the rise of successful open source software complements, it lacks an understanding of why the open source development process has caused those market successes. Of course, proprietary companies have good reason to blur these lines, because otherwise they would need to acknowledge the existence of open source substitutes. In Esri’s case, that would mean products like the OpenGeo Suite.

I probably wouldn’t have written this post if it were not for this Venn diagram, which is presented with the caption “A hybrid relationship”:

I don’t think there is a way to interpret this diagram that makes sense. It correctly identifies that Closed Source, Open Source, and Open Standards are different things. But what do the overlapping regions represent? Presumably they are meant to indicate that a system may be both open source and use open standards, or use open standards and be closed, or… be both open and closed?

It’s a subtle point, but the semantics of set containment implied by the Venn diagram really don’t apply here. A system that’s a ‘hybrid’ of closed and open software is not “both” closed and open in the same way that closed software using open standards is “both” closed and open. Rather, the hybrid system is just that, a hybrid, which means its architecture is going to face tradeoffs as different components have different properties.

I don’t think the author of this whitepaper was deliberately trying to obscure this point. More likely, they didn’t know or care about it. That’s a problem, because it’s marketing material like this that clouds the picture about the value of open source. At a pointy-haired managerial level, one can answer the question “why aren’t you using more open source software” with a glib “oh, we’re using a hybrid model, tailored to our needs.” But unless you actually understand what you’re talking about, your technical stack may still be full of buggy and unaccountable software without you even knowing it.

Computation and Economic Risk

I’m excited to be working with Prof. John Chuang this semester on an independent study in what we’re calling “Economics of Data and Computation”. For me, it’s an opportunity to explore the literature in the area and hunt for answers to questions I have about how information and computation shape the economy, and vice versa. As my friend and mentor Eddie Pickle would put it, we are looking for the “physics of data”–what rules does data obey? How can its flows be harnessed for energy and useful work?

A requirement for the study is that I blog weekly about our progress. If these topics interest you, I hope you will stay tuned and engage in conversation.

To get things going, I dipped a toe into computational finance. Since I have no background in this area, I googled it and discovered Peter Forsyth’s aptly titled Introduction to Computational Finance without Agonizing Pain. What I found there surprised me.

The first few chapters of Forsyth’s work involve the pricing of stock options. A stock option is an agreement wherein the option owner has the right, but not the obligation, to buy or sell a stock at a particular price at a future time. These can be valued by imagining them as part of a portfolio with the underlying stock and determining the price at which all the risk in that portfolio is hedged away.

Since stock prices are dynamic, evaluating the price of a stock option requires quick adaptation to change. As a model for changes in stock prices, Forsyth uses geometric Brownian motion, in which a price moves according to a combination of drift and random noise. The accuracy of the estimated value of a stock option then depends on the accuracy of the estimated expected value of this random stock price.

How do you estimate the expected value of a stock price that is subject to a complex stochastic process, such as Brownian motion? Forsyth starts by recommending a Monte Carlo method. This involves running lots of randomized simulations based on the model of stock price fluctuation and averaging the result.

This is great, but there’s a catch: Monte Carlo methods are computationally expensive. Forsyth goes into detail about how to tune the parameters to account for the time it takes these simulations to converge on a result. Basically, the more iterations of simulation, the more accurate the estimate will be.
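
To make this concrete, here is a minimal Monte Carlo sketch in Python (not taken from Forsyth’s notes): it simulates terminal stock prices under risk-neutral geometric Brownian motion and estimates the value of a European put at several sample sizes. All the parameter values are invented for illustration.

```python
import math
import random

def mc_put_price(s0, strike, r, sigma, t, n_paths):
    """Monte Carlo estimate of a European put under geometric Brownian motion.

    Terminal price: S_T = S_0 * exp((r - sigma^2 / 2) * t + sigma * sqrt(t) * Z),
    with Z standard normal (risk-neutral dynamics). Returns (price, standard error).
    """
    discount = math.exp(-r * t)
    payoffs = []
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        s_t = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        payoffs.append(discount * max(strike - s_t, 0.0))  # discounted put payoff
    mean = sum(payoffs) / n_paths
    var = sum((p - mean) ** 2 for p in payoffs) / (n_paths - 1)
    return mean, math.sqrt(var / n_paths)

# Invented parameters: spot 100, strike 100, 5% rate, 20% volatility, 1 year to expiry.
for n in (1_000, 10_000, 100_000):
    price, stderr = mc_put_price(100.0, 100.0, 0.05, 0.2, 1.0, n)
    print(f"n = {n:7d}   put price ~ {price:.3f} +/- {stderr:.3f}")
```

The reported standard error shrinks roughly like 1/sqrt(n), so each additional digit of precision costs about a hundred times more simulation.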

This is a very promising early result for us, because it suggests a link between computational power and economic risk. Even when all the parameters of the model are known, deriving useful results from the model requires computation. So we can in principle derive a price of computation (or, at least, of iterations of Monte Carlo simulation) as a function of the risk aversion of the stock option trader.
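
A back-of-the-envelope way to make that link explicit (my gloss, not a calculation from Forsyth): if the discounted payoff has standard deviation $\sigma_P$, the Monte Carlo standard error after $N$ simulations is roughly

$$\text{s.e.} \approx \frac{\sigma_P}{\sqrt{N}} \le \varepsilon \quad\Longrightarrow\quad N \gtrsim \left(\frac{\sigma_P}{\varepsilon}\right)^2,$$

so the number of simulations, and hence the computing bill, grows like $1/\varepsilon^2$ as the trader’s error tolerance $\varepsilon$ tightens.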

Is that counter-intuitive? This would be consistent with some findings that high-frequency trading reduces market volatility. It also suggests a possible economic relationship between finance, insurance, and the cloud computing market.

One question I’d like to look into is to what extent computational power can be seen as a strategic advantage over adversaries–for example, in a stock trading situation–and what the limits of that power are. At some point, the effects of computation are limited by the amount of data one has to work with. But too much data without the computational means to process it is a waste.

See where this is going? I’m interested in hearing your thoughts.