Digifesto

Category: politics

Bostrom and Habermas: technical and political moralities, and the God’s eye view

An intriguing chapter that follows naturally from Nick Bostrom’s core argument is his discussion of machine ethics writ large. He asks: suppose one could install ethical principles into an omnipotent machine, trusting it with the future of humanity. What principles should we install?

What Bostrom accomplishes by positing his Superintelligence (which begins with something simply smarter than humans, and evolves over the course of the book into something that takes over the galaxy) is a return to what has been called “the God’s eye view”. Philosophers once attempted to define truth and morality according to the perspective of an omnipotent–often both transcendent and immanent–god. Through the scope of his work, Bostrom has recovered some of these old themes. He does this not only through his discussion of Superintelligence (and positing its existence in other solar systems already) but also through his simulation arguments.

The way I see it, one thing I am doing by challenging the idea that an intelligence explosion results in a superintelligent singleton is problematizing this recovery of the God’s Eye view. If your future world is governed by many sovereign intelligent systems instead of just one, then ethics are something that have to emerge from political reality. There is something irreducibly difficult about interacting with other intelligences, and it’s from this difficulty that we get values, not the other way around. This sort of thinking is much more like Habermas’s mature ethical philosophy.

I’ve written about how to apply Habermas to the design of networked publics that mediate political interactions between citizens. What I built and offer as toy example in that paper, @TheTweetserve, is simplistic but intended just as a proof of concept.

As I continue to read Bostrom, I expect a convergence on principles. “Coherent extrapolated volition” sounds a lot like a democratic governance structure with elected experts at first pass. The question of how to design a governance structure or institution that leverages artificial intelligence appropriately while legitimately serving its users motivates my dissertation research. My research so far has only scratched the surface of this problem.

the state and the household in Chinese antiquity

It’s worthwhile in comparison with Arendt’s discussion of Athenian democracy to consider the ancient Chinese alternative. In Alfred Huang’s commentary on the I Ching, we find this passage:

The ancient sages always applied the principle of managing a household to governing a country. In their view, a country was simply a big household. With the spirit of sincerity and mutual love, one is able to create a harmonious situation anywhere, in any circumstance. In his Analects, Confucius says,

From the loving example of one household,
A whole state becomes loving.
From the courteous manner of one household,
A whole state becomes courteous.

Comparing the history of Europe and the rise of capitalistic bureaucracy with the history of China, where bureaucracy is much older, is interesting. I have comparatively little knowledge of the latter, but it is often said that China does not have the same emphasis on individualism that you find in the West. Security is considered much more important than Freedom.

The reminder that the democratic values proposed by Arendt and Horkheimer are culturally situated is an important one, especially as Horkheimer claims that free burghers are capable of producing art that expresses universal needs.

developing a nuanced view on transparency

I’m a little late to the party, but I think I may at last be developing a nuanced view on transparency. This is a personal breakthrough about the importance of privacy that I owe largely to the education I’m getting at Berkeley’s School of Information.

When I was an undergrad, I was also a student activist around campaign finance reform. Money in politics was the root of all evil. We were told by our older, wiser activist mentors that we were supposed to lay the groundwork for our policy recommendation and then wait for journalists to expose a scandal. That way we could move in with reform.

Then I worked on projects involving open source, open government, open data, open science, etc. The goal of those activities is to make things more open/transparent.

My ideas about transparency as a political, organizational, and personal issue originated in those experiences with those movements and tactics.

There is a “radically open” wing of these movements which thinks that everything should be open. This position has been debunked, primarily by pointing out that less privileged groups often need privacy for reasons that more privileged advocates of openness have trouble understanding. Classic cases include women who are trying to evade stalkers.

This has been expanded into a general critique of “big data” practices. Data is collected from people who are less powerful than the people who process that data and act on it. There have been calls to make data processing practices more transparent to prevent discrimination.

A conclusion I have found it easy to draw until relatively recently is: ok, this is not so hard. What’s important is that we guarantee privacy for those with less power, and enforce transparency on those with more power so that they can be held accountable. Let’s call this “openness for accountability.” Proponents of this view are in my opinion very well-intended, motivated by values like justice, democracy, and equity. This tends to be the perspective of many journalists and open government types especially.

Openness for accountability is not a nuanced view on transparency.

Here are some examples of cases where an openness for accountability view can go wrong:

  • Arguably, the “Gawker Stalker” platform for reporting the location of celebrities was justified by an ‘openness for accountability’ logic. Jimmy Kimmel’s browbeating of Emily Gould indicates how this can be a problem. Celebrity status is a form of power, but it also raises one’s level of risk, because a small percentage of the population will, for unfathomable reasons, go crazy and threaten or even attack people. There is a vicious cycle here: if one is perceived to be powerful, then people will feel more comfortable exposing and attacking that person, which increases their celebrity, increasing their perceived power.
  • There are good reasons to be concerned about stereotypes and representation of underprivileged groups. There are also cases where members of those groups do things that conform to those stereotypes. When these are behaviors that are ethically questionable or manipulative, it’s often important organizationally for somebody to know about them and act on them. But transparency about that information would feed the stereotypes that are being socially combated on a larger scale for equity reasons.
  • Members of powerful groups can have aesthetic taste and senses of humor that are offensive or even triggering to less powerful groups. More generally, different social groups will have different and sometimes mutually offensive senses of humor. A certain amount of public effort goes into regulating “good taste” and that is fine. But also, as is well known, art that is in good taste is often bland and fails to probe the depths of the human condition. Understanding the depths of the human condition is important for everybody but especially for powerful people who have to take more responsibility for other humans.
  • This one is based on anecdotal information from a close friend: one reason why Congress is so dysfunctional now is that it is so much more transparent. That transparency means that politicians have to be more wary of how they act so that they don’t alienate their constituencies. But bipartisan negotiation is exactly the sort of thing that alienates partisan constituencies.

If you asked me maybe two years ago, I wouldn’t have been able to come up with these cases. That was partly because of my positionality in society. Though I am a very privileged man, I still perceived myself as an outsider to important systems of power. I wanted to know more about what was going on inside important organizations and was frustrated by my lack of access to it. I was very idealistic about wanting a more fair society.

Now I am getting older, reading more, experiencing more. As I mature, people are trusting me with more sensitive information, and I am beginning to anticipate the kinds of positions I may have later in my career. I have begun to see how my best intentions for making the world a better place are at odds with my earlier belief in openness for accountability.

I’m not sure what to do with this realization. I put a lot of thought into my political beliefs and for a long time they have been oriented around ideas of transparency, openness, and equity. Now I’m starting to see the social necessity of power that maintains its privacy, unaccountable to the public. I’m starting to see how “Public Relations” is important work. A lot of what I had a kneejerk reaction against now makes more sense.

I am in many ways a slow learner. These ideas are not meant to impress anybody. I’m not a privacy scholar or expert. I expect these thoughts are obvious to those with less of an ideological background in this sort of thing. I’m writing this here because I see my current role as a graduate student as participating in the education system. Education requires a certain amount of openness because you can’t learn unless you have access to information and people who are willing to teach you from their experience, especially their mistakes and revisions.

I am also perhaps writing this now because, who knows, maybe one day I will be an unaccountable, secretive, powerful old man. Nobody would believe me if I said all this then.

The Facebook ethics problem is a political problem

So much has been said about the Facebook emotion contagion experiment. Perhaps everything has been said.

The problem with everything having been said is that by and large people’s ethical stances seem predetermined by their habitus.

By which I mean: most people don’t really care. People who care about what happens on the Internet care about it in whatever way is determined by their professional orientation on that matter. Obviously, some groups of people benefit from there being fewer socially imposed ethical restrictions on data scientific practice, either in an industrial or academic context. Others benefit from imposing those ethical restrictions, or cultivating public outrage on the matter.

If this is an ethical issue, what system of ethics are we prepared to use to evaluate it?

You could make an argument from, say, a utilitarian perspective, or a deontological perspective, or even a virtue ethics standpoint. Those are classic moves.

But nobody will listen to what a professionalized academic ethicist will say on the matter. If there’s anybody who does rigorous work on this, it’s probably somebody like Luciano Floridi. His work is great, in my opinion. But I haven’t found any other academics who work in, say, policy that embrace his thinking. I’d love to be proven wrong.

But the fact that Floridi does serious work on information ethics is mainly an inconvenience to pundits. Instead we get heat, not light.

If this process resolves into anything like policy change–either governmental or internally at Facebook–it will be because of a process of agonistic politics. “Agonistic” here means fraught with conflicting interests. It may be redundant to modify ‘politics’ with ‘agonistic’, but it makes the point that the moves being made are strategic actions, aimed at gain for one’s person or group, more than they are communicative ones, aimed at consensus.

Because e.g. Facebook keeps public discussion fragmented through its EdgeRank algorithm, which even in its well-documented public version is full of apparent political consequences and flaws, there is no way for conversation within the Facebook platform to result in consensus. It is not, as has been observed by others, a public. In a trivial sense, it’s not a public because the data isn’t public. The data is (sort of) private. That’s not a bad thing. It just means that Facebook shouldn’t be where you go to develop a political consensus that could legitimize power.
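As a side note on mechanism: the publicly documented version of EdgeRank scores each story as a sum, over its edges (likes, comments, shares), of affinity × edge weight × time decay. A toy sketch, with every particular weight and number invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    """One interaction (like, comment, share) attached to a story."""
    affinity: float   # closeness between viewer and edge creator (invented 0..1 scale)
    weight: float     # edge-type weight, e.g. comment > like (invented)
    age_hours: float  # how old the interaction is

def time_decay(age_hours: float, half_life: float = 24.0) -> float:
    # Newer interactions count more; the contribution halves every `half_life` hours.
    return 0.5 ** (age_hours / half_life)

def edgerank(edges: list[Edge]) -> float:
    # Story score = sum of affinity * weight * time decay over all edges.
    return sum(e.affinity * e.weight * time_decay(e.age_hours) for e in edges)

story = [
    Edge(affinity=0.8, weight=2.0, age_hours=1.0),   # close friend commented recently
    Edge(affinity=0.2, weight=1.0, age_hours=48.0),  # acquaintance liked, two days ago
]
print(edgerank(story))
```

Stories are shown in descending score order, which is exactly where the fragmentation comes from: each viewer’s affinities produce a different ranking, so no two users deliberate over the same feed.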

Twitter is a little better for this, because it’s actually public. Facebook has zero reason to care about the public consensus of people on Twitter though, because those people won’t organize a consumer boycott of Facebook, because they can only reach people that use Twitter.

Facebook is a great–perhaps the greatest–example of what Habermas calls the steering media. “Steering,” because it’s how powerful entities steer public opinion. For Habermas, the steering media control language and therefore culture. When ‘mass’ media control language, citizens no longer use language to form collective will.

For individualized ‘social’ media that is arranged into filter bubbles through relevance algorithms, language is similarly controlled. But rather than having just a single commanding voice, you have the opportunity for every voice to be expressed at once. Through homophily effects in network formation, what you’d expect to see are very intense clusters of extreme cultures that see themselves as ‘normal’ and don’t interact outside of their bubble.
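That homophily claim can be illustrated with a minimal simulation (all parameters are invented): agents who form ties mostly with like-minded agents end up in dense same-opinion clusters with few cross-cutting ties.

```python
import random

random.seed(0)
N = 100
opinions = [i % 2 for i in range(N)]  # two 'cultures' of equal size
HOMOPHILY = 0.9  # chance a new tie goes to a like-minded agent (invented)

edges = set()
for i in range(N):
    for _ in range(5):  # each agent forms five ties
        same = random.random() < HOMOPHILY
        candidates = [j for j in range(N)
                      if j != i and (opinions[j] == opinions[i]) == same]
        j = random.choice(candidates)
        edges.add((min(i, j), max(i, j)))

cross = sum(1 for a, b in edges if opinions[a] != opinions[b])
print(f"{cross} of {len(edges)} ties cross the bubble boundary")
```

Even at 90% homophily, only about one tie in ten crosses the boundary, so each cluster mostly sees itself; relevance algorithms plausibly push that figure further down.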

The irony is that the critical left, who should be making these sorts of observations, is itself a bubble within this system of bubbles. Since critical leftism is enacted in commercialized social media which evolves around it, it becomes recuperated in the Situationist sense. Critical outrage is tapped for advertising revenue, which spurs more critical outrage.

The dependence of contemporary criticality on commercial social media for its own diffusion means that, ironically, its practitioners are unable to just quit Facebook like everyone else who has figured out how much Facebook sucks.

It’s not a secret that decentralized communication systems are the solution to this sort of thing. Stanford’s Liberation Tech group captures this ideology rather well. There’s a lot of good work on censorship-resistant systems, distributed messaging systems, etc. For people who are citizens in the free world, many of these alternative communication platforms where we are spared from algorithmic control are very old. Some people still use IRC for chat. I’m a huge fan of mailing lists, myself. Email is the original on-line social media, and one’s inbox is one’s domain. Everyone who is posting their stuff to Facebook could be posting to a WordPress blog. WordPress, by the way, has a lovely user interface these days and keeps adding “social” features like “liking” and “following”. This goes largely unnoticed, which is too bad, because Automattic, the company that runs WordPress, is really not evil at all.

So there are plenty of solutions to Facebook being manipulative and bad for democracy. Those solutions involve getting people off of Facebook and onto alternative platforms. That’s what a consumer boycott is. That’s how you get companies to stop doing bad stuff, if you don’t have regulatory power.

Obviously the real problem is that we don’t have a less politically problematic technology that does everything we want Facebook to do only not the bad stuff. There are a lot of unsolved technical accomplishments to getting that to work. I think I wrote a social media think piece about this once.

I think a really cool project that everybody who cares about this should be working on is designing and executing on building that alternative to Facebook. That’s a huge project. But just think about how great it would be if we could figure out how to fund, design, build, and market that. These are the big questions for political praxis in the 21st century.

metaphorical problems with logical solutions

There are polarizing discourses on the Internet about the following three dichotomies:

  • Public vs. Private (information)
  • (Social) Inclusivity vs. Exclusivity.
  • Open vs. Closed (systems, properties, communities).

Each of these pairings enlists certain metaphors and intuitions. Rarely are they precisely defined.

Due to their intuitive pull, it’s easy to draw certain naive associations. I certainly do. But how do they work together logically?

If we treat each dichotomy as a binary dimension, we get a cube of eight octants. To what extent can we fill in the octants of this cube? Or is that way of modeling it too simplistic?

If privacy is about having contextual control over information flowing out of oneself, then that means that somebody must have the option of closing off some access to their information. To close off access is necessarily to exclude.

PRIVATE => ¬OPEN => ¬INCLUSIVE

But it has been argued that open sociotechnical systems exclude as well by being inhospitable to those with greater need for privacy.

OPEN => ¬PRIVATE => ¬INCLUSIVE

These conditionals limit the kinds of communities that can exist.

PRIVATE  OPEN  INCLUSIVE  POSSIBLE?
   T       T       T          F
   T       T       F          F
   T       F       T          F
   T       F       F          T
   F       T       T          F
   F       T       F          T
   F       F       T          F
   F       F       F          T

Social inclusivity in sociotechnical systems is impossible. There is no such thing as a sociotechnical system that works for everybody.

There are only three kinds of systems: open systems, private systems, or systems that are neither open nor private. We can call the latter leaky systems.
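The table can be checked mechanically. A minimal Python sketch, treating the two conditional chains above as hard constraints:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

def possible(private: bool, open_: bool, inclusive: bool) -> bool:
    # PRIVATE => not OPEN (equivalently, OPEN => not PRIVATE),
    # not OPEN => not INCLUSIVE, and not PRIVATE => not INCLUSIVE.
    return (implies(private, not open_)
            and implies(not open_, not inclusive)
            and implies(not private, not inclusive))

# Enumerate all eight assignments, in the same order as the table.
for row in product([True, False], repeat=3):
    print(row, possible(*row))
```

Only (T, F, F), (F, T, F), and (F, F, F) survive: no assignment with INCLUSIVE true is consistent, and the three possible system types are private, open, and neither (leaky).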

These binary logical relations capture only the limiting properties of these systems. If there has ever been an open system, it is the Internet; but everyone knows that even the Internet isn’t truly open because of access issues.

The difference between a private system and a leaky system is participants’ ability to control how their data escapes the system.

But in this case, systems that we call ‘open’ are often private systems, since participants choose whether or not to put information into the open.

So is the only question whether and when information is disclosed vs. leaked?

the technical political spectrum?

Since the French Revolution, we have had the Left/Right divide in politics.

Probably seven or so years ago, some people got excited about thinking about a two-dimensional political spectrum. There were Economic and Social dimensions. You could be in one of four quadrants: Libertarian, Social Democrat, Totalitarian, or Conservative.

Technology is getting more political and politicized. Have we figured out the spectrum yet?

Because there’s been a lot of noise about their beef, let’s assume as a first pass that O’Reilly and Morozov give us some sense of the space. The problem is that there’s a good chance the “debate” between them is giving off a lot more heat than light, so it’s not clear if there’s a substantive political difference.

Let me try to take a constructive crack at it. I don’t think I’m going to get it right, but I’m curious to know how much this resonates and if others would map things differently.

A two-dimensional representation of the continuum of technical politics, with unscientifically plotted representatives

Some people think that “technology”, by which most people mean technology companies, should be replacing more and more of the functions of government. I think the peer progressives are in this camp, as are the institutionalized nudgers in the UK Conservative party, who would prefer to shrink the state. There’s a fair argument that the “open government” people are trying to shrink government by giving non-state actors the ability to provide services that the state might otherwise provide. Through free flow of information and greater connectivity, we can spur vibrancy in civil society and perfect the market.

Others think that the state needs to have a strong role in regulating technology companies to make sure they don’t abuse their power. There’s a lot of that going around in my department at UC Berkeley. These people see the democratic state as the best representative of citizens’ interests. The FTC and Congress need to help ensure, e.g., people’s privacy. Maybe Morozov is in here somewhere. Monopoly concentrations of technical power are threatening to the public interest; technical platforms should be decentralized and controlled so that politics is not overwhelmed by an illegitimate technocracy.

Another powerful group, the Copyright lobby, is economically threatened by new technology and so wants to restrict its use. Telecom companies would like to effectively meter flow of information. Maybe it’s a stretch, but perhaps we could include the military-industrial complex and its desire to instrument the Web for surveillance purposes in this camp as well. These groups tend to not want technology to change, or to tightly control that technology.

Then there’s the Free Software movement. And Stanford’s Liberation Technology folks, if I understand them correctly. And maybe Anonymous is in here somewhere. Pro-technology, generally skeptical of both state and corporate interests.

So maybe what’s going on is that we have a two-dimensional political space.

In one dimension, we have Centralization versus Decentralization. Richly interconnected platforms managed by an elite with tight arrangements for data sharing, versus a much more loosely connected set of networks where the lines of power are less clear.

In the other dimension, we have Unrestricted versus Controlled. Either the technical organizations should be free to pursue their own interests, or they should be regulated by non- (or at least less) technical political forces, such as the state.

What do you think?

Bay Area Rationalists

There is an interesting thing happening. Let me just try to lay down some facts.

There are a number of organizations in the Bay Area right now up to related things.

  • Machine Intelligence Research Institute (MIRI). Researches the implications of machine intelligence for the world, especially the possibility of super-human general intelligences. Recently changed their name from the Singularity Institute due to the meaninglessness of the term Singularity. I interviewed their Executive Director (CEO?), Luke Muehlhauser, a while back. (I followed up on some of the reasoning there with him here).
  • Center for Applied Rationality (CFAR). Runs workshops training people in rationality, applying cognitive science to life choices. Trying to transition from appearing to pitch a “world-view” to teaching a “martial art” (I’ve sat in on a couple of their meetings). They aim to grow out a large network of people practicing these skills, because they think it will make the world a better place.
  • Leverage Research. A think-tank with an elaborate plan to save the world. Their research puts a lot of emphasis on how to design and market ideologies. I’ve been told that they recently moved to the Bay Area to be closer to CFAR.

Some things seem to connect these groups. First, socially, they all seem to know each other (I just went to a party where a lot of members of each group were represented.) Second, the organizations seem to get the majority of their funding from roughly the same people–Peter Thiel, Luke Nosek, and Jaan Tallinn, all successful tech entrepreneurs turned investors with interest in stuff like transhumanism, the Singularity, and advancing rationality in society. They seem to be employing a considerable number of people to perform research on topics normally ignored in academia and spread an ideology and/or set of epistemic practices. Third, there seems to be a general social affiliation with LessWrong.com; I gather a lot of the members of this community originally networked on that site.

There’s a lot that’s interesting about what’s going on here. A network of startups, research institutions, and training/networking organizations is forming around a cluster of ideas: the psychological and technical advancement of humanity, being smarter, making machines smarter, being rational or making machines to be rational for us. It is as far as I can tell largely off the radar of “mainstream” academic thinking. As a network, it seems concerned with growing by gathering into itself effective and connected people. But it’s not drawing from many established bases of effective and connected people (the academic establishment, the government establishment, the finance establishment, “old boys networks” per se, etc.) but rather is growing its own base of enthusiasts.

I’ve had a lot of conversations with people in this community now. Some, but not all, would compare what they are doing to the starting of a religion. I think that’s pretty accurate based on what I’ve seen so far. Where I’m from, we’ve always talked about Singularitarianism as “eschatology for nerds”. But here we have all these ideas–the Singularity, “catastrophic risk”, the intellectual and ethical demands of “science”, the potential of immortality through transhumanist medicine, etc.–really motivating people to get together, form a community, advance certain practices and investigations, and proselytize.

I guess what I’m saying is: I don’t think it’s just a joke any more. There is actually a religion starting up around this. Granted, I’m in California now and as far as I can tell there are like sixty religions out here I’ve never heard of (I chalk it up to the lack of population density and suburban sprawl). But this one has some monetary and intellectual oomph behind it.

Personally, I find this whole gestalt both attractive and concerning. As you might imagine, diversity is not this group’s strong suit. And its intellectual milieu reflects its isolation from the academic mainstream in that it lacks the kind of checks and balances afforded by multidisciplinary politics. Rather, it appears to have more or less declared the superiority of its methodological and ideological assumptions to its satisfaction and convinced itself that it’s ahead of the game. Maybe that’s true, but in my own experience, that’s not how it really works. (I used to share most of the tenets of this rationalist ideology, but have deliberately exposed myself to a lot of other perspectives since then [I think that taking the Bayesian perspective seriously necessitates taking the search for new information very seriously]. Turns out I used to be wrong about a lot of things.)

So if I were to make a prediction, it would go like this. One of these things is going to happen:

  • This group is going to grow to become a powerful but insulated elite with an expanded network and increasingly esoteric practices. An orthodox cabal seizes power where they are able, and isolates itself into certain functional roles within society with a very high standard of living.
  • In order to remain consistent with its own extraordinarily high epistemic standards, this network starts to assimilate other perspectives and points of view in an inclusive way. In the process, it discovers humility, starts to adapt proactively and in a decentralized way, losing its coherence but perhaps becomes a general influence on the preexisting societal institutions rather than a new one.
  • Hybrid models. Priesthood/lay practitioners. Or denominational schism.

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.

Pharmaceuticals, Patents, and the Media

I had an interesting conversation the other day with a health care professional. He was lamenting the relationship between doctors and pharmaceutical companies.

Pharmaceutical companies, he reported, put a lot of pressure on doctors to prescribe and sell drugs, giving them bonuses if they meet certain sales quotas. This provides an incentive for doctors to prescribe drugs that don’t cure patients. When you sell medicine and not health, why heal a patient?

I’ve long been skeptical about pharmaceutical companies for another reason: as an industry, they seem to be primarily responsible for use and abuse of the patent system. Big Pharma lobbies Congress to keep patent laws strong, but then also games the patent system. For example, it’s common practice for pharmaceutical companies to make trivial changes to a drug formula in order to extend an existing patent past its normal legal term (20 years from filing), a practice known as “evergreening.” The result is a de facto Sonny Bono law for patents.

The justification for strong patents is, of course, the problem of recouping fixed costs from research investment. So goes the argument: Pharmaceutical research is expensive, but pharmaceutical production is cheap. If firms can freely compete in the drug market, new firms will enter after a research phase and prices will drop so the original researching company makes no profit. Without that profit, they will never do the research in the first place.

This is a fair argument as far as it goes.
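The fixed-cost argument can be made concrete with a toy calculation (all numbers invented for illustration):

```python
# Invented numbers, for illustration only.
research_cost = 1_000_000_000  # fixed cost, sunk before the first pill ships
marginal_cost = 0.10           # per-pill production cost once the formula exists
pills_sold = 5_000_000_000

# Under patent protection, the firm can price above marginal cost:
monopoly_price = 1.00
monopoly_profit = pills_sold * (monopoly_price - marginal_cost) - research_cost

# Under free entry, competition drives price down to marginal cost,
# and the research investment is never recouped:
competitive_price = marginal_cost
competitive_profit = pills_sold * (competitive_price - marginal_cost) - research_cost

print(monopoly_profit, competitive_profit)
```

At marginal-cost pricing the researching firm simply eats the fixed cost, which is the standard justification for granting a temporary monopoly.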

However, this economic argument provides a cover story that ignores other major parts of the pharmaceutical industry. Let’s talk about advertising. When Big Pharma puts out a $14 million Super Bowl commercial, is that dipping into the research budget? Or is that part of a larger operating cost endured by these companies–the costs of making their brand a household name, of paying doctors to push prescriptions, and of lobbying for a congenial political climate?

A problem is that when pharmaceutical companies are not just researching and selling drugs but participating as juggernauts in the information economy, it’s very hard to tell how much of their revenue is necessary for innovation and how much funds bullying for unfair market advantage that hurts patients.

There are some possible solutions to this.

We can hope for real innovation in alternative business models for pharmaceutical research. Maybe we can advocate for more public funding through the university system. But that advocacy requires political will, which is difficult to foster without paid lobbyists or grassroots enthusiasm. Grassroots enthusiasm depends on the participation of the media.

Which gets us to the crux. Because if big media companies are cashing out from pharmaceutical advertising, what incentive do they have to disrupt the economic and political might of Big Pharma? It’s like the problem of relying on mass media to support campaign finance reform. Why would they shatter their own gravy train?

Lately, I’ve been seeing political problems more and more as aligned with centralization of the media. (Not a new observation by any means, but here I am late to the party). There are some major bottlenecks to worldwide information flow, and economic forces battle for these like mountain passes on ancient trade routes. Thankfully, this is also an area where there is a terrific amount of innovation and opportunity.

Here’s an interesting research question: how does one design a news dissemination network with mass appeal that both provides attractive content while minimizing potential for abuse by economic interests that are adversarial to the network users?

telecom security and technonationalism

EDIT: This excellent article by Farhad Manjoo has changed my mind or at least my attitude about this issue. Except for the last paragraph, which I believe is a convergent truth.

Reading this report about the U.S. blocking Huawei telecom components in government networks is a bit chilling.

The U.S. invests a lot of money into research of anti-censorship technology that would, among other things, disrupt the autocratic control China maintains over its own network infrastructure.

So from the perspective of the military, telecommunications technology is a battlefield.

I think rightly so. The opacity and centrality of telecommunications and the difficulty of tracing cyber-security breaches make these into risky decisions.

The Economist’s line is:

So what is needed most is an international effort to develop standards governing the integrity and security of telecoms networks. Sadly, the House Intelligence Committee isn’t smart enough to see this.

That’s smug and doesn’t address the real security concerns, or the immense difficulty of establishing international standards on telecom security, let alone guaranteeing the implementation of those standards.

However, an easier solution than waiting for agreement from a standards body would be to develop an open hardware specification for components that meet the security standards, along with a system for verifying compliance. That would encourage a free market in secure telecom hardware, in which Huawei and others could participate if they liked.

Follow the money back to the media

I once cared passionately about the impact of money in politics. I’ve blogged about it here a lot. Long ago I campaigned for fair elections. I went to work at a place where I thought I could work on tools to promote government transparency and electoral reform. This presidential election, I got excited about Rootstrikers. I vocally supported Buddy Roemer. Of course, the impact of any of these groups is totally marginal, and my impact within them even more so. Over the summer, I volunteered at a Super PAC, partly to see if there was any way the system could be improved from the inside. I found nothing.

I give up. I don’t believe there’s a way to change the system. I’m going to stop complaining about it and just accept the fact that democracy is a means of balancing different streams of money and power, full stop.

There is a silver lining to the cloud. The tools for tracking where campaign donations are coming from are getting better and better. MapLight, for example, seems to do great work. So now we can know which interests are represented in politics. We can sympathize with some and condemn others. We can cheer for our team. Great.

But something that’s often omitted in analysis of money in politics is: where does it go?

So far the most thorough report I’ve been able to find on this (read: first viable google hit) was this PBS NewsHour segment. It breaks it down pretty much as you would expect. The money goes to:

  • Television ads. Since airtime is limited, this meant that political ads were aired very early in the campaign.
  • Political consultants who specialize in election tactics.
  • Paid canvassers, knocking door-to-door or making phone calls to engage voters.

Interesting that so much of the money flows to media outlets, who presumably raise advertising prices when deep-pocketed candidates compete for airtime. So… the mainstream media benefits hugely from boundless campaign spending.

Come to think of it, it must be that the media benefits much more than politicians or donors from the current financing system. Why is that? A campaign is a zero-sum game. Financially backing a candidate is taking a risk on their loss, and in a tight race one is likely to face fierce competition from other donors. But the outlets that candidates compete over for airtime and the consultants who have “mastered” the political system get to absorb all that funding without needing any particular stake in the outcome of the election. (Once in office, can a politician afford to upset the media?)
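The zero-sum intuition above can be made concrete with a toy contest model. To be clear, everything here is my own illustrative assumption, not anything from the reporting: I assume a candidate’s win probability is proportional to their ad spend (a simple Tullock-style contest), the prize of office is fixed, and every dollar spent lands with outlets and consultants.

```python
def race(spend_a: float, spend_b: float, prize: float = 1.0) -> dict:
    """Toy two-candidate race: win probability proportional to ad spend,
    a fixed zero-sum prize, and all spending absorbed by the media."""
    total = spend_a + spend_b
    p_a = spend_a / total if total else 0.5
    return {
        "donors_a": p_a * prize - spend_a,        # expected net payoff, side A
        "donors_b": (1 - p_a) * prize - spend_b,  # expected net payoff, side B
        "media": total,                           # revenue to outlets/consultants
    }

# Symmetric escalation: odds stay 50/50 either way, but donors' expected
# payoffs shrink while media revenue quadruples.
low = race(0.1, 0.1)   # media revenue 0.2, each side expects 0.4
high = race(0.4, 0.4)  # media revenue 0.8, each side expects 0.1
```

The point of the sketch is that an arms race between evenly matched donors changes nothing about the outcome and only transfers money to whoever sells the airtime.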

Who else benefits from campaign spending? Maybe the telecom industry, since all the political messaging has to run over it.

Maybe this analysis has something to do with why generating political momentum around campaign finance reform is a grueling uphill battle. The more centralized and powerful a media outlet, the more it has to gain from expensive campaign battles. It can play gatekeeper and sell passage to the highest bidder.

Taking it one step farther: since the media, through its selection of news items, can heavily influence voters’ perception of candidates, it is in their power to calibrate their news in a way that necessitates further spending by candidates.

Suppose a candidate is popular enough to win an election by a landslide. It would be in the interests of media outlets to start portraying that candidate badly, highlighting their gaffes or declaring them to be weak or whatever else, to force the candidate to spend money on advertising to reshape the public perception of them.

What a racket.
