metaphysics and politics

In almost any contemporary discussion of politics, today’s experts will tell you that metaphysics is irrelevant.

This is because we are discouraged today from taking a truly totalizing perspective, meaning a perspective that attempts to comprehend the totality of what’s going on.

Academic work on politics is specialized. It focuses on a specific phenomenon, or issue, or site. This is partly due to the limits of what it is possible to work on responsibly. It is also partly due to the limitations of agency. A grander view of politics isn’t useful for any particular agent; they need only the perspective that best serves them. Blind spots are necessary for agency.

But universalist metaphysics is important for politics precisely because if there is a telos to politics, it is peace, and peace is a condition of the totality.

And while a situated agent may have no need for metaphysics because they are content with the ontology that suits them, situated agents cannot alone make any guarantees of peace.

In order for an agent to act effectively in the interest of total societal conditions, they require an ontology which is not confined by their situation, which will encode those habits of thought necessary for maintaining their situation as such.

What motivates the study of metaphysics, then? One motivation is that it provides freedom from one’s situation.

This freedom is a political accomplishment, and it also has political effects.

Ohm and Post: Privacy as threats, privacy as dignity

I’m reading side by side two widely divergent law review articles about privacy.

One is Robert Post‘s “The Social Foundations of Privacy: Community and Self in Common Law Tort” (1989) (link)

The other is Paul Ohm‘s “Sensitive Information” (2014) (link)

They are very notably different. Post’s article diverges sharply from the intellectual milieu I’m used to. It starts with an exposition of Goffman’s view of the personal self as being constituted by ceremonies and rituals of human relationships. Privacy tort law is, in Post’s view, about repairing tears in the social fabric. The closest thing to this that I have ever encountered is Fingarette’s book on Confucianism.

Ohm’s article is much more recent and is in large part a reaction to the Snowden leaks. It’s an attempt to provide an account of privacy that can limit the problems associated with massive state (and corporate?) data collection. It attempts to provide a legally informed account of what information is sensitive, and then suggests that threat modeling strategies from computer security can be adapted to the privacy context. Privacy can be protected by identifying and mitigating privacy threats.
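Ohm’s suggestion can be made concrete with a toy sketch. The class, the scoring heuristic, and the example threats below are my own illustrative inventions, not anything from the article; the idea is just that enumerating threats and ranking them by likelihood and impact, as security engineers do, carries over to privacy.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyThreat:
    asset: str        # the sensitive information at risk
    adversary: str    # who might misuse it
    likelihood: int   # 1 (rare) .. 5 (near certain)
    impact: int       # 1 (trivial) .. 5 (severe)
    mitigations: list = field(default_factory=list)

    @property
    def risk(self) -> int:
        # a common security heuristic: risk = likelihood x impact
        return self.likelihood * self.impact

def prioritize(threats):
    """Rank threats so the highest-risk ones get mitigated first."""
    return sorted(threats, key=lambda t: t.risk, reverse=True)

threats = [
    PrivacyThreat("location history", "stalker", likelihood=2, impact=5,
                  mitigations=["coarsen location data", "limit retention"]),
    PrivacyThreat("browsing habits", "ad network", likelihood=5, impact=2,
                  mitigations=["block third-party trackers"]),
    PrivacyThreat("health records", "data broker", likelihood=3, impact=5,
                  mitigations=["encrypt at rest", "audit access logs"]),
]

for t in prioritize(threats):
    print(f"{t.risk:>2}  {t.asset} vs. {t.adversary}")
```

The point of the exercise is the ordering: scarce mitigation effort goes to the health-record threat first, even though the ad-network threat is far more likely.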

As I get deeper into the literature on Privacy by Design, and observe how privacy-related situations play out in the world and in my own life, I’m struck by the adaptability and indifference of the social world to shifting technological infrastructural conditions. A minority of scholars and journalists track major changes in it, but for the most part the social fabric adapts. Most people, probably necessarily, have no idea what the technological infrastructure is doing and don’t care to know. It can be coopted, or not, into social ritual.

If the swell of scholarship and other public activity on this topic was the result of surprising revelations or socially disruptive technological innovations, these same discomforts have also created an opportunity for the less technologically focused to reclaim spaces for purely social authority, based on all the classic ways that social power and significance play out.

consequences of scale

Here’s some key things about an economy of control:

  • An economy of control is normally very stable. It’s punctuated equilibrium. But the mean size of disruptive events increases over time, because each of these events can cause a cascade through an ever more complex system.
  • An economy of control has enormous inequalities of all kinds of scale. But there’s a kind of evenness to the inequality from an information theoretic perspective, because of a conservation of entropy principle.
  • An economy of control can be characterized adequately using third order cybernetics. It’s an unsolved research problem to determine whether third order cybernetics is reducible to second order cybernetics. There should totally be a big prize for the first person who figures this out. That prize is a very lucrative hedge fund.
  • An economy of control is, of course, characterized mainly by its titular irony: there is the minimum possible control necessary to maintain the system’s efficiency. It’s a totalizing economic model of freedom maximization.
  • Economics of control is to neoliberalism and computational social science what neoliberalism was to political liberalism and neoclassical economic theory.
  • The economy of control preserves privacy perfectly at equilibrium, barring externalities.
  • The economy of control internalizes all externalities in the long run.
  • In the economy of control, demand is anthropic.
  • In the economy of control, for any belief that needs to be shouted on television, there is a person who sincerely believes it who is willing to get paid to shout it. Journalism is replaced entirely by networks of trusted scholarship.
  • The economy of control is sociologically organized according to two diverging principles: the organizational evolutionary pressure familiar from structural functionalism, and entropy. It draws on Bataille’s theory of the general economy. But it borrows from Ulanowicz the possibility of life overcoming thermodynamics. So to speak.

Just brainstorming here.
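One of those bullets can at least be made concrete. The “conservation of entropy principle” is my own speculation, but the information-theoretic notion of “evenness” behind it is just Shannon entropy, which is maximal when shares are equal and falls as holdings concentrate. The distributions below are invented for illustration.

```python
import math

def shannon_entropy(shares):
    """Entropy (in bits) of a distribution of shares summing to 1.
    Maximal when shares are equal; lower as inequality concentrates."""
    return -sum(p * math.log2(p) for p in shares if p > 0)

equal   = [0.25, 0.25, 0.25, 0.25]   # perfectly even among 4 agents
unequal = [0.85, 0.05, 0.05, 0.05]   # one agent holds almost everything

print(shannon_entropy(equal))    # 2.0 bits, the maximum for 4 agents
print(shannon_entropy(unequal))  # well below 2.0
```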

what if computers don’t actually control anything important?

I’ve written a lot (here, informally) on the subject of computational control of society. I’m not the only one, of course. There has in the past few years been a growing fear that one day artificial intelligence might control everything. I’ve argued that this is akin to older fears that, under capitalism, instrumentality would run amok.

Recently, thinking a little more seriously about what’s implied by an economy of control, I’ve been coming around to a quite different conclusion. What if the general tendency of these algorithmic systems is not the enslavement of humanity but rather the opening up of freedom and opportunity? This is not a critical attitude and might be seen as a simple shilling for industrial powers, so let me pose the point slightly more controversially. What if the result of these systems is to provide so much freedom and opportunity that it undermines the structure that makes social action significant? The “control” of these systems could just be the result of our being exposed, at last, to our individual insignificance in the face of each other.

As a foil, I’ll refer again to Frank Pasquale’s The Black Box Society, which I’ve begun to read again at the prompting of Pasquale himself. It is a rare and wonderful thing for the author of a book you’ve written rude things about to write you and tell you you’ve misrepresented the work. So often I assume nobody’s actually reading what I write, making this a lonely vocation indeed. Now I know that at least somebody gives a damn.

In Chapter 3, Pasquale writes:

“The power to include, exclude, and rank [in search results] is the power to ensure which public impressions become permanent and which remain fleeting. That is why search services, social and not, are ‘must-have’ properties for advertisers as well as users. As such, they have made very deep inroads indeed into the sphere of cultural, economic, and political influence that was once dominated by broadcast networks, radio stations, and newspapers. But their dominance is so complete, and their technology so complex, that they have escaped pressures for transparency and accountability that kept traditional media answerable to the public.”

As a continuation of the “technics-out-of-control” meme, there’s an intuitive thrust to this argument. But looking at the literal meaning of the sentences, none of it is actually true!

Let’s look at some of the reasons why these claims are false:

  • There are multiple competing search engines, and switching costs are very low. There are Google and Bing and DuckDuckGo, but there are also more specialized search engines for particular kinds of things. Literally every branded shopping website has a search engine that includes only what it chooses to include. This market pressure for search drives search engines generally to provide people with the answers they are looking for.
  • While there is a certain amount of curation that goes into search results, the famous early ranking logic which made large scale search possible used mainly data created as part of the content itself (hyperlinks in the case of Google’s PageRank) or usage (engagement in the case of Facebook’s EdgeRank). To the extent that these algorithms have changed, much of it has been because they have had to cave to public pressure, in the form of market pressure. Many of these changes are based on dynamic socially created data as well (such as spam flagging). Far from being manipulated by a secret powerful force, search engine results are always a dynamic, social accomplishment that is a reflection of the public.
  • Alternative media forms, such as broadcast radio, print journalism, cable television, storefront advertising, and so on, still exist and have an influence over people’s decisions. No single digital technology ensures anything! A new restaurant that opens up in a neighborhood is free to gain a local reputation in the old-fashioned way. And then these same systems for ranking and search incentivize the discovery of these local gems by design. The information economy doesn’t waste opportunities like this!
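The early ranking logic mentioned above can be sketched directly. This is a toy power-iteration version of PageRank, not Google’s production algorithm, and the example web graph is invented; it shows the sense in which rank is “data created as part of the content itself”: a page’s score is entirely a function of who links to whom.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: rank pages by the link structure among them.
    `links` maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # every page keeps a small "teleportation" share of rank
        new = {p: (1 - damping) / n for p in pages}
        for p, outlinks in links.items():
            if not outlinks:              # dangling page: spread rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                         # split rank among outgoing links
                for q in outlinks:
                    new[q] += damping * rank[p] / len(outlinks)
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(web)
# "c" collects links from nearly everywhere, so it ranks highest
print(max(ranks, key=ranks.get))
```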

So what’s the problem? If algorithms aren’t controlling society, but rather are facilitating its self-awareness, maybe these kinds of polemics are just way off base.

economy of control

We call it a “crisis” when the predictions of our trusted elites are violated in one way or another. We expect, for good reason, things to more or less continue as they are. They’ve evolved to be this way, haven’t they? The older the institution, the more robust to change it must be.

I’ve gotten comfortable in my short life with the global institutions that appeared to be the apex of societal organization. Under these conditions, I found James Beniger‘s work to be particularly appealing, as it predicts the growth of information processing apparatuses (some combination of information worker and information technology) as formerly independent components of society integrate. I’m of the class of people that benefits from this kind of centralization of control, so I was happy to believe that this was an inevitable outcome according to physical law.

Now I’m not so sure.

I am not sure I’ve really changed my mind fundamentally. This extreme Beniger view is too much like Nick Bostrom’s superintelligence argument in form, and I’ve already thought hard about why that argument is not good. That reasoning stopped at the point of noting how superintelligence “takeoff” is limited by data collection. But I did not go to the next and probably more important step, which is the problem of aleatoric uncertainty in a world with multiple agents. We’re far more likely to get into a situation with multi-polar large intelligences that are themselves fraught with principal-agent problems, because that’s actually the status quo.

I’ve been prodded to revisit The Black Box Society, which I’ve dealt with inadequately. Its beefier chapters deal with a lot of the specific economic and regulatory recent history of the information economy of the United States, which is a good complement to Beniger and a good resource for the study of competing intelligences within a single economy, though I find this data a bit clouded by the polemical writing.

“Economy” is the key word here. Pure, Arendtian politics and technics have not blended easily, but what they’ve turned into is a self-regulatory system with structure and agency. More than that, the structure is for sale, and so is the agency. What is interesting about the information economy, and I guess I’m trying to coin a phrase here, is that it is an economy of control. The “good” being produced, sold, and bought is control.

There’s a lot of interesting research about information goods. But I’ve never heard of a “control good”. But this is what we are talking about when we talk about software, data collection, managerial labor, and the conflicts and compromises that it creates.

I have a few intuitions about where this goes, but not as many as I’d like. I think this is because the economy of control is quite messy and hard to reason about.

habitus and citizenship

Just a quick thought… So in Bourdieu’s Science of Science and Reflexivity, he describes the habitus of the scientist. Being a scientist demands a certain adherence to the rules of the scientific game, certain training, etc. He winds up constructing a sociological explanation for the epistemic authority of science. The rules of the game are the conditions for objectivity.

When I was working on a now defunct dissertation, I was comparing this formulation of science with a formulation of democracy and the way it depends on publics. Habermasian publics, Fraserian publics, you get the idea. Within this theory, what was once a robust theory of collective rationality as the basis for democracy has deteriorated under what might be broadly construed as “postmodern” critiques of this rationality. One could argue that pluralistic multiculturalism, not collective reason, became the primary ideology for American democracy in the past eight years.

Pretty sure this backfired with e.g. the Alt-Right.

So what now? I propose that those interested in functioning democracy reconsider the habitus of citizenship and how it can be maintained through the education system and other civic institutions. It’s a bit old-school. But if the Alt-Right wanted a reversion to historical authoritarian forms of Western governance, we may be getting there. Suppose history moves in a spiral. It might be best to try to move forward, not back.

Loving Tetlock’s Superforecasting: The Art and Science of Prediction

I was a big fan of Philip Tetlock’s Expert Political Judgment (EPJ). I read it thoroughly; in fact a book review of it was my first academic publication. It was very influential on me.

EPJ is a book that is troubling to many political experts because it basically says that most so-called political expertise is bogus and that what isn’t bogus is fairly limited. It makes this argument with far more meticulous data collection and argumentation than I am able to do justice to here. I found it completely persuasive and inspiring. It wasn’t until I got to Berkeley that I met people who had vivid negative emotional reactions to this work. They seem mainly to have been political experts who resented having their expertise assessed in terms of its predictive power.

Superforecasting: The Art and Science of Prediction (2016) is a much more accessible book that summarizes the main points from EPJ and then discusses the results of Tetlock’s Good Judgment Project, which was his answer to an IARPA challenge in forecasting political events.

Much of the book is an interesting history of the United States Intelligence Community (IC) and the way its attitudes towards political forecasting have evolved. In particular, the shock of the failure of the predictions around Weapons of Mass Destruction that led to the Iraq War was a direct cause of IARPA’s interest in forecasting and their funding of the Good Judgment Project despite the possibility that the project’s results would be politically challenging. IARPA comes out looking like a very interesting and intellectually honest organization solving real problems for the people of the United States.

Reading this has been timely for me because: (a) I’m now doing what could be broadly construed as “cybersecurity” work, professionally, (b) my funding is coming from U.S. military and intelligence organizations, and (c) the relationship between U.S. intelligence organizations and cybersecurity has been in the news a lot lately in a very politicized way because of the DNC hacking aftermath.

Since so much of Tetlock’s work is really just about applying mathematical statistics to the psychological and sociological problem of developing teams of forecasters, I see the root of it as the same mathematical theory one would use for any scientific inference. Cybersecurity research, to the extent that it uses sound scientific principles (which it must, since it’s all about the interaction between society, scientifically designed technology, and risk), is grounded in these same principles. And at its best the U.S. intelligence community lives up to this logic in its public service.
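The mathematical core of that forecaster assessment is scoring probability judgments against outcomes. Tetlock’s projects use Brier scores for this; below is a minimal sketch of the squared-error version for binary events, with invented example forecasts.

```python
def brier_score(forecasts):
    """Squared-error score for probability forecasts of binary events.
    Each item is (predicted_probability, outcome), where outcome is 1
    if the event happened and 0 if it did not. Lower is better; a
    maximally uncommitted 50% forecast always earns 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

confident_and_right = [(0.9, 1), (0.1, 0), (0.8, 1)]
hedging = [(0.5, 1), (0.5, 0), (0.5, 1)]

print(brier_score(confident_and_right))  # about 0.02
print(brier_score(hedging))              # 0.25
```

The score rewards exactly what the books argue for: forecasters who commit to probabilities and are well calibrated beat those who hedge at 50% on everything.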

The needs of the intelligence community with respect to cybersecurity can be summed up in one word: rationality. Tetlock’s work is a wonderful empirical study in rationality that’s a must-read for anybody interested in cybersecurity policy today.

notes about natural gas and energy policy

I’m interested in energy (in the sense of the economy and ecology of energy as it powers society) but know nothing about it.

I feel like the last time I really paid attention to energy, it was still a question of oil (and its industrial analog, Big Oil) and alternative, renewable energy.

But now energy production in the U.S. has shifted from oil to natural gas. I asked a friend why, and I’ve filled in a big gap in my understanding of What’s Going On. What I filled it in with might be wrong, but here’s what I have so far:

  • At some point natural gas became a viable alternative to oil because the energy companies discovered it was cheaper to collect natural gas than to drill for oil.
  • The use of natural gas for energy has less of a carbon footprint than oil does. That makes it environmentally friendly relative to the current regulatory environment.
  • The problem (there must be a problem) is that the natural gas collection process has lots of downsides. These downsides are mainly because the process is very messy, involving smashing into some pocket of natural gas under lots of rock and trying to collect the good stuff. Lots of weird gases go everywhere. That has downsides, including:
    • Making the areas where this is happening unlivable. Because it’s harder to breathe? Because the water can be set on fire? It’s terrible.
    • It releases a lot of methane into the environment, which may be as bad as, if not worse than, carbon for climate change. Who knows how bad it really is? Unclear.
  • Here’s the point (totally unconfirmed): The shift from oil to natural gas as an energy source has been partly due to a public awareness and regulatory gap about the side effects. There’s now lots of political pressure and science around carbon. But methane? I thought that was an energy source (because of Mad Max Beyond Thunderdome). I guess I was wrong.
  • Meanwhile, OPEC and non-OPEC have teamed up to restrict oil sales to hike up oil prices. Sucks for energy consumers, but that’s actually good for the environment.
  • Also, in response to the apparent reversal of U.S. federal interest in renewable energy, philanthropy-plus-market has stepped in with Breakthrough Energy Ventures. Since venture capital investors with technical backgrounds, unlike the U.S. government, tend to be long on science, this is just great.
  • So what: The critical focus for those interested in the environment now should be on the environmental and social impact of natural gas production, as oil has been taken care of and heavy hitters are backing sustainable energy in a way that will fix the problem if it can truly be fixed. We just have to not boil the oceans and poison all the children before they can get to it.
If that doesn’t work, I guess at the end of the day, there’s always pigs.
