## Category: economics

### Artisanal production, productivity and automation, economic engines

I’m continuing to read Moretti’s The New Geography of Jobs (2012). Except for the occasional gushing over the revolutionariness of some new payments startup (a symptom, no doubt, of being so close to Silicon Valley), it continues to be an enlightening and measured read on economic change.

There are a number of useful arguments and ideas in the book, probably sourced from economics more generally, which I’ll outline here with my comments:

Local, artisanal production can never substitute for large-scale manufacturing. Moretti argues that while local artisanal production has cropped up in many places in the United States, it will never replace the work done by large-scale production. Why? Because by definition, local artisanal production is (a) geographically local, and therefore unable to scale beyond a certain region, and (b) defined in part by its uniqueness, differentiating it from mainstream products. In other words, if your local small-batch shop grows to the point where it competes with large-scale production, it is no longer local and small-batch.

Interestingly, this argument about production scaling echoes work on empirical heavy tail distributions in social and economic phenomena. A world where small-scale production constituted most of production would have an exponentially bounded distribution of firm productivity. The world doesn’t look that way, and so we have very very big companies, and many many small companies, and they coexist.

Higher labor productivity in a sector results in both a richer society and fewer jobs in that sector. Productivity is how much a person’s labor produces. The idea here is that when labor productivity increases, the firm that hires those laborers needs fewer people working to satisfy its demand. But those people will be paid more, because their labor is worth more to the firm.

I think Moretti is hand-waving a bit when he argues that a society only gets richer through increased labor productivity. I don’t follow it exactly.

But I do find it interesting that Moretti calls “increases in productivity” what many others would call “automation”. Several related phenomena are viewed critically in the popular discourse on job automation: more automation causes people to lose jobs; more automation causes some people to get richer (they are higher paid); this means there is a perhaps pernicious link between automation and inequality. One aspect of this is that automation is good for capitalists. But another aspect of this is that automation is good for lucky laborers whose productivity and earnings increase as a result of automation. It’s a more nuanced story than one that is only about job loss.

The economic engine of an economy is what brings in money; it need not be the largest sector of the economy. The idea here is that for a particular (local) economy, the economic engine of that economy will be what pulls in money from outside. Moretti argues that the economic engine must be a “trade sector”, meaning a sector that trades (sells) its goods beyond its borders. It is the workers in this trade-sector economic engine who then spend their income on the “non-trade” sector of local services, which includes schoolteachers, hairdressers, personal trainers, doctors, lawyers, etc. Moretti’s book is largely about how the innovation sector is the new economic engine of many American economies.

One thing that comes to mind reading this point is that not all economic engines are engaged in commercial trade. I’m thinking about Washington, DC, and the surrounding area; the economic engine there is obviously the federal government. Another strange kind of economic engine is the top-tier research university, like Carnegie Mellon or UC Berkeley. Top-tier research universities, unlike many other forms of educational institutions, are constantly selling their degrees to foreign students, which means they can serve as an economic engine.

Overall, Moretti’s book is a useful guide to economic geography, one that clarifies the economic causes of a number of political tensions that are often discussed in a more heated and, to me, less useful way.

References

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

### Appealing economic determinism (Moretti)

I’ve started reading Enrico Moretti’s The New Geography of Jobs and am finding it very clear and persuasive (though I’m not far in).

Moretti is taking up the major theme of What The Hell Is Happening To The United States, which is being addressed by so many from different angles. But whereas many writers seem to have an agenda–e.g., Noble advocating for political reform regulating algorithms; Deneen arguing for a return to traditional community values in some sense; etc.–or to focus on particularly scandalous or dramatic aspects of changing political winds–such as Gilman’s work on plutocratic insurgency and collapsing racial liberalism–Moretti is doing economic geography, showing how long-term economic trends are shaping the distribution of prosperity within the U.S.

From the introduction, it looks like there are a few notable points.

The first is about what Moretti calls the Great Divergence, which has been going on since the 1980s. This is the decline of U.S. manufacturing as jobs moved from Detroit, Michigan to Shenzhen, Guangdong, paired with the rise of an innovation economy where the U.S. takes the lead in high-tech and creative work. The needs of the high-tech industry–high-skilled workers, who may often be educated immigrants–change the demographics of the innovation hubs and result in the political polarization we’re seeing on the national stage. This is an account of the economic base determining the cultural superstructure which is so fraught right now, and exactly what I was getting at with yesterday’s rant about the politics of business.

The second major point Moretti makes, which is probably understated in more polemical accounts of the U.S. political economy, is the multiplier effect of high-skilled jobs in innovation hubs. Moretti argues that every high-paid innovation job (like software engineer or scientist) results in four other jobs in the same city. These other jobs are in service sectors that are by their nature local and cannot be exported. The consequence is that the innovation economy does not, contrary to its greatest skeptics, only benefit the wealthy minority of innovators to the ruin of the working class. However, it does move the location of working class prosperity into the same urban centers where the innovating class is.

This gives one explanation for why the backlash against Obama-era economic policies was such a shock to the coastal elites. In the locations where the “winners” of the innovation economy were gathered, there was also growth in the service economy which by objective measures increased the prosperity of the working class in those cities. The problem was the neglected working class in those other locations, who felt left behind and struck back against the changes.

A consequence of this line of reasoning is that arguments about increasing political tribalism are really a red herring. Social tribes on the Internet are a consequence, not a cause, of divisions that come from material conditions of economy and geography.

Moretti even appears to have a constructive solution in mind. He argues that there are “three Americas”: the rich innovation hubs, the poor former manufacturing centers, and mid-sized cities that have not yet gone either way. His recipe for economic success in these middle cities is attracting high-skilled workers who are a kind of keystone species for prosperous economic ecosystems.

References

Deneen, Patrick J. Why Liberalism Failed. Yale University Press, 2018.

Gilman, Nils. “The twin insurgency.” American Interest 15 (2014): 3-11.

Gilman, Nils. “The Collapse of Racial Liberalism.” The American Interest (2018).

Moretti, Enrico. The new geography of jobs. Houghton Mifflin Harcourt, 2012.

Noble, Safiya Umoja. Algorithms of Oppression: How search engines reinforce racism. NYU Press, 2018.

### The politics of business

This post is an attempt to articulate something that’s on the tip of my tongue, so bear with me.

Fraser has made the point that the politics of recognition and the politics of distribution are not the same. In her view, the conflict in the U.S. over recognition (i.e., of women, racial minorities, LGBTQ people, etc. on the progressive side, and of the straight white male ‘majority’ on the reactionary side) has overshadowed the politics of distribution, which has been at a steady neoliberal status quo for some time.

First, it’s worth pointing out that in between these two political contests is a politics of representation, which may be more to the point. The claim here is that if a particular group is represented within a powerful organization–say, the government, or within a company with a lot of power such as a major financial institution or tech company–then that organization will use its power in a way that is responsive to the needs of the represented group.

Politics of representation are the link between recognition and distribution: the idea is that if “we” recognize a certain group, then through democratic or social processes members of that group will be lifted into positions of representative power, which then will lead to (re)distribution towards that group in the longer run.

I believe this is the implicit theory of social change at the heart of a lot of democratish movements today. It’s an interesting theory in part because it doesn’t seem to have any room for “good governance”, or broadly beneficial governance, or technocracy. There’s nothing deliberative about this form of democracy; it’s a tribal war-by-other-means. It is also not clear that this theory of social change based on demographic representation is any more effective at changing distributional outcomes than a pure politics of recognition, which we have reason to believe is ineffectual.

Who do we expect to have power over distributional outcomes in our (and probably other) democracies? Realistically, it’s corporations. Businesses make up most of the economic activity; businesses have the profits needed to reinvest in lobbying power for the sake of economic capture. So maybe if what we’re interested in is the politics of distribution, we should stop trying to parse out the politics of recognition, with its deep dark rabbit hole of identity politics, historical injustice, and Jungian archetypal conflicts over the implications of the long arc of sexual maturity. These conversations do not seem to be getting anyone anywhere! It is, perhaps, fake news: not because the contents are fake, but because the idea that these issues are new is fake. They are perhaps just a lot of old issues stirred to conflagration by the feedback loops between social and traditional media.

If we are interested in the politics of distribution, let’s talk about something else, something that we all know must be more relevant, when it comes down to it, than the politics of recognition. I’m talking about the politics of business.

We have a rather complex economy with many competing business interests. Let’s assume that one of the things these businesses compete over is regulatory capture–their ability to influence economic policy in their favor.

When academics talk about neoliberal economic policy, they are often talking about those policies that benefit the financial sector and big businesses. But these big businesses are not always in agreement.

Take, for example, the steel tariff proposed by the Trump administration. There is no blunter example of a policy that benefits some business interests–U.S. steelmakers–and not others–U.S. manufacturers of steel-based products.

It’s important from the perspective of electoral politics to recognize that the U.S. steelmakers are a particular set of people who live in particular voting districts with certain demographics. That’s because, probably, if I am a U.S. steelworker, I will vote in the interest of my industry. Just as if I am a U.S.-based urban information worker at an Internet company, I will vote in the interest of my company, which in my case would mean supporting net neutrality. If I worked for AT&T, I would vote against net neutrality, which today means I would vote Republican.

It’s an interesting fact that AT&T employs a lot more people than Google and (I believe this is the case, though I don’t know where to look up the data) that they are much more geographically distributed than Google because, you know, wires and towers and such. Which means that AT&T employees will be drawn from more rural, less diverse areas, giving them an additional allegiance to Republican identity politics.

You must see what I’m getting at. Assume that the main driver of U.S. politics is not popular will (which nobody really believes, right?) and is in fact corporate interests (which basically everybody admits, right?). In that case the politics of recognition will not be determining anything; rather it will be a symptom, an epiphenomenon, of an underlying politics of business. Immigration of high-talent foreigners then becomes a proxy issue for the economic battle between coastal tech companies and, say, old energy companies which have a much less geographically mobile labor base. Nationalism, or multinationalism, becomes a function of trade relations rather than a driving economic force in its own right. (Hence, Russia remains an enemy of the U.S. largely because Putin paid off all its debt to the U.S. and doesn’t owe it any money, unlike many of its other allies around the world.)

I would very much like to devote myself better to the understanding of politics of business because, as I’ve indicated, I think the politics of recognition have become a huge distraction.

### What happens if we lose the prior for sparse representations?

Noting this nice paper by Giannone et al., “Economic predictions with big data: The illusion of sparsity.” It concludes:

Summing up, strong prior beliefs favouring low-dimensional models appear to be necessary to support sparse representations. In most cases, the idea that the data are informative enough to identify sparse predictive models might be an illusion.

This is refreshing honesty.

In my experience, most disciplinary social sciences have a strong prior bias towards pithy explanatory theses. In a normal social science paper, what you want is a single research question, a single hypothesis. This thesis expresses the narrative of the paper. It’s what makes the paper compelling.

In mathematical model fitting, the term for such a simple hypothesis is a sparse predictive model. These models will have relatively few independent variables predicting the dependent variable. In machine learning, this sparsity is often accomplished by a regularization step. While generally well-motivated, regularization for sparsity can be done for reasons that are more aesthetic or reflect a stronger prior than is warranted.
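To make the point concrete: here is a minimal numpy sketch (not from the Giannone et al. paper; the data and the regularization weight are made up for illustration) of how an L1 "sparsity prior" zeroes out coefficients that plain least squares would keep. The `ista` routine is a standard iterative shrinkage-thresholding method for the lasso.

```python
import numpy as np

# Illustrative data (assumed, not from the paper): ten predictors,
# only two of which truly matter.
rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:2] = [3.0, -2.0]               # genuinely sparse signal
y = X @ true_beta + rng.normal(scale=0.5, size=n)

def ista(X, y, lam, steps=2000):
    """Iterative shrinkage-thresholding: gradient step, then soft-threshold.

    Minimizes 0.5 * ||X b - y||^2 + lam * ||b||_1.
    """
    beta = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2         # Lipschitz constant of the gradient
    for _ in range(steps):
        grad = X.T @ (X @ beta - y)
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

sparse_fit = ista(X, y, lam=50.0)         # strong sparsity prior
dense_fit = ista(X, y, lam=0.0)           # no regularization: ordinary least squares

print("nonzero coefficients, lam=50:", int(np.sum(np.abs(sparse_fit) > 1e-6)))
print("nonzero coefficients, lam=0: ", int(np.sum(np.abs(dense_fit) > 1e-6)))
```

The dense fit keeps all ten coefficients (the noise variables get small but nonzero weights); the regularized fit keeps roughly the two real ones. The paper's point, in these terms, is that the data alone rarely tell you that `lam` should be large; that choice is a prior belief.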

A consequence of this preference for sparsity, in my opinion, is the prevalence of literature debating power law versus log normal explanations of heavy tail distributions. (See this note on disorganized heavy tail distributions.) A dense model in a log linear regression will predict a heavy tailed dependent variable without great error. But it will be unsatisfying from the perspective of scientific explanation.

What seems to be an open question in the social sciences today is whether the culture of social science will change as a result of the robust statistical analysis of new data sets. As I’ve argued elsewhere (Benthall, 2016), if the culture does change, it will mean that narrative explanation will be less highly valued.

References

Benthall, Sebastian. “Philosophy of computational social science.” Cosmos and History: The Journal of Natural and Social Philosophy 12.2 (2016): 13-30.

Giannone, Domenico, Michele Lenza, and Giorgio E. Primiceri. “Economic predictions with big data: The illusion of sparsity.” (2017).

### The social value of an actually existing alternative — BLOCKCHAIN BLOCKCHAIN BLOCKCHAIN

When people get excited about something, they will often talk about it in hyperbolic terms. Some people will actually believe what they say, though this seems to drop off with age. The emotionally energetic framing of the point can be both factually wrong and contain a kernel of truth.

This general truth applies to hype about particular technologies. Does it apply to blockchain technologies and cryptocurrencies? Sure it does!

Blockchain boosters have offered utopian or radical visions about what this technology can achieve. We should be skeptical about these visions prima facie precisely in proportion to how utopian and radical they are. But that doesn’t mean that this technology isn’t accomplishing anything new or interesting.

Here is a summary of some dialectics around blockchain technology:

A: “Blockchains allow for fully decentralized, distributed, and anonymous applications. These can operate outside of the control of the law, and that’s exciting because it’s a new frontier of options!”

B1: “Blockchain technology isn’t really decentralized, distributed, or anonymous. It’s centralizing its own power into the hands of the few, and meanwhile traditional institutions have the power to crush it. Their anarchist mentality is naive and short-sighted.”

B2: “Blockchain technology enthusiasts will soon discover that they actually want all the legal institutions they designed their systems to escape. Their anarchist mentality is naive and short-sighted.”

While B1 and B2 are both critical of blockchain technology and see A as naive, it’s important to realize that they believe A is naive for contradictory reasons. B1 is arguing that it does not accomplish what it was purportedly designed to do, which is provide a foundation of distributed, autonomous systems that’s free from internal and external tyranny. B2 is arguing that nobody actually wants to be free of these kinds of tyrannies.

These are conservative attitudes that we would expect from conservative (in the sense of conservation, or “inhibiting change”) voices in society. These are probably demographically different people from person A. And this makes all the difference.

If what differentiates people is their relationship to different kinds of social institutions or capital (in the Bourdieusian sense), then it would be natural for some people to be incumbents in old institutions who would argue for their preservation and others to be willing to “exit” older institutions and join new ones. However imperfect the affordances of blockchain technology may be, they are different affordances than those of other technologies, and so they promise the possibility of new kinds of institutions with an alternative information and communications substrate.

It may well be that the pioneers in the new substrate will find that they have political problems of their own and need to reinvent some of the societal controls that they were escaping. But the difference will be that in the old system, the pioneers were relative outsiders, whereas in the new system, they will be incumbents.

The social value of blockchain technology therefore comes in two waves. The first wave is the value it provides to early adopters who use it instead of other institutions that were failing them. These people have made the choice to invest in something new because the old options were not good enough for them. We can celebrate their successes as people who have invented quite literally a new form of social capital, quite possibly a new form of wealth. When a small group of people creates a lot of new wealth, this almost immediately creates a lot of resentment from others who did not get in on it.

But there’s a secondary social value to the creation of actually existing alternative institutions and forms of capital (which are in a sense the same thing). This is the value of competition. The marginal person, who can choose how to invest themselves, can exit from one failing institution to a fresh new one if they believe it’s worth the risk. When an alternative increases the amount of exit potential in society, that increases the competitive pressure on institutions to perform. That should benefit even those with low mobility.

So, in conclusion, blockchain technology is good because it increases institutional competition. At the end of the day that reduces the power of entrenched incumbents to collect rents and gives everybody else more flexibility.

### technological determinism and economic determinism

If you are trying to explain society, politics, the history of the world, whatever, it’s a good idea to narrow the scope of what you are talking about to just the most important parts because there is literally only so much you could ever possibly say. Life is short. A principled way of choosing what to focus on is to discuss only those parts that are most significant in the sense that they played the most causally determinative role in the events in question. By widely accepted interventionist theories of causation, what makes something causally determinative of something else is the fact that in a counterfactual world in which the cause was made to be somehow different, the effect would have been different as well.

Since we basically never observe a counterfactual history, this leaves a wide open debate over the general theoretical principles one would use to predict the significance of certain phenomena over others.

One point of view on this is called technological determinism. It is the view that, for a given social phenomenon, what’s really most determinative of it is the technological substrate of it. Engineers-turned-thought-leaders love technological determinism because of course it implies that really the engineers shape society, because they are creating the technology.

Technological determinism is absolutely despised by academic social scientists who have to deal with technology and its role in society. I have a hard time understanding why. Sometimes it is framed as an objection to technologists who avoid responsibility for social problems they create because it’s the technology that did it, not them. But such a childish tactic really doesn’t seem to be what’s at stake if you’re critiquing technological determinism. Another way of framing the problem is to say that the way a technology affects society in San Francisco is going to be different from how it affects society in Beijing. Society has its role in a dialectic.

So there is a grand debate of “politics” versus “technology” which recurs everywhere. This debate is rather one-sided, since it is almost entirely constituted by political scientists or sociologists complaining that the engineers aren’t paying enough attention to politics, seeing how their work has political causes and effects. Meanwhile, engineers-turned-thought-leaders just keep spouting off whatever nonsense comes to their head and they do just fine because, unlike the social scientist critics, engineers-turned-thought-leaders tend to be rich. That’s why they are thought leaders: because their company was wildly successful.

What I find interesting is that economic determinism is never part of this conversation. It seems patently obvious that economics drives both politics and technology. You can be anywhere on the political spectrum and hold this view. Once it was called “dialectical materialism”, and it was the foundation for left-wing politics for generations.

So what has happened? Here are a few possible explanations.

The first explanation is that if you’re an economic determinist, maybe you are smart enough to do something more productive with your time than get into debates about whether technology or politics is more important. You would be doing something more productive, like starting a business to develop a technology that manipulates political opinion to favor the deregulation of your business. Or trying to get a socialist elected so the government will pay off student debts.

A second explanation is… actually, that’s it. That’s the only reason I can think of. Maybe there’s another one?

### The Data Processing Inequality and bounded rationality

I have long harbored the hunch that information theory, in the classic Shannon sense, and social theory are deeply linked. It has proven to be very difficult to find an audience for this point of view or an opportunity to work on it seriously. Shannon’s information theory is widely respected in engineering disciplines; many social theorists who are unfamiliar with it are loath to admit that something from engineering should carry essential insights for their own field. Meanwhile, engineers are rarely interested in modeling social systems.

I’ve recently discovered an opportunity to work on this problem through my dissertation work, which is about privacy engineering. Privacy is a subtle social concept but also one that has been rigorously formalized. I’m working on formal privacy theory now and have been reminded of a theorem from information theory: the Data Processing Inequality. What strikes me about this theorem is that it captures a point that comes up again and again in social and political problems, though it’s a point that’s almost never addressed head on.

The Data Processing Inequality (DPI) states that for three random variables, X, Y, and Z, arranged in a Markov chain such that $X \rightarrow Y \rightarrow Z$, then $I(X,Z) \leq I(X,Y)$, where $I$ stands for mutual information. Mutual information is a measure of how much two random variables carry information about each other. If $I(X,Y) = 0$, that means the variables are independent. $I(X,Y) \geq 0$ always–that’s just a mathematical fact about how it’s defined.
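The inequality is easy to check numerically. The sketch below (all distributions here are randomly generated for illustration; nothing about it is specific to any application) builds a discrete Markov chain $X \rightarrow Y \rightarrow Z$ from a random prior and two random transition matrices, then computes the two mutual informations:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(joint):
    """I(A;B) in bits, computed from a joint distribution table over (A, B)."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal P(A)
    pb = joint.sum(axis=0, keepdims=True)   # marginal P(B)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])))

def random_stochastic(rows, cols):
    """A random row-stochastic matrix, i.e. a random conditional distribution."""
    m = rng.random((rows, cols))
    return m / m.sum(axis=1, keepdims=True)

px = rng.dirichlet(np.ones(4))              # P(X)
T_xy = random_stochastic(4, 4)              # P(Y|X)
T_yz = random_stochastic(4, 4)              # P(Z|Y)

p_xy = px[:, None] * T_xy                   # joint P(X, Y)
p_xz = p_xy @ T_yz                          # joint P(X, Z), marginalizing over Y

i_xy = mutual_information(p_xy)
i_xz = mutual_information(p_xz)
print(f"I(X;Y) = {i_xy:.4f} bits, I(X;Z) = {i_xz:.4f} bits")
```

However the transition matrices are chosen, $I(X,Z)$ never exceeds $I(X,Y)$: processing cannot create information about $X$ that $Y$ didn’t already carry.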

The implications of this for psychology, social theory, and artificial intelligence are I think rather profound. It provides a way of thinking about bounded rationality in a simple and generalizable way–something I’ve been struggling to figure out for a long time.

Suppose that there’s a big world out there, $W$, and there’s an organism, or a person, or a sociotechnical organization within it, $Y$. The world is big and complex, which implies that it has a lot of informational entropy, $H(W)$. Through whatever sensory apparatus is available to $Y$, it acquires some kind of internal sensory state. Because this organism is much smaller than the world, its entropy is much lower. There are many fewer possible states that the organism can be in, relative to the number of states of the world: $H(W) \gg H(Y)$. This in turn bounds the mutual information between the organism and the world: $I(W,Y) \leq H(Y)$.

Now let’s suppose the actions that the organism takes, $Z$, depend only on its internal state. It is an agent, reacting to its environment. Whatever these actions are, they can only be as calibrated to the world as the agent’s capacity to absorb the world’s information allows. I.e., $I(W,Z) \leq H(Y) \ll H(W)$. The implication is that the more limited the mental capacity of the organism, the more its actions will be approximately independent of the state of the world that precedes it.
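A toy numerical version of the bound (the world size and the even/odd sensor rule below are arbitrary assumptions, chosen only to make the arithmetic clean): a uniform 32-state world carries $H(W) = 5$ bits, but an organism with only two internal states can extract at most $H(Y) = 1$ bit of it, no matter how its sensor works.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(joint):
    """I(A;B) in bits from a joint distribution table over (A, B)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])))

n_world = 32
pw = np.full(n_world, 1.0 / n_world)          # uniform world: H(W) = 5 bits

# Assumed two-state sensor: the organism only registers whether the
# world's state index is even or odd.
sensor = (np.arange(n_world) % 2 == 0).astype(int)
p_wy = np.zeros((n_world, 2))
p_wy[np.arange(n_world), sensor] = pw          # joint P(W, Y)
py = p_wy.sum(axis=0)                          # marginal P(Y)

h_w, h_y = entropy(pw), entropy(py)
i_wy = mutual_information(p_wy)
print(f"H(W) = {h_w:.2f} bits, H(Y) = {h_y:.2f} bits, I(W;Y) = {i_wy:.2f} bits")
```

Here the sensor is as informative as a two-state organism can be, and it still saturates at $I(W,Y) = H(Y) = 1$ bit: the other four bits of the world are invisible to the agent, and so to its actions.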

There are a lot of interesting implications of this for social theory. Here are a few cases that come to mind.

I’ve written quite a bit here (blog links) and here (arXiv) about Bostrom’s superintelligence argument and why I’m generally not concerned with the prospect of an artificial intelligence taking over the world. My argument is that there are limits to how much an algorithm can improve itself, and these limits put a stop to exponential intelligence explosions. I’ve been criticized on the grounds that I don’t specify what the limits are, and that if the limits are high enough then maybe relative superintelligence is possible. The Data Processing Inequality gives us another tool for estimating the bounds of an intelligence based on the range of physical states it can possibly be in. How calibrated can a hegemonic agent be to the complexity of the world? It depends on the capacity of that agent to absorb information about the world; that can be measured in information entropy.

A related case is a rendering of Scott’s Seeing Like a State arguments. Why is it that “high modernist” governments failed to successfully control society through scientific intervention? One reason is that the complexity of the system they were trying to manage vastly outsized the complexity of the centralized control mechanisms. Centralized control was very blunt, causing many social problems. Arguably, behavioral targeting and big data centers today equip controlling organizations with more informational capacity (more entropy), but they still get it wrong sometimes, causing privacy violations, because they can’t model the entirety of the messy world we’re in.

The Data Processing Inequality is also helpful for explaining why the world is so messy. There are a lot of different agents in the world, and each one only has so much bandwidth for taking in information. This means that most agents are acting almost independently from each other. The guiding principle of society isn’t signal, it’s noise. That explains why there are so many disorganized heavy tail distributions in social phenomena.

Importantly, if we let the world at any time slice be informed by the actions of many agents acting nearly independently from each other in the slice before, then that increases the entropy of the world. This increases the challenge for any particular agent to develop an effective controlling strategy. For this reason, we would expect the world to get more out of control the more intelligent agents are on average. The popularity of the personal computer perhaps introduced a lot more entropy into the world, distributed in an agent-by-agent way. Moreover, powerful controlling data centers may increase the world’s entropy, rather than reducing it. So even if, for example, Amazon were to try to take over the world, the existence of Baidu would be a major obstacle to its plans.

There are a lot of assumptions built into these informal arguments and I’m not wedded to any of them. But my point here is that information theory provides useful tools for thinking about agents in a complex world. There’s potential for using it for modeling sociotechnical systems and their limitations.

### Net neutrality

What do I think of net neutrality?

I think ending it is bad for my personal self-interest. I am, economically, a part of the newer tech economy of software and data. I believe this economy benefits from net neutrality. I am also somebody who loves The Web as a consumer. I’ve grown up with it. It’s shaped my values.

From a broader perspective, I think ending net neutrality will revitalize U.S. telecom and give it leverage over the ‘tech giants’–Google, Facebook, Apple, Amazon—that have been rewarded by net neutrality policies. Telecom is a platform, but it had been turned into a utility platform. Now it can be a full-featured market player. This gives it an opportunity for platform envelopment, moving into the markets of other companies and bundling them in with ISP services.

Since this will introduce competition into a market whose other players are very well-established, it could actually be good for consumers, because it breaks up an oligopoly in the services that are most user-facing. On the other hand, since ISPs are monopolists in most places, we could also expect the quality of Internet-based services to deteriorate in general.

What this might encourage is a proliferation of alternatives to cable ISPs, which would be interesting. Ending net neutrality creates a much larger design space in products that provision network access. Mobile companies are in this space already. So we could see this regulation as a move in favor of the cell phone companies, not just the ISPs. This too could draw surplus away from the big four.

This probably means the end of “The Web”. But we’d already seen the end of “The Web” with the proliferation of apps as a replacement for Internet browsing. IoT provides yet another alternative to “The Web”. I loved the Web as a free, creative place where everyone could make their own website about their cat. It had a great moment. But it’s safe to say that it isn’t what it used to be. In fifteen years it may be that most people no longer visit web sites. They just use connected devices and apps. Ending net neutrality means that the connectivity necessary for these services can be bundled in with the service itself. In the long run, that should be good for consumers and may even open the possibility of market entry for new firms.

In the long run, I’m not sure “The Web” is that important. Maybe it was a beautiful disruptive moment that will never happen again. Or maybe, if there were many more kinds of alternatives, “The Web” would return to being the quirky, radically free and interesting thing it was before it got so mainstream. Remember when The Web was just The Well (which is still around), and only people who were really curious about it bothered to use it? I don’t, because that was well before my time. But it’s possible that the Internet in its browse-happy form will become something like that again.

I hadn’t really thought about net neutrality very much before, to be honest. Maybe there are some good rebuttals to this argument. I’d love to hear them! But for now, I think I’m willing to give the shuttering of net neutrality a shot.

Nils Gilman argues that the future of the world is wide open because neoliberalism has been discredited. So what’s the future going to look like?

Given that neoliberalism is for the most part an economic vision, and that competing theories have often also been economic visions (when they have not been political or theological theories), a compelling futurist approach is to look out for new thinking about economics. The three articles below have recently taught me something new about economics:

Dani Rodrik. “Rescuing Economics from Neoliberalism”, Boston Review. (link)

This article makes the case that the association frequently made between economics as a social science and neoliberalism as an ideology is overdrawn. Of course, probably the majority of economists are not neoliberals. Rodrik is defending a view of economics that keeps its options open. I think he overstates the point with the claim, “Good economists know that the correct answer to any question in economics is: it depends.” This is simply incorrect, if questions have their assumptions bracketed well enough. But since Rodrik’s rhetorical point appears to be that economists should not be dogmatists, he can be forgiven this overstatement.

As an aside, there is something compelling but also dangerous in the view that a social science can provide at best narrowly tailored insights into specific phenomena. These kinds of ‘sciences’ wind up being unaccountable, because the specificity of particular events prevents the repeated testing of the theories used to explain them. There is a risk of too much nuance, which is akin to the statistical concept of overfitting.
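The overfitting analogy can be made concrete. In the sketch below (the linear underlying law and noise level are my own illustrative assumptions), a theory with one parameter per observed event explains the observations essentially perfectly, which is precisely why those observations cannot hold it accountable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Six noisy observations of a simple underlying law, y = 2x + noise.
x_train = np.linspace(0.0, 1.0, 6)
y_train = 2 * x_train + rng.normal(scale=0.1, size=x_train.size)

# A maximally "nuanced" theory: a degree-5 polynomial, one parameter per
# observation, so it can account for every observed event exactly.
nuanced = np.polyfit(x_train, y_train, deg=5)
# A parsimonious theory: a straight line.
simple = np.polyfit(x_train, y_train, deg=1)

# The nuanced theory has essentially zero error on the events it was
# built to explain...
train_residual = np.abs(np.polyval(nuanced, x_train) - y_train).max()

# ...so only events it has not seen can discipline it. On fresh draws
# from the same law, the nuanced theory's wiggles can cost it dearly,
# while the simple theory tracks the underlying regularity.
x_new = rng.uniform(0.0, 1.0, size=1000)
err_nuanced = np.abs(np.polyval(nuanced, x_new) - 2 * x_new).mean()
err_simple = np.abs(np.polyval(simple, x_new) - 2 * x_new).mean()
```

A theory flexible enough to explain every particular event in retrospect is, by the same token, untestable by those events; that is the unaccountability worry in statistical dress.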

A different kind of article is:

Seth Ackerman. “The Disruptors,” Jacobin. (link)

An interview with J.W. Mason in the smart socialist magazine Jacobin, which had the honor of a shout-out in Matt Levine’s popular “Money Stuff” Bloomberg column. One of the interesting topics it raises is whether mutual funds, in which many people invest in a fund that then owns a wide portfolio of stocks, are in a sense socialist and anti-competitive, because shareholders no longer have an interest in seeing competition in the market.

This is original thinking, and the endorsement by Levine is an indication that it’s not a crazy thing to consider even for the seasoned practical economists in the financial sector. My hunch at this point in life is that if you want to understand the economy, you have to understand finance, because financiers are the ones whose job it is to profit from their understanding of the economy. As a corollary, I don’t really understand the economy because I don’t have a great grasp of the financial sector. Maybe one day that will change.
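The anti-competitive worry can be illustrated with a textbook duopoly (the linear demand curve and numbers below are my own stylized assumptions, not from the interview):

```python
# Toy duopoly: linear demand p = a - Q, zero marginal cost.
a = 12.0

# With separate owners, each firm best-responds to the other (the
# standard Cournot equilibrium): q_i = a/3.
q_cournot = a / 3
price_cournot = a - 2 * q_cournot
profit_cournot_total = 2 * price_cournot * q_cournot  # 32.0

# A shareholder who owns both firms (e.g. via a broad fund) wants joint
# profit maximized instead: the monopoly outcome, Q = a/2.
q_monopoly_total = a / 2
price_monopoly = a - q_monopoly_total
profit_monopoly_total = price_monopoly * q_monopoly_total  # 36.0

# The diversified shareholder earns more when the firms don't compete,
# while consumers get less output at a higher price.
assert profit_monopoly_total > profit_cournot_total
assert q_monopoly_total < 2 * q_cournot
assert price_monopoly > price_cournot
```

The point of the sketch is only directional: a fully diversified owner internalizes the profit each firm’s competition destroys at the other, so its preferred outcome is the collusive one.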

Speaking of expertise being enhanced by having ‘skin in the game’, the third article is:

Nassim Nicholas Taleb. “Inequality and Skin in the Game,” Medium. (link)

I haven’t read a lot of Taleb, though I acknowledge he’s a noteworthy and important thinker. This article confirmed for me the reputation of his style. It was also a strikingly fresh look at the economics of inequality, capturing a few of the important things mainstream opinion overlooks about inequality, namely:

• Comparing people at different life stages is a mistake when analyzing inequality in a population.
• A lot of the cause of inequality is randomness (as opposed to fixed population categories), and this kind of inequality is inevitable.
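The second point is easy to demonstrate in simulation: agents who are identical ex ante, subject only to i.i.d. multiplicative luck, end up with a strikingly unequal snapshot distribution. A sketch, with my own made-up shock values:

```python
import random

random.seed(0)

# 10,000 ex-ante identical agents, all starting with the same wealth.
agents = [1.0] * 10_000

# Each period, every agent's wealth takes the same *kind* of random
# multiplicative shock: a -20% or +30% return, equally likely.
for _ in range(50):
    agents = [w * random.choice([0.8, 1.3]) for w in agents]

# Snapshot inequality after 50 periods, despite identical agents and
# identical ex-ante prospects.
agents.sort()
total = sum(agents)
top_1pct_share = sum(agents[-100:]) / total
bottom_50pct_share = sum(agents[:5_000]) / total
```

With multiplicative shocks the wealth distribution is approximately log-normal, so a snapshot shows a fat right tail (the top 1% holding a large share, the bottom half holding very little) generated by pure luck, with no fixed population categories at all.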

He’s got a theory of what kinds of inequality people resent versus what they tolerate, which is a fine theory. It would be nice to see some empirical validation of it. He writes about the relationship between ergodicity and inequality, which is interesting. He is scornful of Piketty and everyone who was impressed by Piketty’s argument, which comes off as unfriendly.

Much of what Taleb writes about the need to understand the economy through a richer understanding of probability and statistics strikes me as correct. If it is indeed the case that mainstream economics has not caught up to this, there is an opportunity here!

### Personal data property rights as privacy solution. Re: Cofone, 2017

I’m working my way through Ignacio Cofone’s “The Dynamic Effect of Information Privacy Law” (2017) (link), which is an economic analysis of privacy. Without doing justice to the full scope of the article, it must be said that it is a thorough discussion of previous information economics literature and a good case for property rights over personal data. In a nutshell, one can say that markets are good for efficient and socially desirable resource allocation, but they are only good at this when there are well-crafted property rights to the goods involved. Personal data, like intellectual property, is a tricky case because of the idiosyncrasies of data: it has zero-ish marginal cost, it seems to get more valuable when it’s aggregated, and so on. But like intellectual property, we should expect under normal economic rationality assumptions that the more we protect the property rights of those who create personal data, the more they will be incentivized to create it.

I am very warm to this kind of argument because I feel there’s been a dearth of good information economics in my own education, though I have been looking for it! I do believe there are economic laws and that they are relevant for public policy, let alone business strategy.

I have concerns about Cofone’s argument specifically, which are these:

First, I have my doubts that seeing data as a good in any classical economic sense is going to work. Ontologically, data is just too weird for a lot of earlier modeling methods. I have been working on a different way of modeling information flow economics that tries to capture how much of what we’re concerned with is information services, not information goods.

My other concern is that Cofone’s argument gives users/data subjects credit for being rational agents, capable of addressing the risks of privacy and acting accordingly. Hoofnagle and Urban (2014) show that this is empirically not the case. In fact, if you take the average person who is not that concerned about their privacy on-line and start telling them facts about how their data is being used by third parties, etc., they start to freak out and get a lot more worried about privacy.

This throws a wrench in the argument that stronger personal data property rights would lead to more personal data creation, therefore (I guess it’s implied) more economic growth. People seem willing to create personal data and give it away, despite actual adverse economic incentives, because cat videos are just so damn appealing. Or something. It may generally be the case that economic modeling is used by information businesses but not information policy people because average users are just so unable to act rationally; it really is a domain better suited to behavioral economics and usability research.

I’m still holding out though. Just because big data subjects are not homo economicus doesn’t mean that an economic analysis of their activity is pointless. It just means we need a more sophisticated economic model, one that takes into account how there are many different classes of user that are differently informed. This kind of economic modeling, and empirically fitting it to data, is within our reach. We have the technology.
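As a sketch of what such a model might look like (the classes, fractions, and awareness parameters below are entirely my own toy assumptions, not Cofone’s or Hoofnagle and Urban’s):

```python
# Users share personal data when their *perceived* benefit exceeds their
# *perceived* privacy cost, but classes of users perceive differently.
benefit = 5.0        # value of the service to every user
true_cost = 8.0      # actual privacy cost of sharing

# Each class: (fraction of the population, share of true cost perceived).
classes = {
    "uninformed": (0.7, 0.2),   # perceives only 20% of the cost
    "informed":   (0.3, 1.0),   # perceives the full cost
}

def sharing_rate(classes):
    """Fraction of the population that chooses to share its data."""
    return sum(frac for frac, awareness in classes.values()
               if benefit > awareness * true_cost)

# Most data gets shared even though sharing is a net loss for everyone:
# the uninformed share (5 > 1.6), the informed do not (5 < 8).
rate = sharing_rate(classes)
```

Even this two-class toy reproduces the Hoofnagle and Urban pattern: observed sharing behavior reflects the distribution of awareness, not a single representative rational agent, and telling the uninformed the facts (raising their awareness toward 1.0) collapses the sharing rate.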

References

Cofone, Ignacio N. “The Dynamic Effect of Information Privacy Law.” Minn. JL Sci. & Tech. 18 (2017): 517.

Hoofnagle, Chris Jay, and Jennifer M. Urban. “Alan Westin’s Privacy Homo Economicus.” Wake Forest Law Review 49 (2014): 261.