Tag: hal varian

“The Microeconomics of Complex Economies”

I’m dipping into The microeconomics of complex economies: Evolutionary, institutional, neoclassical, and complexity perspectives, by Elsner, Heinrich, and Schwardt, all professors at the University of Bremen.

It is a textbook, the kind one would teach a class from. It is interesting because it is self-consciously written as a break from neoclassical microeconomics. According to the authors, this break had been a long time coming, but the last straw was the 2008 financial crisis. This at last, they claim, showed that the neoclassical faith in market equilibrium was leaving something important out.

Meanwhile, “heterodox” economics has been maturing for some time in the economics blogosphere, while complex systems people have been interested in economics since the emergence of the field. What Elsner, Heinrich, and Schwardt appear to be doing with this textbook is providing a template for an undergraduate level course on the subject, legitimizing it as a discipline. They are not alone. They cite Bowles’s Microeconomics as worthy competition.

I have not yet read the chapter of the Elsner, Heinrich, and Schwardt book that covers philosophy of science and its relationship to the validity of economics. From a glance, it looks very well done. But I wanted to note my preliminary opinion on the matter given my recent interest in Shapiro and Varian’s information economics and their claim to be describing ‘laws of economics’ that provide a reliable guide to business strategy.

In brief, I think Shapiro and Varian are right: they do outline laws of economics that provide a reliable guide to business strategy. This is in fact what neoclassical economics is good for.

What neoclassical economics is not always great at is predicting aggregate market behavior in a complex world. It’s not clear if any theory could ever be good at predicting aggregate market behavior in a complex world. It is likely that if there were one, it would be quickly gamed by investors in a way that would render it invalid.

Given vast information asymmetries, it seems the best one could hope for is a theory of the market being able to assimilate the available information and respond wisely. This is the Hayekian view, and it’s not mainstream. It suffers the difficulty that it is hard to empirically verify that a market has performed optimally, given that no one actor, including the person attempting to verify Hayekian economic claims, has all the information to begin with. Meanwhile, it seems that there is no sound a priori reason to believe this is the case. Epstein and Axtell (1996) have some computational models where they test when agents capable of trade wind up in an equilibrium with market-clearing prices, and in their models this happens only under very particular and unrealistic conditions.

That said, predicting aggregate market outcomes is a vastly different problem than providing strategic advice to businesses. This is the point where academic critiques of neoclassical economics miss the mark. Since phenomena concerning supply and demand, pricing and elasticity, competition and industrial organization, and so on are part of the lived reality of somebody working in industry, formalizations of these aspects of economic life can be tested and propagated by many more kinds of people than the phenomena of total market performance. The latter is actionable only for a very rare class of policy-maker or financier.

References

Bowles, S. (2009). Microeconomics: behavior, institutions, and evolution. Princeton University Press.

Elsner, W., Heinrich, T., & Schwardt, H. (2014). The microeconomics of complex economies: Evolutionary, institutional, neoclassical, and complexity perspectives. Academic Press.

Epstein, J. M., & Axtell, R. (1996). Growing artificial societies: Social science from the bottom up. Brookings Institution Press.

Economics of expertise and information services

We have now considered two models of how information affects welfare outcomes.

In the first model, inspired by an argument from Richard Posner, there are many producers (employees, in the specific example, but it could just as well be cars, etc.) and a single consumer. When the consumer knows nothing about the quality of the producers, the consumer gets an average-quality producer and the producers split the expected utility of the consumer’s purchase equally. When the consumer is informed, she benefits and so does the highest-quality producer, to the detriment of the other producers.

In the second example, inspired by Shapiro and Varian’s discussion of price differentiation in the sale of information goods, there is a single producer and many consumers. When the producer knows nothing about the “quality” of the consumers–their willingness to pay–the producer charges all consumers a single profit-maximizing price. This price leaves many customers out of reach of the product, and many others getting a consumer surplus because the product is cheap relative to their demand. When the producer is more informed, they make more profit by selling at personalized prices. This lets the previously unreached customers in on the product at a compellingly low price. It also allows the producer to charge higher prices to willing customers, capturing what was once consumer surplus for themselves.

In both these cases, we have assumed that there is only one kind of good in play. It can vary numerically in quality, which is measured in the same units as cost and utility.

In order to bridge from theory of information goods to theory of information services, we need to take into account a key feature of information services. Consumers buy information when they don’t know what it is they want, exactly. Producers of information services tailor what they provide to the specific needs of the consumers. This is true for information services like search engines but also other forms of expertise like physician’s services, financial advising, and education. It’s notable that these last three domains are subject to data protection laws in the United States (HIPAA, GLBA, and FERPA) respectively, and on-line information services are an area where privacy and data protection are a public concern. By studying the economics of information services and expertise, we may discover what these domains have in common.

Let’s consider just a single consumer and a single producer. The consumer has a utility function $\vec{x} \sim X$ (that is, sampled from random variable $X$) specifying the value she gets for the consumption of each of $m = \vert J \vert$ products. We’ll denote with $x_j$ the utility awarded to the consumer for the consumption of product $j \in J$.

The catch is that the consumer does not know $X$. What they do know is $y \sim Y$, which is correlated with $X$ in some way that is unknown to them. The consumer tells the producer $y$, and the producer’s job is to recommend to them a $j \in J$ that will most benefit them. We’ll assume that the producer is interested in maximizing consumer welfare in good faith because, for example, they are trying to promote their professional reputation, and this is roughly in proportion to customer satisfaction. (Let’s assume they pass the costs of providing the product on to the consumer.)

As in the other cases, let’s consider first the case where the acting party has no useful information about the particular customer. In this case, the producer has to choose their recommendation $\hat j$ based on their knowledge of the underlying probability distribution $X$, i.e.:

$\hat j = \arg\max_{j \in J} E[X_j]$

where $X_j$ is the probability distribution over $x_j$ implied by $X$.

In the other extreme case, the producer has perfect information of the consumer’s utility function. They can pick the truly optimal product:

$\hat j = \arg\max_{j \in J} x_j$

How much better off the consumer is in the second case, as opposed to the first, depends on the specifics of the distribution $X$. Suppose $X_j$ are all independent and identically distributed. Then an ignorant producer would be indifferent to the choice of $\hat j$, leaving the expected outcome for the consumer $E[X_j]$, whereas the higher the number of products $m$ the more $\max_{j \in J} x_j$ will approach the maximum value of $X_j$.
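The i.i.d. case can be made concrete with a small simulation. This is only a sketch under assumptions not in the text: the $X_j$ are taken to be standard normal, so the ignorant producer’s pick earns $E[X_j] = 0$ in expectation, while the informed producer’s payoff $\max_{j \in J} x_j$ grows with the number of products $m$.

```python
import random
import statistics

random.seed(0)

def informed_payoff(m: int, trials: int = 5_000) -> float:
    """Average of max_{j in J} x_j when the x_j are i.i.d. standard normal."""
    return statistics.fmean(
        max(random.gauss(0, 1) for _ in range(m)) for _ in range(trials)
    )

# The ignorant producer's expected payoff is E[X_j] = 0 regardless of m;
# the informed producer's payoff grows with m (roughly like sqrt(2 ln m)).
for m in (1, 10, 100):
    print(m, round(informed_payoff(m), 2))
```

The exact numbers depend on the assumed distribution, but the qualitative point survives any choice: more products make the informed recommendation more valuable relative to the ignorant one.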

In the intermediate cases where the producer knows $y$ which carries partial information about $\vec{x}$, they can choose:

$\hat j = \arg\max_{j \in J} E[X_j \vert y] =$

$\arg\max_{j \in J} \sum x_j P(x_j = X_j \vert y) =$

$\arg\max_{j \in J} \sum x_j P(y \vert x_j = X_j) P(x_j = X_j)$

The precise values of the terms here depend on the distributions $X$ and $Y$. What we can know in general is that the more informative $y$ is about $x_j$, the more the likelihood term $P(y \vert x_j = X_j)$ dominates the prior $P(x_j = X_j)$, and the condition of the consumer improves.

Note that in this model, it is the likelihood function $P(y \vert x_j = X_j)$ that is the special information that the producer has. Knowledge of how evidence (a search query, a description of symptoms, etc.) is caused by underlying desire or need is the expertise the consumers are seeking out. This begins to tie the economics of information to theories of statistical information.
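A discrete toy version of this recommendation rule may help fix ideas. All of the numbers below are illustrative assumptions, not from the text: each product’s utility $X_j$ takes values in $\{0, 10\}$, the signal $y$ is binary, and the producer ranks products by the posterior expectation $E[X_j \vert y]$ computed via Bayes’ rule.

```python
prior = {                       # P(X_j = x) for each product j
    "j1": {0: 0.5, 10: 0.5},
    "j2": {0: 0.2, 10: 0.8},
}
likelihood = {                  # P(y = 1 | X_j = x)
    "j1": {0: 0.1, 10: 0.9},    # signal is informative about j1...
    "j2": {0: 0.5, 10: 0.5},    # ...and carries no information about j2
}

def expected_utility(j: str, y: int) -> float:
    """E[X_j | y] via Bayes: posterior is proportional to likelihood x prior."""
    post = {
        x: (likelihood[j][x] if y == 1 else 1 - likelihood[j][x]) * prior[j][x]
        for x in prior[j]
    }
    z = sum(post.values())
    return sum(x * p / z for x, p in post.items())

def recommend(y: int) -> str:
    """The producer's choice: argmax_j E[X_j | y]."""
    return max(prior, key=lambda j: expected_utility(j, y))

print(recommend(1))  # prints "j1": the signal raises j1's posterior expectation
print(recommend(0))  # prints "j2": without a favorable signal, the safer prior wins
```

An ignorant producer here would always recommend j2 (prior expectations 5 vs. 8); the signal lets the expert flip the recommendation when the evidence warrants it, which is exactly the value of the likelihood function as expertise.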

Formalizing welfare implications of price discrimination based on personal information

In my last post I formalized Richard Posner’s 1981 argument concerning the economics of privacy. This is just one case of the economics of privacy. A more thorough analysis of the economics of privacy would consider the impact of personal information flow in more aspects of the economy. So let’s try another one.

One major theme of Shapiro and Varian’s Information Rules (1999) is the importance of price differentiation when selling information goods and how the Internet makes price differentiation easier than ever. Price differentiation likely motivates much of the data collection on the Internet, though it’s a practice that long predates the Internet. Shapiro and Varian point out that the “special offers” one gets from magazines for an extension to a subscription may well offer a personalized price based on demographic information. What’s more, this personalized price may well be an experiment, testing for the willingness of people like you to pay that price. (See Acquisti and Varian, 2005 for a detailed analysis of the economics of conditioning prices on purchase history.)

The point of this post is to analyze how a firm’s ability to differentiate its prices is a function of the knowledge it has about its customers and hence outcomes change with the flow of personal information. This makes personalized price differentiation a sub-problem of the economics of privacy.

To see this, let’s assume there are a number of customers, $i \in I$, where the number of customers is $n = \left\vert{I}\right\vert$. Let’s say each has a willingness to pay for the firm’s product, $x_i$. Their willingness to pay is sampled from an underlying probability distribution $x_i \sim X$.

Note two things about how we are setting up this model. The first is that it closely mirrors our formulation of Posner’s argument about hiring job applicants. Whereas before the uncertain personal variable was aptitude for a job, in this case it is willingness to pay.

The second thing to note is that whereas it is typical to analyze price differentiation according to a model of supply and demand, here we are modeling the distribution of demand as a random variable. This is because we are interested in modeling information flow in a specific statistical sense. What we will find is that many of the more static economic tools translate well into a probabilistic domain, with some twists.

Now suppose the firm knows $X$ but does not know any specific $x_i$. Knowing nothing to differentiate the customers, the firm will choose to offer the product at the same price $z$ to everybody. Each customer will buy the product if $x_i > z$, and otherwise won’t. Each customer that buys the product contributes $z$ to the firm’s utility (we are assuming an information good with near zero marginal cost). Hence, the firm will pick $\hat z$ according to the following function:

$\hat z = \arg\max_z E[\sum_i z [x_i > z]] =$

$\arg\max_z \sum_i E[z [x_i > z]] =$

$\arg\max_z \sum_i z E[[x_i > z]] =$

$\arg\max_z \sum_i z P(x_i > z) =$

$\arg\max_z \sum_i z P(X > z)$

Where $[x_i > z]$ is a function with value 1 if $x_i > z$ and 0 otherwise; this is using Iverson bracket notation.
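As a minimal check on this rule, assume (purely for illustration; the text does not fix a distribution) that $X$ is uniform on $[0, 1]$ and marginal cost is zero. Then expected revenue per customer is $z \cdot P(X > z) = z(1 - z)$, which is maximized at $\hat z = 1/2$:

```python
def expected_revenue(z: float) -> float:
    """z * P(X > z) per customer, for X ~ Uniform(0, 1)."""
    return z * max(0.0, 1.0 - z)

# Grid search for the profit-maximizing uniform price z_hat.
grid = [i / 1000 for i in range(1001)]
z_hat = max(grid, key=expected_revenue)
print(z_hat)  # prints 0.5: half the customers buy, half are priced out
```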

This is almost identical to the revenue-optimizing strategy of price selection more generally, and it has a number of similar properties. One property is that for every customer for whom $x_i > z$, there is a consumer surplus of utility $x_i - z$, that feeling of joy the customer gets for having gotten something valuable for less than they would have been happy to pay for it. There is also the deadweight loss of customers for whom $z > x_i$. These customers get 0 utility from the product and pay nothing to the producer despite their willingness to pay.

Now consider the opposite extreme, wherein the producer knows the willingness to pay of each customer $x_i$ and can pick a personalized price $z_i$ accordingly. The producer can price $z_i = x_i - \epsilon$, effectively capturing the entire demand $\sum_i x_i$ as producer surplus, while reducing all consumer surplus and deadweight loss to zero.
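The gap between the two regimes can be sized with a quick simulation, again under the illustrative assumption (not from the text) that $x_i \sim \mathrm{Uniform}(0, 1)$ with zero marginal cost, in which case the uniform profit-maximizing price is $\hat z = 1/2$:

```python
import random

random.seed(0)
x = [random.random() for _ in range(10_000)]  # willingness to pay x_i ~ Uniform(0,1)

z_hat = 0.5                                              # uniform profit-maximizing price
uniform_rev = sum(z_hat for xi in x if xi > z_hat)       # analytically about n/4
discrim_rev = sum(x)                                     # about n/2: all demand captured
surplus = sum(xi - z_hat for xi in x if xi > z_hat)      # consumer surplus, about n/8
print(round(uniform_rev), round(discrim_rev), round(surplus))
```

Under these assumptions perfect price discrimination roughly doubles producer revenue while driving consumer surplus (and deadweight loss) to zero, which is the welfare shift discussed below.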

What are the welfare implications of the lack of consumer privacy?

Like in the case of Posner’s employer, the real winner here is the firm, who is able to capture all the value added to the market by the increased flow of information. In both cases we have assumed the firm is a monopoly, which may have something to do with this result.

As for consumers, there are two classes of impact. For those with $x_i > \hat z$, having their personal willingness to pay revealed to the firm means that they lose their consumer surplus. Their welfare is reduced.

For those consumers with $x_i < \hat z$, the outcome is different: they discover that they can now afford the product, as it is priced close to (just under) their willingness to pay.

Unlike in Posner’s case, “the people” here are more equal when their personal information is revealed to the firm, because now the firm is extracting every spare ounce of joy it can from each of them, whereas before some consumers were able to enjoy low prices relative to their idiosyncratically high appreciation for the good.

What if the firm has access to partial information about each consumer $y_i$ that is a clue to their true $x_i$ without giving it away completely? Well, since the firm is a Bayesian reasoner they now have the subjective belief $P(x_i \vert y_i)$ and will choose each $z_i$ in a way that maximizes their expected profit from each consumer.

$z_i = \arg\max_z E[z [x_i > z] \vert y_i] = \arg\max_z z P(x_i > z \vert y_i)$

The specifics of the distributions $X$, $Y$, and $P(Y | X)$ all matter for the particular outcomes here, but intuitively one would expect the results of partial information to fall somewhere between the extremes of undifferentiated pricing and perfect price discrimination.

Perhaps the more interesting consequence of this analysis is that the firm has, for each consumer, a subjective probabilistic distribution of that consumer’s demand. Their best strategy for choosing the personalized price is similar to that of choosing a price for a large uncertain consumer demand base, only now the uncertainty is personalized. This probabilistic version of classic price differentiation theory may be more amenable to Bayesian methods, data science, etc.
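Here is a sketch of that probabilistic pricing rule under illustrative assumptions (uniform $x_i$ on $[0, 1]$, Gaussian signal noise, a discretized posterior; none of these specifics come from the text). The producer approximates $P(x_i \mid y_i)$ on a grid and picks $z_i = \arg\max_z z \, P(x_i > z \mid y_i)$:

```python
import math

SIGMA = 0.2                               # assumed noise level of the signal y_i
GRID = [i / 200 for i in range(201)]      # discretized support of X ~ Uniform(0, 1)

def posterior(y: float) -> list[float]:
    """P(x | y) on the grid: uniform prior times Gaussian likelihood, normalized."""
    w = [math.exp(-((y - x) ** 2) / (2 * SIGMA ** 2)) for x in GRID]
    s = sum(w)
    return [wi / s for wi in w]

def personalized_price(y: float) -> float:
    """z_i = argmax_z z * P(x_i > z | y_i), found by grid search."""
    p = posterior(y)

    def revenue(z: float) -> float:
        return z * sum(pi for xi, pi in zip(GRID, p) if xi > z)

    return max(GRID, key=revenue)

# Prices track the signal: customers who look willing to pay more are charged more.
for y in (0.1, 0.5, 0.9):
    print(y, personalized_price(y))
```

Each customer’s price is set against a personal subjective demand distribution, which is exactly the “personalized uncertainty” version of classical pricing the paragraph above describes.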

References

Acquisti, A., & Varian, H. R. (2005). Conditioning prices on purchase history. Marketing Science, 24(3), 367-381.

Shapiro, C., & Varian, H. R. (1998). Information rules: a strategic guide to the network economy. Harvard Business Press.

From information goods to information services

Continuing to read through Information Rules, by Shapiro and Varian (1999), I’m struck once again by its clear presentation and precise wisdom. Many of the core principles resonate with my experience in the software business before I left it in 2011 for graduate school. I think it’s fair to say that Shapiro and Varian anticipated the following decade of the economics of content and software distribution.

What they don’t anticipate, as far as I can tell, is what has come to dominate the decade after that, this decade. There is little in Information Rules that addresses the contemporary phenomena of cloud computing and information services, such as Software-as-a-Service, Platforms-as-a-Service, and Infrastructure-as-a-Service. Yet these are clearly the kinds of services that have come to dominate the tech market.

That’s an opening. According to a business manager in 2014, there’s no book yet on how to run an SaaS company. While I’m sure that if I were slightly less lazy I would find several, I wonder if they are any good. By “any good”, I mean would they hold up to scientific standards in their elucidation of economic law, as opposed to being, you know, business books.

One of the challenges of working on this, which has bothered me since I first became curious about these problems, is that there is no very good, elegant formalism available for representing competition between computing agents. The best that’s out there is probably in the AI literature. But that literature is quite messy.

Working up from something like Information Rules might be a more promising way of getting at some of these problems. For example, Shapiro and Varian start from the observation that information goods have high fixed (often, sunk) costs and low marginal costs to reproduce. This leads them to the conclusion that the market cannot look like a traditional competitive market with multiple firms selling similar goods but rather must either have a single dominant firm or a market of many similar but differentiated products.

The problem here is that most information services, even “simple” ones like a search engine, are not delivering a good. They are being responsive to some kind of query. The specific content and timing of the query, along with the state of the world at the time of the query, are unique. Consumers may make the same query with varying demand. The value-adding activity is not so much creating the good as it is selecting the right response to the query. And who can say how costly this is, marginally?

On the other hand, this framing obscures something important about information goods, which is that all information goods are, in a sense, a selection of bits from the wide range of possible bits one might send or receive. This leads to my other frustration with information economics, which is that it is insufficiently tied to the statistical definition of information and the modeling tools that have been built around it. This is all the more frustrating because I suspect that in advanced industrial settings these connections have been made and are used with confidence. However, they have been slow to make it into mainstream understanding. There’s another opportunity here.

Shapiro and Varian: scientific “laws of economics”

I’ve been remiss in not studying Shapiro and Varian’s Information Rules: A Strategic Guide to the Network Economy (1998) more thoroughly. In my years in the tech industry and academic study, I have found few sources that deal with the practical realities of technology and society as clearly as Shapiro and Varian. As I now turn my attention more towards the rationale for various forms of information law and find how much of it is driven by considerations of economics, I have to wonder why this was not something I’ve given more emphasis in my graduate study so far.

The answer that comes immediately to mind is that throughout my academic study of the past few years I’ve encountered a widespread hostility to economics from social scientists of other disciplines. This hostility resembles, though is somewhat different from, the hostility social scientists of other stripes have had (in my experience) for engineers. The critiques have been along these lines: that economists are disproportionately powerful relative to the insight provided by the field; that economists are focused too narrowly on certain aspects of social life to the exclusion of others that are just as important; that economists are arrogant in their belief that their insights about incentives apply to areas of social life beyond the narrow concerns of the economy; that economists mistakenly think their methods are more scientific or valid than those of other social scientists; that economics is in the business of enshrining legal structures that give its conclusions more predictive power than they would have under other legal regimes; and, as of the most recent news cycle, that the field of economics is hostile to women.

This is a strikingly familiar pattern of disciplinary critique, as it seems to be the same one levied at any field that aims to “harden” inquiry into social life. The encroachment of engineering disciplines and physicists into social explanation has come with similar kinds of criticism. These criticisms, it must be noted, contain at least one contradiction: should economists be concerned about issues besides the economy, or not? But the key issue, as with most disciplinary spats, is the politics of a lot of people feeling dismissed or unheard or unfunded.

Putting all this aside, what’s interesting about the opening sections of Shapiro and Varian’s book is their appeal to the idea of laws of economics, as if there were such laws analogous to laws of physics. The idea is that trends in the technology economy are predictable according to these laws, which have been learned through observation and formalized mathematically, and that these laws should therefore be taught for the benefit of those who would like to participate successfully in that economy.

This is an appealing idea, though one that comes under criticism, you know, from the critics, with a predictability that almost implies a social scientific law. This has been a debate going back to discussions of Marx and communism. Early theorists of the market declared themselves to have discovered economic laws. Marx, incidentally, also declared that he had discovered (different) economic laws, albeit according to the science of dialectical materialism. But the latter declared that the former economic theories hide the true scientific reality of the social relations underpinning the economy. These social relations allowed for the possibility of revolution in a way that an economy of goods and prices abstracted from society did not.

As one form of the story goes, the 20th century had its range of experiments with ways of running an economy. Those most inspired by Marxism had mass famines and other unfortunate consequences. Those that took their inspiration from the continually evolving field of increasingly “neo”-classical economics, with its variations of Keynesianism, monetarism, and the rest, had some major bumps (most recently the 2008 financial crisis) but tend to improve over time with historical understanding and the discovery of, indeed, laws of economics. And this is why Janet Yellen and Mario Draghi are now warning against removing the post-crisis financial market regulations.

This offers an anecdotal counter to the narrative that all economists ever do is justify more terrible deregulation at the expense of the lived experience of everybody else. The discovery of laws of economics can, indeed, be the basis for economic regulation; in fact this is often the case. In point of fact, it may be that this is one of the things that tacitly motivates the undermining of economic epistemology. If the laws of economics were socially determined to be true, like the laws of physics, such that everybody ought to know them, that would generate democratic will for policies opposed to the interests of those who have heretofore enjoyed the advantage of their privileged (i.e., not universally shared) access to the powerful truth about markets, technology, etc.

Which is all to say: I believe that condemnations of economics as a field are quite counterproductive, socially, and that the scientific pursuit of the discovery of economic laws is admirable and worthy. Those that criticize economics for this ambition, and teach their students to do so, imperil everyone else and should stop.