Digifesto

Tag: macroeconomics

Crypto, macro, and information law

Dogecoin is in the news this week because of Elon Musk’s pump and dump, the latest in a string of notable asset bubbles fueled, at least in part, by Internet-informed, perhaps frivolous, day-traders. The phenomenon reminds me of this curious essay about viral art. It concludes:

The doge meme is a Goldsmithian piece, passing ephemerally through a network of peers. In a LaBeoufian moment, Jackson Palmer invented Dogecoin, capturing the meme and using it to leverage networks of power. Now it is art post-LaBeouf in its greatest form: authorless art as economic power, transmitted over networks. As the synthesized culmination of the traditions of economics and Western literature, DogeCoin is one of the greatest achievements in the history of art, if not the greatest.

This paragraph is perhaps best understood, if at all, as an abstruse joke. The essay is attributed to “Niklos Szabo”, a name easily conflated with that of Nick Szabo, one of the deeper thinkers behind cryptocurrency more generally, who most likely did not write it. The real Szabo has written much more seriously and presciently about culture and the economy. As an aside, I believe Szabo’s writings about book consciousness prefigure Hildebrandt’s (2015) work on the role of the printing press as a medium contributing to the particular character of text-driven law. However, the enduring success of cryptocurrencies validates Szabo’s economics more than his cultural theory. His 2002 paper “Shelling out: the origins of money” is a compelling history of currency. Notably, it is not a work of formal economic theory. Rather, it draws on historical and anthropological examples to get at the fundamentals of the role currency plays in society. This study leads to the conclusion that currency must be costly to create and transferable with relatively low transaction costs. Bitcoin, for example, was designed to have these qualities.

What Szabo does not discuss in “Shelling out” is the other thing Bitcoin is most known for, which is speculative asset bubble pricing. Cryptocurrency has lurched into the mainstream in fits of speculative enthusiasm followed by crashes and breakdowns. It is risky.

Salome Viljoen and I are writing about financial regulations as part of our “Data Market Discipline” project. One takeaway from this work is that the major financial regulations in the United States were responses to devastating financial crises, such as the Great Depression and the 2008 financial crisis, each triggered by the collapse of an asset bubble. So while currency is an old invention and the invention of new currencies is interesting, the project of maintaining a stable financial system is a relatively more recent legal project, and an unfinished one at that. It is much more unfinished for cryptocurrencies, which are not controlled by a central banking system, than for national fiat currencies, for which interest rates, for example, can be used as a calibrating tool.

These are not idle theoretical points. Rather, they are at the heart of questions surrounding the recovery of the economy from COVID-related setbacks. Money from stimulus checks going to people who have no reason to increase their consumption (cf. Carroll et al., 2020) is perhaps responsible for the influx of retail investment into equities markets and, in particular, Reddit-coordinated asset bubbles such as the ones we’re seeing recently with Gamestop and Dogecoin. The next stimulus package being prepared by the Biden administration has prompted alarm from parts of the economics establishment that it will spur inflation, while Janet Yellen has argued that this outcome can be prevented using standard monetary policy tools such as raising interest rates. Arguably, the recent rise in the price of Bitcoin is due to this perceived threat to the macroeconomic stability of the dollar-denominated financial system.

I don’t mean any of this conclusively. Rather, I’m writing this to register my growing realization that the myriad Internet effects on culture, economy, and the law are often much more driven by movements in internationally coupled financial systems than “technology policy” specialists or “public interest technologists” are inclined to admit. We are inclined, because of our training in something else — whether it be computer science, environmental law, political philosophy, or whatever — to seek out metaphors from our own domain of expertise. But many of the most trenchant analyses of why the current technological landscape seems a bit off come down to failures of the price mechanism in the digital economy. I’m thinking of Kapczynski’s (2011) critique of the price mechanism in relation to intellectual property, and Strandburg’s (2013) analysis of the failure of pricing in online services. We have on the one hand the increasingly misconceptualized “Silicon Valley”’s commitment to a “free market” and on the other hand few of the conditions under which a “free market” is classically considered to be efficient. The data economy does not meet even classically liberal (let alone New, more egalitarian, Liberal) standards of justice. And liberal legal theory is not equipped, Jake Goldenfein and I have argued, to grapple with this reality.

What progress can be made?

Maybe there is something somebody with enormous wealth or institutional power could do to change the situation. I’m not one of those people. However, there is some evidence that at the root of these problems is a conceptual, intellectual failure to understand what is going on.

In some recent work with Kathy Strandburg, we are examining the conceptual roots of the highly influential Law and Economics (L&E) branch of legal scholarship. This field absorbs the techniques of neoclassical economics and develops them into actionable policy proposals and legal rules of thumb. It has come under political criticism from the recently formed Law and Political Economy (LPE) movement. Interestingly, it has also been critiqued from a “Law and Macroeconomics” perspective, which argues that L&E should really be called “law and microeconomics” because of its inability to internalize macroeconomic concepts such as the business cycle or changes in monetary policy.

Among the assumptions at the roots of L&E are notions of optimality and efficiency that make somewhat naive assumptions about the nature of price and money. For example, Kaldor-Hicks efficiency, a relaxation of Pareto efficiency used in welfare economics as applied to L&E, allows for transactions that alter the situations of agents so long as the agent who gains could theoretically compensate the other for their losses (see Feldman, 1998). This concept is used to define what counts as socially optimal, resolving the neoclassical problem of the incomparability of individual utilities through an implicit pricing mechanism. This leads L&E to favor “wealth maximizing” policies.
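To make the criterion concrete, here is a toy example with made-up numbers (the payoffs and the helper function are mine, not Feldman’s). A change is a Kaldor-Hicks improvement if the winners’ monetized gains exceed the losers’ monetized losses, so that the winners could in principle compensate the losers, whether or not they actually do:

```python
# Toy illustration of Kaldor-Hicks efficiency (hypothetical numbers, not from Feldman 1998).
# A change is a Kaldor-Hicks improvement if the winners' gains, priced in money,
# exceed the losers' losses -- the winners *could* compensate the losers,
# even if no compensation is actually paid.

def is_kaldor_hicks_improvement(changes_in_wealth):
    """changes_in_wealth: dict mapping agent name -> monetized gain (+) or loss (-)."""
    return sum(changes_in_wealth.values()) > 0

# A policy that gives agent A $100 of value while costing agent B $60:
policy = {"A": +100, "B": -60}
print(is_kaldor_hicks_improvement(policy))  # True: A could pay B $60 and still come out ahead

# Note: this is not a Pareto improvement unless the compensation is actually paid,
# and the exercise presupposes that A's and B's changes can be priced on one common scale.
```

The last comment is the rub: the criterion only works if everyone’s gains and losses can be expressed in a common monetary unit, which is exactly the assumption about price that the rest of this post questions.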

However, grounding legal theory in the idea of a robust price mechanism capable of subsuming all differences in individual preferences is quite naive in a digital economy that is always already at the intersection of many different currencies (including cryptocurrency), variable and politically vulnerable systems of credit and debt, and characterized by markets that do not have the legal scaffolding needed to drive them towards “true” prices. If Mirowski and Nik-Khah (2017) are correct and Economists have abandoned earlier notions of “truth” in favor of faith in the market’s price as a “truth” derived from streams of information, something is indeed amiss. Data is not a commodity, and regulations that treat data flows as commodity exchanges are not well matched to reality. In the Hayekian model, price is the signal that combines available information. In the data economy, the complex topology of real data flows belies simplistic views of “the market”.

What tech law needs is a new economic model, one that, just as general relativity in physics showed how classical mechanics was a special case of more complex universal laws, reveals how data, intellectual property, and price are connected in ways that go beyond the classical liberal imagination.

References

Benthall, Sebastian and Viljoen, Salome, Data Market Discipline: From Financial Regulation to Data Governance (January 27, 2021). J. Int’l & Comparative Law – (2021)

Carroll, C. D., Crawley, E., Slacalek, J., & White, M. N. (2020). Modeling the consumption response to the CARES Act (No. w27876). National Bureau of Economic Research.

Feldman, A. M. (1998). Kaldor-Hicks compensation. The New Palgrave Dictionary of Economics and the Law, 2, 417-421.

Hildebrandt, M. (2015). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.

Kapczynski, A. (2011). The cost of price: Why and how to get beyond intellectual property internalism. UCLA L. Rev., 59, 970.

Mirowski, P., & Nik-Khah, E. (2017). The knowledge we have lost in information: the history of information in modern economics. Oxford University Press.

Shekman, David. “Gamestop and the Surrounding Legal Questions.” Medium, 5 Feb. 2021, medium.com/@shekman27/gamestop-and-the-surrounding-legal-questions-fc0d1dc142d7.

Strandburg, K. J. (2013). Free fall: The online market’s consumer preference disconnect. U. Chi. Legal F., 95.

Szabo, N. (2002). Shelling out: the origins of money. Satoshi Nakamoto Institute.

Szabo, Niklos. “Art Post-LaBeouf.” Medium, 22 Sept. 2014, medium.com/@niklosszabo/art-post-labeouf-b7de5732020c.

Notes on Krusell & Smith, 1998 and macroeconomic theory

I’m orienting towards a new field through my work on HARK. A key paper in this field is Krusell and Smith, 1998 “Income and wealth heterogeneity in the macroeconomy.” The learning curve here is quite steep. These are, as usual, my notes as I work with this new material.

Krusell and Smith are approaching the problem of macroeconomic modeling on a broad foundation. Within this paradigm, the economy is imagined as a large collection of people/households/consumers/laborers. These exist at a high level of abstraction and are imagined to be intergenerationally linked. A household might be an immortal dynasty.

There is only one good: capital. Capital works in an interesting way in the model. It is produced every time period by a combination of labor and other capital. It is distributed to the households, apportioned as both a return on household capital and as a wage for labor. It is also consumed each period, for the utility of the households. So all the capital that exists does so because it was created by labor in a prior period, but then saved from immediate consumption, then reinvested.

In other words, capital in this case is essentially money. All other “goods” are abstracted away into this single form of capital. The key thing about money is that it can be saved and reinvested, or consumed for immediate utility.

Households can also labor, when they have a job. There is an unemployment rate, and in the model households are uniformly likely to be employed or not, no matter how much money they have. The wage return on labor is determined by an aggregate economic productivity function. There are good and bad economic periods, determined exogenously and randomly, and employment rates are determined accordingly. One major impetus for saving is insurance against bad times.
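Putting the last few paragraphs together, here is a minimal sketch, in Python, of how one period of such an economy might be computed. The Cobb-Douglas production function and the good/bad aggregate state are standard in this literature; the parameter values below are illustrative placeholders in the spirit of the paper’s calibration, not numbers taken from it.

```python
import random

# Illustrative sketch of one period of a Krusell-Smith-style economy.
# The Cobb-Douglas form and the two-state aggregate shock are standard;
# the specific numbers below are placeholders, not the paper's calibration.

ALPHA = 0.36          # capital's share of output
DELTA = 0.025         # depreciation rate per period
STATES = {"good": {"z": 1.01, "unemployment": 0.04},
          "bad":  {"z": 0.99, "unemployment": 0.10}}

def draw_aggregate_state(previous, persistence=0.875):
    """Good and bad times are exogenous and random, with some persistence."""
    if random.random() < persistence:
        return previous
    return "bad" if previous == "good" else "good"

def factor_prices(K, state):
    """Wage and net return on capital implied by aggregate capital K and the aggregate state."""
    z = STATES[state]["z"]
    L = 1.0 - STATES[state]["unemployment"]          # labor supplied by the employed
    r = ALPHA * z * (K / L) ** (ALPHA - 1) - DELTA   # net return on a unit of capital
    w = (1 - ALPHA) * z * (K / L) ** ALPHA           # wage per unit of labor
    return r, w

# One period: the aggregate state is drawn, households face the employment lottery,
# earn wages and capital income, consume some of it, and save the rest as next period's capital.
state = draw_aggregate_state("good")
r, w = factor_prices(K=11.0, state=state)
print(state, round(r, 4), round(w, 4))
```

The point is only to show the moving parts: an exogenous aggregate state, an employment lottery, and wages and returns on capital that fall out of aggregate capital and labor. The hard part of the paper is working out how much heterogeneous households choose to save in equilibrium, which is what the rest of these notes are about.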

The problem raised by Krusell and Smith with what they call their ‘baseline model’ is that, because all households are the same, the equilibrium distribution of wealth is far too even compared with realistic data. It’s more normally distributed than log-normally distributed. This is implicitly a critique of all prior macroeconomics, which had used the “representative agent” assumption: all agents were represented by one agent, so all agents are approximately as wealthy as all others.

Obviously, this is not the case. This work was done in the late 1990s, when the topic of wealth inequality was not nearly as front-and-center as it is in, say, today’s election cycle. It’s interesting to consider that one reason it might not have been front and center is that, prior to 1998, mainstream macroeconomic theory didn’t have an account of how there could be such inequality.

The Krusell-Smith model’s explanation for inequality is, it must be said, a politically conservative one. They introduce minute differences in the utility discount factor, the weight a household places on future utility relative to today’s utility. If you have a small discount factor, you’re going to want to consume more today. If you have a large discount factor, you’re more willing to save for tomorrow.

Krusell and Smith show that teeny tiny differences in the discount factor, even if they are subject to a random walk around a mean with some persistence within households, lead to huge wealth disparities. Their conclusion is that “Poor households are poor because they’ve chosen to be poor”, by not saving more for the future.
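To get a feel for the compounding mechanism, here is a rule-of-thumb simulation. It is emphatically not Krusell and Smith’s actual solution method: I simply assume, for illustration, that each household saves a fraction beta of its resources every period, so that tiny differences in patience compound over time.

```python
import random

# Rule-of-thumb illustration (NOT Krusell and Smith's solution method):
# each household consumes a fraction (1 - beta) of its resources every period
# and saves the rest. Tiny differences in beta compound into big wealth gaps.

random.seed(0)
R = 1.01     # gross return on savings per period (illustrative)
T = 1500     # periods simulated
N = 1000     # number of households

def lifetime_wealth(beta):
    wealth = 0.0
    for _ in range(T):
        income = 1.0 if random.random() > 0.07 else 0.0   # crude employment lottery
        resources = R * wealth + income
        wealth = beta * resources          # save a share beta, consume the rest
    return wealth

# Discount factors drawn from a narrow band, in the spirit of the paper's experiment
# (their actual process for beta differs; the point is only the compounding).
betas = [random.uniform(0.985, 0.995) for _ in range(N)]
wealth = sorted(lifetime_wealth(b) for b in betas)

print("median wealth:         ", round(wealth[N // 2], 1))
print("90th percentile wealth:", round(wealth[int(0.9 * N)], 1))
print("share held by top 10%: ", round(sum(wealth[int(0.9 * N):]) / sum(wealth), 2))
```

In the actual model the equilibrium interest rate adjusts so that even the most patient households do not accumulate without bound, but the flavor of the result is the same: a small amount of heterogeneity in patience produces a large amount of dispersion in wealth.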

I’ve heard, like one does, all kinds of critiques of Economics as an ideological discipline. It’s striking to read a landmark paper in the field with this conclusion. It strikes directly against other mainstream political narratives. For example, there is no accounting of “privilege” or inter-generational transfer of social capital in this model. And while they acknowledge that other papers discuss whether having larger amounts of household capital leads to larger rates of return, Krusell and Smith sidestep this and make it about household saving.

The tools and methods in the paper are quite fascinating. I’m looking forward to more work in this domain.

References

Krusell, P., & Smith, Jr, A. A. (1998). Income and wealth heterogeneity in the macroeconomy. Journal of Political Economy, 106(5), 867-896.