Digifesto

Category: economics

from morality to economics: some stuff about Marx for Tapan Parikh

I work on a toolkit for heterogeneous agent structural modeling in Economics, Econ-ARK. In this capacity, I work with the project’s creators, who are economists Chris Carroll and Matt White. I think this project has a lot of promise and am each day more excited about its potential.

I am also often in academic circles where it’s considered normal to just insult the entire project of economics out of hand. I hear some empty, shallow snarking about economists about once every two weeks. I find this kind of professional politics boring and distracting. It’s also often ignorant. I wanted to connect a few dots to try to remedy the situation, while also noting some substantive points that I think fill out some historical context.

Tracking back to this discussion of morality in the Western philosophical tradition and what challenges it today, the focal character there was Immanuel Kant, who for the sake of argument espoused a model of morality based on universal properties of a moral agent.

Tapan Parikh has argued (in personal communications) that I am “a dumb ass” for using Kant in this way, because Kant is on the record for writing some very racist things. I feel I have to address this point. No, I’m not going to stop working with the ideas from the Western philosophical canon just because so many of them were racist. I’m not a cancel culturist in any sense. I agree with Dave Chappelle on the subject of Louis C.K., for example.

However, it is actually essential to know whether or not racism is a substantive, logical problem with Kant’s philosophy. I’ll defer to others on this point. A quick Googling of the topic seems to indicate one of two things. Either Kant was inconsistent, a racist who also espoused universalist morality, and that tells us more about Kant the person than it does about universalist morality, which in this case transcends Kant’s human failings (Allais, 2016). Or Kant actually became less racist during the period in which he was most philosophically productive, which was late in his life (Kleingeld, 2007). I like this latter story better: Kant, being an 18th century German, was racist as hell; then he thought about it a bit harder, developed a universalist moral system, and became, as a consequence, less racist. That seems to be a positive endorsement of what we now call Kantian morality, which is a product of that later period and not the earlier virulently racist period.

Having hopefully settled that question, or at least smoothed it over sufficiently to move on, we can build in more context. Everybody knows this sequence:

Kant -> Hegel -> Marx

Kant starts a transcendent dialectic as a universalist moral project. Hegel historicizes that dialectic, in the process taking serious account of the Haitian revolution, which inspires his account of the Master/Slave dialectic, which is quite literally about slavery and how it is undone by its internal contradictions. The problem, to make a long story short, is that the Master winds up being psychologically dependent on the Slave, and this gives the Slave power over the Master. The Slave’s rebellion is successful, as has happened in history many times. This line of thinking results in, if my notes are right (they might not be), Hegel’s endorsement of something that looks vaguely like a Republic as the end-of-history.

He dies in 1831, and Marx picks up this thread, but famously thinks the historical dialectic is material, not ideal. The Master/Slave dialectic is transposed onto the relationship between Capital and the Proletariat. Capital exploits the Proletariat, but needs the Proletariat. This is what enables the Proletariat to rebel. Once the Proletariat rebel, says Marx, everybody will be on the same level and there will be world peace. I.e., communism is the material manifestation of a universalist morality. This is what Marx inherits from Kant.

But wait, you say. Kant and Hegel were both German Idealists. Where did Marx get this materialist innovation? It was probably his own genius head, you say.

Wrong! Because there’s a thread missing here.

Recall that it was David Hume, a Scotsman, whose provocative skeptical ideas roused Kant from his “dogmatic slumber”. (Historical question: Was it Hume who made Kant “woke” in his old age?) Hume was in the line of Anglophone empiricism, which was getting very bourgey after the Whigs and Locke and all that. Hume’s buddy Adam Smith was, let’s not forget, a moral philosopher.

So while Kant is getting very transcendental, Smith is realizing that in order to do any serious moral work you have to start looking at material reality, and so he starts Economics in Britain.

This next part I didn’t really realize the significance of until digging into it. Smith dies in 1790, just around when Kant is completing the moral project he’s famous for. At that time, the next major figure is 18, coming of age. It’s David Ricardo: a Sephardic Jew turned Unitarian, a Whig, a businessman who makes a fortune speculating on the Battle of Waterloo, who buys a seat in Parliament because you could do that then, and who goes on to do much of the best foundational work in economics, including developing the labor theory of value. He was also, incidentally, an abolitionist.

Which means that to complete one’s understanding of Marx, you have to also be thinking:

Hume -> Smith -> Ricardo -> Marx

In other words, Marx is the unlikely marriage of German Idealism, with its continued commitment to universalist ethics, with British empiricism which is–and I keep having to bring this up–weak on ethics. Empiricism is a bad way of building an ethical theory and it’s why the U.S. has bad privacy laws. But it’s a good way to build up an economic materialist view of history. Hence all of Marx’s time looking at factories.

It’s worth noting that Ricardo also developed the theory of land rent behind the idea of Land Value Taxation (LVT), which Henry George later popularized as the Single Tax in the late 19th/early 20th century. So Ricardo really is the pivotal figure here in a lot of ways.

In future posts, I hope to be working out more of the background of economics and its connection to moral philosophy. In addition to trying to make the connections to my work on Econ-ARK, there’s also resonances coming up in the policy space. For example, the Law and Political Economy community has been rather explicitly trying to bring back “political economy”–in the sense of Smith, Ricardo, and Marx–into legal scholarship, with a particular aim at regulating the Internet. These threads are braiding together.

References

Allais, L. (2016). Kant’s racism. Philosophical Papers, 45(1-2), 1-36.

Kleingeld, P. (2007). Kant’s second thoughts on race. The Philosophical Quarterly, 57(229), 573-592.

Land value taxation

Henry George’s Progress and Poverty, first published in 1879, is dedicated

TO THOSE WHO, SEEING THE VICE AND MISERY THAT SPRING FROM THE UNEQUAL DISTRIBUTION OF WEALTH AND PRIVILEGE, FEEL THE POSSIBILITY OF A HIGHER SOCIAL STATE AND WOULD STRIVE FOR ITS ATTAINMENT

The book is best known as an articulation of the idea of a “Single Tax [on land]”, a circa-1900 populist movement to replace all taxes with a single tax on land value. This view influenced many later land reform and taxation policies around the world; the modern name for this sort of policy is Land Value Taxation (LVT).

The gist of LVT is that the economic value of owning land comes both from the land itself and from the improvements built on top of it. The value of the underlying land is “unearned”: it does not require labor to maintain, and it derives mainly from the artificial monopoly right over the land’s use. That unearned value can be taxed and redistributed without distorting incentives in the economy.
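To make the intuition concrete, here is a minimal numerical sketch with made-up figures (my own illustration, not George’s): a parcel whose market value splits into land and improvements, compared under a conventional property tax and an LVT raising the same revenue. The point is that the LVT bill does not rise when the owner builds more, so the incentive to improve the land is untouched.

```python
# Illustrative numbers only: a parcel worth $500k, of which $200k is the
# unimproved land value and $300k is the building standing on it.
land_value = 200_000
improvement_value = 300_000

# A conventional 1% property tax falls on land and improvements alike.
property_tax = 0.01 * (land_value + improvement_value)          # $5,000

# An LVT raising the same revenue falls on the land value only.
lvt_rate = property_tax / land_value                             # 2.5%
lvt = lvt_rate * land_value                                      # $5,000

# Now suppose the owner adds a $100k extension to the building.
property_tax_after = 0.01 * (land_value + improvement_value + 100_000)  # $6,000 (improvement penalized)
lvt_after = lvt_rate * land_value                                        # still $5,000 (improvement untaxed)

print(property_tax, lvt, property_tax_after, lvt_after)
```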

Phillip Bess’s 2018 article provides an excellent summary of the economic arguments in favor of LVT. Michel Bauwens’s P2P Foundation article summarizes where it has been successfully put in place. Henry George was an American, but Georgism has largely been an export. General MacArthur was, it has been said, a Georgist, and this accounts for some of the land reform in Asian countries after World War II. Singapore, which owns and leases out most of its land, is organized under roughly Georgist principles.

This policy is neither “left” nor “right”. Wikipedia has sprouted an article on geolibertarianism, a term that seems to me a bit sui generis. The 75th-anniversary edition of Progress and Poverty, published in 1953, points out that one of the promises of communism is land reform, but argues that this is a false promise. Rather, Georgist land reform is enlightened and compatible with market freedoms, etc.

I’ve recently dug up my copy of Progress and Poverty and begun to read it. I’m interested in mining it for ideas. What is most striking about it, to a contemporary reader, is the earnest piety of the author. Henry George was clearly a quite religious man, and wrote his lengthy and thorough political-economic analysis of land ownership out of a sincere belief that he was promoting a new world order which would preserve civilization from collapse under the social pressures of inequality.

A note towards formal modeling of informational capitalism

Cohen’s Between Truth and Power (2019) is enormously clarifying on all issues of the politics of AI, etc.

“The data refinery is only secondarily an apparatus for producing knowledge; it is principally an apparatus for producing wealth.”

– Julie Cohen, Between Truth and Power, 2019

Cohen lays out the logic of informational capitalism in comprehensive detail. Among her authoritatively argued points is that scholarly consideration of platforms, privacy, data science, etc. has focused on the scientific and technical accomplishments undergirding the new information economy, but that really its key institutions, the platform and the data refinery, are first and foremost legal and economic institutions. They exist as businesses; they are designed to “extract surplus”.

I am deeply sympathetic to this view. I’ve argued before that the ethical and political questions around AI are best looked at by considering computational institutions (1, 2). I think getting to the heart of the economic logic is the best way to understand the political and moral concerns raised by information capitalism. Many have argued that there is something institutionally amiss about informational capitalism (e.g. Strandburg, 2013); a recent CfP went so far as to say that the current market for data and AI is not “functional or sustainable.”

As far as I’m concerned, Cohen (2019) is the new gold standard for qualitative analysis of these issues. It is thorough. It is, as far as I can tell, correct. It is a dense and formidable work; I’m not through it yet. So while it may contain all the answers, I haven’t read them yet. This leaves me free to continue to think about how I would go about solving them.

My perspective is this: it will require social scientific progress to crack the right institutional design to settle informational capitalism in a satisfying way. Because computation is really at the heart of the activity of economic institutions, computation will need to be included within the social scientific models in question. But this is not something particularly new; rather, it’s implicitly already how things are done in many “hard” social science disciplines. Epstein (2006) draws the connections between classical game theoretic modeling and agent-based simulation, arguing that “The Computer is not the point”: rather, the point is that the models are defined in terms of mathematical equations, which are by foundational laws of computing amenable to being simulated or solved through computation. Hence, we have already seen a convergence of methods from “AI” into computational economics (Carroll, 2006) and sociology (Castelfranchi, 2001).

This position is entirely consistent with Abebe et al.’s analysis of “roles for computing in social change” (2020). In that paper, the authors are concerned with “social problems of justice and equity”, loosely defined, which can potentially be addressed through “social change”. They defend the use of technical analysis and modeling as playing a positive role even according to the particular politics of the Fairness, Accountability, and Transparency research community. Abebe et al. address backlashes against uses of formalism such as that of Selbst et al. (2019); this rebuttal was necessary given the disciplinary fraughtness of the tech policy discourse.

What I am proposing in this note is something ever so slightly different. First, I am aiming at a different political problematic than the “social problems of justice and equity”. I’m trying to address the economic problems raised by Cohen’s analysis, such as the dysfunctionality of the data market. Second, I’d like to distinguish between “computing” as the method of solving mathematical model equations and “computing” as an element of the object of study, the computational institution (or platform, or data refinery, etc.). Indeed, it is the wonder and power of computation that it is possible to model one computational process within another. This point may be confusing for lawyers and anthropologists, but it should be clear to computational social scientists when we are talking about one or the other, though our scientific language has not settled on a lexicon for this yet.

The next step for my own research here is to draw up a mathematical description of informational capitalism, or the stylized facts about it implied by Cohen’s arguments. This is made paradoxically both easier and more difficult by the fact that much of this work has already been done. A simple search of literature on “search costs”, “network effects”, “switching costs”, and so on, brings up a lot of fine work. The economists have not been asleep all this time. But then why has it taken so long for the policy critiques of informational capitalism, including those around algorithmic opacity, to emerge?

I have two conflicting hypotheses, one quite gloomy and the other exciting. The gloomy view is that I’m simply in the wrong conversation. The correct conversation, the one that has adequately captured the nuances of the data economy already, is elsewhere–maybe in an economics conference in Zurich or something, and this discursive field of lawyers and computer scientists and ethicists is just effectively twiddling its thumbs and working on poorly framed problems because it hasn’t and can’t catch up with the other discourse.

The exciting view is that the problem of synthesizing the fragments of a solution from the various economics literatures with the most insightful legal analyses is an unsolved problem ripe for attention.

Edit: It took me a few days, but I’ve found the correct conversation. It is Ross Anderson’s Workshop on the Economics of Information Security. That makes perfect sense: Ross Anderson is a brilliant thinker in that arena. Naturally, as one finds, all the major results in this space are 10-20 years old. Quite probably, if I had found this one web page a couple of years ago, my dissertation would have been written much differently–not so amateurishly.

It is supremely ironic to me how, in an economy characterized by a reduction in search costs, the search for the answers I’ve been looking for in information economics has been so costly for me.

References

Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., & Robinson, D. G. (2020, January). Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 252-260).

Castelfranchi, C. (2001). The theory of social functions: challenges for computational social science and multi-agent learning. Cognitive Systems Research, 2(1), 5-38.

Carroll, C. D. (2006). The method of endogenous gridpoints for solving dynamic stochastic optimization problems. Economics Letters, 91(3), 312-320.

Cohen, J. E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, USA.

Epstein, J. M. (2006). Generative social science: Studies in agent-based computational modeling. Princeton University Press.

Fraser, N. (2017). The end of progressive neoliberalism. Dissent, 2(1).

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019, January). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59-68).

Strandburg, K. J. (2013). Free fall: The online market’s consumer preference disconnect. U. Chi. Legal F., 95.

Notes on Krusell & Smith, 1998 and macroeconomic theory

I’m orienting towards a new field through my work on HARK. A key paper in this field is Krusell and Smith, 1998 “Income and wealth heterogeneity in the macroeconomy.” The learning curve here is quite steep. These are, as usual, my notes as I work with this new material.

Krusell and Smith are approaching the problem of macroeconomic modeling on a broad foundation. Within this paradigm, the economy is imagined as a large collection of people/households/consumers/laborers. These exist at a high level of abstraction and are imagined to be intergenerationally linked. A household might be an immortal dynasty.

There is only one good: capital. Capital works in an interesting way in the model. It is produced every time period by a combination of labor and other capital. It is distributed to the households, apportioned as both a return on household capital and as a wage for labor. It is also consumed each period, for the utility of the households. So all the capital that exists does so because it was created by labor in a prior period, but then saved from immediate consumption, then reinvested.

In other words, capital in this case is essentially money. All other “goods” are abstracted away into this single form of capital. The key thing about money is that it can be saved and reinvested, or consumed for immediate utility.

Households can also labor, when they have a job. There is an unemployment rate, and in the model households are equally likely to be employed or not, no matter how much money they have. The wage return on labor is determined by an aggregate economic productivity function. There are good and bad economic periods, determined exogenously and randomly, and employment rates move accordingly. One major impetus for saving is insurance against bad times.
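For my own reference, here is a schematic rendering of that setup in standard notation. This is my paraphrase of the usual Krusell-Smith-style environment, not the paper’s exact specification: α is capital’s share, δ depreciation, z_t the good/bad aggregate shock, ε_t the household’s employment shock, β the discount factor, and u the period utility function.

```latex
\begin{align*}
% Aggregate production with aggregate shock z_t
Y_t &= z_t K_t^{\alpha} L_t^{1-\alpha} \\
% Factor prices from marginal products
r_t &= \alpha z_t \left(\tfrac{K_t}{L_t}\right)^{\alpha-1} - \delta,
\qquad
w_t = (1-\alpha) z_t \left(\tfrac{K_t}{L_t}\right)^{\alpha} \\
% Household problem: choose consumption subject to budget and borrowing constraints
&\max_{\{c_t\}} \; \mathbb{E} \sum_{t=0}^{\infty} \beta^t u(c_t)
\quad \text{s.t.} \quad
k_{t+1} = (1 + r_t)\, k_t + w_t\, \varepsilon_t - c_t,
\qquad k_{t+1} \ge 0
\end{align*}
```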

The problem raised by Krusell and Smith with this, what they call their ‘baseline model’, is that because all households are the same, the equilibrium distribution of wealth is far too even compared with real data. It looks more normally distributed than log-normally distributed. This is implicitly a critique of prior macroeconomics, which had used the “representative agent” assumption: all agents were represented by one agent, so all agents end up approximately as wealthy as each other.

Obviously, this is not the case. This work was done in the late 90s, when the topic of wealth inequality was not nearly as front-and-center as it is in, say, today’s election cycle. It’s interesting that one reason it might not have been front and center is that, prior to 1998, mainstream macroeconomic theory didn’t have an account of how there could be such inequality.

The Krusell-Smith model’s explanation for inequality is, it must be said, a politically conservative one. They introduce minute differences in the utility discount factor, which measures how much you value future utility relative to today’s utility. If your discount factor is low (you discount the future heavily), you’re going to want to consume more today. If your discount factor is high, you’re more willing to save for tomorrow.

Krusell and Smith show that teeny tiny differences in the discount factor, even if they are subject to a random walk around a mean with some persistence within households, lead to huge wealth disparities. Their conclusion is that “Poor households are poor because they’ve chosen to be poor”, by not saving more for the future.
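To see the mechanism, here is a toy simulation in Python. This is emphatically not the Krusell-Smith algorithm: there is no equilibrium pricing and no dynamic programming, just a crude rule of thumb in which each household saves a fraction β of its cash on hand. The β values are close together in the spirit of the paper’s calibration, but they and every other number here are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T = 30_000, 1_000        # households, periods
r, w = 0.01, 1.0            # fixed interest rate and wage (no general equilibrium here)

# Three patience types differing only in the third decimal place (illustrative values)
betas = np.array([0.9858, 0.9894, 0.9930])
beta = rng.choice(betas, size=N)

k = np.ones(N)              # initial wealth
for _ in range(T):
    employed = rng.random(N) < 0.9        # 10% unemployment, i.i.d. across households for simplicity
    x = (1 + r) * k + w * employed        # cash on hand this period
    k = beta * x                          # crude rule of thumb: save fraction beta, consume the rest

# Tiny differences in patience compound into order-of-magnitude wealth gaps
for b in betas:
    print(f"beta={b}: mean wealth {k[beta == b].mean():8.1f}")
```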

I’ve heard, like one does, all kinds of critiques of Economics as an ideological discipline. It’s striking to read a landmark paper in the field with this conclusion. It strikes directly against other mainstream political narratives. For example, there is no accounting of “privilege” or inter-generational transfer of social capital in this model. And while they acknowledge that other papers discuss whether holding larger amounts of household capital leads to higher rates of return, Krusell and Smith sidestep this and make it about household saving.

The tools and methods in the paper are quite fascinating. I’m looking forward to more work in this domain.

References

Krusell, P., & Smith, Jr, A. A. (1998). Income and wealth heterogeneity in the macroeconomy. Journal of Political Economy, 106(5), 867-896.

Herbert Simon and the missing science of interagency

Few have ever written about the transformation of organizations by information technology with the clarity of Herbert Simon. Simon worked at a time when disciplines were being reconstructed and a shift was taking place. Older models of economic actors as profit maximizing agents able to find their optimal action were giving way as both practical experience and the exact sciences told a different story.

The rationality employed by firms today is not the capacity to choose the best action–what Simon calls substantive rationality. It is the capacity to engage in steps to discover better ways of acting–procedural rationality.

So we proceed step by step from the simple caricature of the firm depicted in textbooks to the complexities of real firms in the real world of business. At each step towards realism, the problem gradually changes from choosing the right course of action (substantive rationality) to finding a way of calculating, very approximately, where a good course of action lies (procedural rationality). With this shift, the theory of the firm becomes a theory of estimation under uncertainty and a theory of computation.

Simon goes on to briefly describe the fields that he believes are poised to drive the strategic behavior of firms. These are Operations Research (OR) and artificial intelligence (AI). The goal of both these fields is to translate problems into mathematical specifications that can be executed by computers. There is some variation within these fields as to whether they aim at satisficing solutions or perfect answers to combinatorial problems, but for the purposes of this article they are the same–certainly the fields have cross-pollinated much since 1969.

Simon’s analysis was prescient. The impact of OR and AI on organizations simply can’t be overstated. My purpose in writing this is to point to the still unsolved analytical problems of this paradigm. Simon notes that the computational techniques he refers to percolate only so far up the corporate ladder.

OR and AI have been applied mainly to business decisions at the middle levels of management. A vast range of top management decisions (e.g. strategic decisions about investment, R&D, specialization and diversification, recruitment, development, and retention of managerial talent) are still mostly handled traditionally, that is, by experienced executives’ exercise of judgment.

Simon’s proposal for how to make these kinds of decisions more scientific is the paradigm of “expert systems”, which did not, as far as I know, take off. However, these were early days, and indeed at large firms AI techniques are used to make these kinds of executive decisions. Though perhaps equally, executives defend their own prerogative for human judgment, for better or for worse.

The unsolved scientific problem that I find very motivating is based on a subtle divergence of how the intellectual fields have proceeded. Surely the economic value and consequences of business activities are wrapped up not in the behavior of an individual firm, but of many firms. Even a single firm contains many agents. While in the past the need for mathematical tractability led to assumptions of perfect rationality for these agents, we are now far past that and “the theory of the firm becomes a theory of estimation under uncertainty and a theory of computation.” But the theory of decision-making under uncertainty and the theory of computation are largely poised to address the problem of solving a single agent’s specific task. The OR or AI system fulfills a specific function of middle management; it does not, by and large, oversee the interactions between departments, and so on. The complexity of what is widely called “politics” is not yet captured within the paradigms of AI, though anybody with an ounce of practical experience would note that politics is part of almost any organizational life.

How can these kinds of problems be addressed scientifically? What’s needed is a formal, computational framework for modeling the interaction of heterogeneous agents, and a systematic method of comparing the validity of these models. Interagential activity is necessarily quite complex; this is complexity that does not fit well into any available machine learning paradigm.

References

Simon, H. A. (1969). The sciences of the artificial. MIT Press, Cambridge, MA.

Bridging between transaction cost and traditional economics

Some time ago I was trying to get my head around transaction cost economics (TCE) because of its implications for the digital economy and cybersecurity (1, 2, 3, 4, 5). I felt like I had a good grasp of the relevant theoretical claim of TCE, which is the interaction between asset specificity and the make-or-buy decision. But I didn’t have a good sense of the mechanism that drove that claim.

I worked it out yesterday.

Recall that in the make or buy decision, a firm is determining whether or not to make some product in-house or to buy it from the market. This is a critical decision made by software and data companies, as often these businesses operate by assembling components and data streams into a new kind of service; these services often are the components and data streams used in other firms. And so on.

The most robust claim of TCE is that if the asset (component, service, data stream) is very specific to the application of the firm, then the firm will be more likely to make it. If the asset is more general-purpose, then the firm will buy it as a commodity on the market.

Why is this? TCE does not attempt to describe this phenomenon in a mathematical model, at least as far as I have found. Nevertheless, this can be worked out with a much more general model of the economy.

Assume that for some technical component there is a fixed cost f and a marginal cost c. Consider two extreme cases: in case A, the asset is so specific that only one firm will want to buy it. In case B, the asset is very general, so there are many firms that want to purchase it.

In case A, a vendor will have costs of f + c and so will only make the good if the buyer can compensate them at least that much. At the point where the buyer is paying for both the fixed and marginal costs of the product, they might as well own it! If there are other discovered downstream uses for the technology, that’s a revenue stream. Meanwhile, since the vendor in this case will have lock-in power over the buyer (because switching will mean paying the fixed cost to ramp up a new vendor), that gives the vendor market power. So, better to make the asset.

In case B, there’s broader market demand. It’s likely that there are already multiple vendors in place who have made the fixed cost investment. The price to the buying firm is going to be closer to c, since competition pushes the market price toward marginal cost over time, as opposed to c + f, which includes the fixed cost. Because there are multiple vendors, lock-in is not such an issue. Hence the good becomes a commodity.
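As a back-of-the-envelope illustration (my own stylization, not a formal TCE model), here is the comparison in code. The fixed cost F, marginal cost c, number of prospective buyers, and lock-in markup are all made-up parameters; the point is only that amortization and competition drive the buy price toward c when the asset is general-purpose.

```python
def buy_price(F, c, n_buyers, lockin_markup=0.0):
    """Stylized per-buyer price for an outsourced component: the vendor must
    recover its fixed cost F across its buyers, plus a markup when the buyer
    is locked in to this vendor."""
    return c + F / n_buyers + lockin_markup

def make_cost(F, c):
    """Cost of producing the component in-house: pay the whole fixed cost yourself."""
    return F + c

F, c = 100.0, 10.0

# Case A: the asset is so specific that only one firm wants it. Buying is no
# cheaper than making, and the lone vendor can add a lock-in markup on top.
print("Case A  make:", make_cost(F, c),
      " buy:", buy_price(F, c, n_buyers=1, lockin_markup=20.0))

# Case B: a general-purpose asset with many buyers. The fixed cost is amortized
# away and competition keeps the markup near zero, so buying dominates.
print("Case B  make:", make_cost(F, c),
      " buy:", buy_price(F, c, n_buyers=50))
```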

A few notes on the implications of this for the informational economy:

  • Software libraries have high fixed cost and low marginal cost. The tendency of companies to tilt to open source cores with their products built on top is a natural result of the market. The modularity of open source software is in part explained by the ways “asset specificity” is shaped exogenously by the kinds of problems that need to be solved. The more general the problem, the more likely the solution has been made available open source. Note that there is still an important transaction cost at work here, the search cost. There’s just so many software libraries.
  • Data streams can vary a great deal as to whether and how they are asset specific. When data streams are highly customized to the downstream buyer, they are specific; the customization is both costly to the vendor and adding value to the buyer. However, it’s rarely possible to just “make” data: it needs to be sourced from somewhere. When firms buy data, it is normally in a subscription model that takes into account industrial organization issues (such as lock in) within the pricing.
  • Engineering talent, and related labor costs, are interesting in that for a proprietary system, engineering human capital gains tend to be asset specific, while for open technologies engineering skill is a commodity. The structure of the ‘tech business’, which requires mastery of open technology in order to build upon it a proprietary system, is a key dynamic that drives the software engineering practice.

There are a number of subtleties I’m missing in this account. I mentioned search costs in software libraries. There’s similar costs and concerns about the inherent riskiness of a data product: by definition, a data product is resolving some uncertainty with respect to some other goal or values. It must always be a kind of credence good. The engineering labor market is quite complex in no small part because it is exposed to the complexities of its products.

State regulation and/or corporate self-regulation

The dust from the recent debates over regulation versus industry self-regulation in the data/tech/AI industry appears to be settling. The smart money is on regulation and self-regulation being complementary for attaining the goal of an industry dominated by responsible actors. This trajectory leads to centralized corporate power that is led from the top; it is a Hamiltonian, not Jeffersonian, solution, in Pasquale’s terms.

I am personally not inclined towards this solution. But I have been convinced to see it differently after a conversation today about environmentally sustainable supply chains in food manufacturing. Nestle, for example, has been internally changing its sourcing practices toward more sustainable chocolate. It’s able to finance this change from its profits, and when it does change its internal policy, it operates on a scale that’s meaningful. It is able to make this transition in part because non-profits, NGOs, and farmers’ cooperatives laid the groundwork for sustainable sourcing external to the company. This lowers the barriers to having Nestle switch over to new sources–they have already been subsidized through philanthropy and international aid investments.

Supply chain decisions, ‘make-or-buy’ decisions, are the heart of transaction cost economics (TCE) and critical to the constitution of institutions in general. What this story about sustainable sourcing tells us is that the configuration of private, public, and civil society institutions is complex, and that there are prospects for agency and change in the reconfiguration of those relationships. This is no different in the ‘tech sector’.

However, this theory of economic and political change is not popular; it does not have broad intellectual or media appeal. Why?

One reason may be that, while it is a critical part of social structure, much of the supply chain is in the private sector, and hence is opaque. This is not a matter of transparency or interpretability of algorithms. This is about the fact that private institutions, by virtue of being ‘private’, do not have to report everything that they do and, probably, shouldn’t. But since so much of what is done by the massive private sector is of public import, there’s a danger of the privatization of public functions.

Another reason why this view of political change through the internal policy-making of enormous private corporations is unpopular is because it leaves decision-making up to a very small number of people–the elite managers of those corporations. The real disparity of power involved in private corporate governance means that the popular attitude towards that governance is, more often than not, irrelevant. Even less than political elites, corporate elites are not accountable to a constituency. They are accountable, I suppose, to their shareholders, whose material interests are disconnected from political will.

This disconnected shareholder will is one of the main reasons why I’m skeptical about the idea that large corporations and their internal policies are where we should place our hopes for moral leadership. But perhaps what I’m missing is the appropriate intellectual framework for how this will is shaped and what drives these kinds of corporate decisions. I still think TCE might provide insights that I’ve been missing. But I am on the lookout for other sources.

Ordoliberalism and industrial organization

There’s a nice op-ed by Wolfgang Münchau in FT, “The crisis of modern liberalism is down to market forces”.

Among other things, it reintroduces the term “ordoliberalism”, a particular Germanic kind of enlightened liberalism designed to prevent the kind of political collapse that had precipitated the war.

In Münchau’s account, the key insight of ordoliberalism is its attention to questions of social equality, but not through the mechanism of redistribution. Rather, ordoliberal interventions primarily target industrial organization, favoring small to mid-sized companies.

Given that Germany’s economy remains robust and its politics so far relatively stable, it’s interesting that ordoliberalism isn’t discussed more.

Another question that must be asked is to what extent the rise of computational institutions challenges the kind of industrial organization recommended by ordoliberalism. If computation induces corporate concentration, and there are not good policies for addressing that, then that’s due to a deficiency in our understanding of what ‘market forces’ are.