Explainable AI and computational approaches to macroeconomic theory

I have spent some time working with and around people concerned with the ethical implications of AI. A question that arises frequently in that context is to what extent automated decisions made by computational systems are “explainable” or “scrutable” (e.g. Selbst and Barocas, 2018). An important motivation for this line of inquiry is the idea that for AI systems to be effectively regulated by the Rule of Law, they need to be comprehensible to lawyers and understood within lawyerly discursive practice (Hildebrandt, 2015). This is all very interesting, but analyses of the problem and its potential solutions rarely transcend the disciplinary silos from which the ‘explainability’ concerns originate. I’ve written my opinions about this quite a bit on this blog and won’t reiterate them.

Instead, I’ve changed what I’m working on. Now I am contributing to open source software libraries for computational methods in macroeconomics, such as the Heterogeneous Agents Resources and toolKit (HARK). This is challenging and rewarding work, in part because it bumps up against many key issues in the way computational methods are changing social science education. It is in many ways related to the explainable AI problem, though in some sense it is the other side of the coin.

I’ll try to explain. Macroeconomic theory, which deals with such problems as how the economy as a whole reacts to changing trends in saving, consumption, and employment, and how agents within the economy react to those aggregate phenomena, has a long history associated with some heavyweight economists: Keynes, Mankiw, etc. It is a deeply mathematical field that is taken seriously by central banks around the world and, by extension, private banks as well. Regulating the economy is an important job that requires expertise, and it is an operation understood in intrinsically quantitative terms; whatever one may think about the field of economics in general or its specific manifestations in history, it’s undeniable that the world needs economists of one kind or another.

So we have here a form of public policy expertise that is not discursive in the same sense that lawyerly practice is discursive. Economics has always imagined itself to be a science, however hotly contested that claim may be. It is also a field that does not shy away from having specialized disciplinary knowledge that must be accessed through demanding training. So economics would seem to be a good domain for computational methods to take root.

I’m finding that there are still challenges of interpretation in this field, but they are somewhat different. Consider for now only the class of economic models that are built from a priori assumptions, without any fitting to empirical data. Classically, economic models were constrained by their analytic tractability: the ability of the economist to derive the results of the model through symbolic manipulation of the model’s mathematical terms. This led to the adoption of many assumptions of questionable realism, which have arguably contributed to the discrediting of economic theory since. But it also led to models with closed-form solutions, which have the dual advantage of being computationally cheap and easy to interpret, because the relationship between variables is explicit.
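To make “explicit” concrete, here is a toy illustration of my own (not an example from HARK): the classic infinite-horizon “cake-eating” problem with log utility has a well-known closed-form policy, so the relationship between wealth and consumption can be read straight off the formula.

```python
# Toy closed-form model: infinite-horizon "cake eating" with log utility.
# The agent maximizes sum_t beta**t * log(c_t) subject to m_{t+1} = m_t - c_t.
# The optimal policy is explicit: consume a fixed fraction of remaining wealth.

def consumption(m, beta=0.96):
    """Optimal consumption out of remaining wealth m under log utility."""
    return (1 - beta) * m

# Because the rule is explicit, its properties are transparent on inspection:
# consumption is linear in wealth, and more patient agents (higher beta)
# consume less of the cake today.
for b in (0.90, 0.96, 0.99):
    print(b, consumption(100.0, beta=b))
```

The comparative statics here require no computation at all: they are visible in the formula itself, which is exactly the expository advantage of a closed-form solution.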

With computational models, the modeler has more flexibility. They can plug the terms of the model into a simulation and compute the result. But while the relationship between the inputs and outputs of the simulation may be observable in some sense, it is not proven. The simulation is not as good for purposes of exposition, or teaching, or explanation.
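To sketch the contrast, here is another toy example of my own (illustrative parameter choices throughout, and again not HARK code): the finite-horizon cake-eating problem with log utility has a known closed form, c*(m) = m(1 − β)/(1 − β^T), but a grid-based backward-induction solver only exhibits that relationship numerically, point by point.

```python
import numpy as np

# Solve a T-period cake-eating problem by backward induction on a wealth
# grid, then compare the computed policy to the known closed form
# c*(m) = m * (1 - beta) / (1 - beta**T).
beta, T = 0.96, 10
grid = np.linspace(0.01, 100.0, 1000)              # wealth grid

c = grid[:, None] - grid[None, :]                  # c[i, j]: move from m_i to m_j
util = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)

v = np.log(grid)                                   # final period: eat everything
for _ in range(T - 1):
    choice = util + beta * v[None, :]              # Bellman right-hand side
    v = choice.max(axis=1)

# Policy with T periods remaining, read off the final Bellman step.
c_numeric = grid - grid[choice.argmax(axis=1)]
c_closed = grid * (1 - beta) / (1 - beta ** T)

# The two agree up to grid error, but the solver only *exhibits* the
# relationship at grid points -- it does not prove the policy is linear in m.
print(c_numeric[-1], c_closed[-1])
```

The numbers match, but nothing in the solver’s output tells you *why* consumption is a fixed fraction of wealth; that insight still has to come from the analytic derivation.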

This is quite interesting, as it is a case where the explainability of a computational system is problematic, but not because of any numeric or technical illiteracy on the part of the model’s reader, or of any intentional secrecy, but rather because of the complexity of the simulation (Burrell, 2016). I have been discussing model building only, not model fitting, so the complexity in this case does not come from the noisiness of reality and the data it provides. Rather, it results entirely from the internals of the model.

It is by now a truth often spoken in jest that most machine learning today is some form of glorified (generalized) linear regression. The class of models considered by machine learning methods today is infinitely wide but ultimately shallow. Even when the need to understand the underlying phenomenon is abandoned, the available range of algorithms and hardware constraints limit machine-learnt models to those that are tractable by, say, a GPU.

But something else can be known.

References

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.

Hildebrandt, Mireille. Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing, 2015.

Selbst, Andrew D., and Solon Barocas. “The intuitive appeal of explainable machines.” Fordham L. Rev. 87 (2018): 1085.