I’m building something new
Push has come to shove, and I have started building something new.
I’m not building it alone, thank God, but it’s a very petite open source project at the moment.
I’m convinced that it is actually something new. Some of my colleagues are excited to hear that what I’m building will exist for them shortly. That’s encouraging! I also tried to build this thing in the context of another project that’s doing something similar. I was told that I was being too ambitious and couldn’t pull it off. That wasn’t exactly encouraging, but was good evidence that I’m actually doing something new. I will now do this thing and take the credit. So much the better for me.
What is it that I’m building?
Well, you see, it’s software for modeling economic systems. Or sociotechnical systems. Or, broadly speaking, complex systems with agents in them. Also, doing statistics with those models: fitting them to data, understanding what emergent properties occur in them, exploring counterfactuals, and so on.
I will try to answer some questions I wish somebody would ask me.
Q: Isn’t that just agent-based modeling? Why aren’t you just using NetLogo or something?
A: Agent-based modeling (ABM) is great, but it’s a very expansive term that means a lot of things. Very often, ABMs consist of agents whose behavior is governed by simple rules, rather than directed towards accomplishing goals. That notion of “agent” in ABM is almost entirely opposed to the notion of “agent” used in AI — propagated by Stuart Russell, for example. To AI people, goal-directedness is essential for agency. I’m not committed to rational behavior in this framework — I’m not an economist! But I do require the ability to train agents’ decision rules with respect to their goals.
There are a couple of other ways in which I’m not doing paradigmatic ABM with this project. One is that I’m not focused on agents moving in 2D or 3D space. Rather, I’m much more interested in the settings defined by systems of structural equations. So, more continuous state spaces. I’m basing this work on years of contributing to heterogeneous agent macroeconomics tooling, and my frustration with that paradigm. So, no turtles on patches. I anticipate spatial and even geospatial extensions to what I’m building would be really cool and useful. But I’m not there yet.
I think what I’m working on is ABM in the extended sense that Rob Axtell and Doyne Farmer use the term, and I hope to one day show them what I’m doing and for them to think it’s cool.
Q: Wait, is this about AI agents, as in Generative AI?
A: Ahaha… mainly no, but a little yes. I’m talking about “agents” in the more general sense used before the GenAI industry tried to make the word about them. I don’t see Generative AI or LLMs as a fundamental part of what I’m building. However, I do see what I’m building as a tool for evaluating the economic impact and trustworthiness of GenAI systems by modeling their supply chains and social consequences. And I can imagine deeper integrations with “(generative) agentic AI” down the line. I am building a tool, and an LLM might engage it through “tool use”. It’s also possible, I suppose, to make the agents inside the model use LLMs somehow, though I don’t see a good reason for that at the moment.
Q: Does it use AI at all?
A: Yes! I mean, perhaps you know that “AI” has meant many things and much of what it has meant is now considered quite mundane. But it does use deep learning, which is something that “AI” means now. In particular, part of the core functionality that I’m trying to build into it is a flexible version of the deep learning econometrics methods invented not-too-long-ago by Lilia and Serguei Maliar. I hope to one day show this project to them, and for them to think it’s cool. Deep learning methods have become quite popular in economics, and this is in some ways yet-another-deep-learning-economics project. I hope it has a few features that distinguish it.
Q: How is this different from somebody else’s deep learning economics analysis package?
A: Great question! There are a few key ways that it’s different. One is that it’s designed around a clean separation between model definition and solution algorithms. There will be no model-specific solution code in this project. It’s truly intended to be a library, comparable to scikit-learn, but for systems of agents. In fact, I’m calling this project scikit-agent. You heard it here first!
Separating the model definitions from the solution algorithms means that there’s a lot more flexibility in how models are composed. This framework is based on the idea that parts of a model can be “blocks” which can be composed into more complex models. The “blocks” are bundles of structural equations, which can include state, control, and reward variables.
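To make the “blocks” idea concrete, here is a minimal sketch in Python. None of these names are actual scikit-agent API — the `Block` class, `compose` function, and variable names are all hypothetical — but they illustrate the idea of bundling structural equations with named state, control, and reward variables, and composing bundles into a larger model.

```python
# Hypothetical sketch: a "block" bundles structural equations over named
# state, control, and reward variables. Not real scikit-agent code.

from dataclasses import dataclass, field


@dataclass
class Block:
    """A bundle of structural equations over named variables."""
    name: str
    states: list      # state variables carried between periods
    controls: list    # variables chosen by an agent
    rewards: list     # variables the agent's objective depends on
    equations: dict = field(default_factory=dict)  # variable -> callable


# Example block: a consumption decision over wealth.
consumption = Block(
    name="consumption",
    states=["wealth"],
    controls=["spend"],
    rewards=["utility"],
    equations={
        "utility": lambda spend: spend ** 0.5,
        "wealth": lambda wealth, spend, rate: (wealth - spend) * (1 + rate),
    },
)


def compose(*blocks):
    """Compose blocks into a larger model by merging their variables
    and equations. Real composition would also check that shared
    variable names line up consistently."""
    model = Block(
        name="+".join(b.name for b in blocks),
        states=sum((b.states for b in blocks), []),
        controls=sum((b.controls for b in blocks), []),
        rewards=sum((b.rewards for b in blocks), []),
    )
    for b in blocks:
        model.equations.update(b.equations)
    return model
```

The point of the sketch is the design choice: a block is pure model definition — no solution logic — so the same block can be handed to different solution algorithms, or glued to other blocks, without rewriting any code.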
These ‘blocks’ are symbolically defined systems or environments. “Solving” the agent strategies in the multi-agent environment will be done with deep learning, otherwise known as artificial neural networks. So I think that it will be fair to call this framework a “neurosymbolic AI system”. I hope that saying that makes it easier to find funding for it down the line :)
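Here is a toy illustration of what “solving agent strategies with deep learning” means, under stated assumptions: a tiny one-hidden-layer network maps the state (wealth) to a spending fraction, and its weights are tuned by a crude random-search ascent on the model’s reward. This is not scikit-agent code, and the Maliar-style methods use gradient-based training rather than random search; the utility function, discount factor, and network shape here are all invented for illustration.

```python
# Toy sketch: train a neural-network policy on a symbolically defined
# one-period consumption/savings problem. Illustrative only.

import numpy as np

rng = np.random.default_rng(0)
beta = 0.96  # discount factor (assumed for this example)


def policy(params, wealth):
    """Tiny 1 -> 8 -> 1 network mapping wealth to a spending fraction."""
    w1, b1, w2, b2 = params
    hidden = np.tanh(wealth[:, None] * w1 + b1)
    frac = 1 / (1 + np.exp(-(hidden @ w2 + b2)))  # sigmoid keeps it in (0, 1)
    return frac.ravel()


def objective(params, wealth):
    """Expected reward: sqrt utility of spending plus discounted
    sqrt value of savings (a stand-in continuation value)."""
    spend = policy(params, wealth) * wealth
    return np.mean(np.sqrt(spend) + beta * np.sqrt(wealth - spend))


# Initialize network weights and a sample of training states.
params = [rng.normal(0, 0.1, (8,)), np.zeros(8),
          rng.normal(0, 0.1, (8,)), np.zeros(1)]
wealth_sample = rng.uniform(1.0, 10.0, size=256)

# Crude random-search "training loop": keep perturbations that help.
best = objective(params, wealth_sample)
for _ in range(2000):
    trial = [p + rng.normal(0, 0.05, p.shape) for p in params]
    val = objective(trial, wealth_sample)
    if val > best:
        params, best = trial, val
```

For this particular objective the optimal spending fraction is 1/(1 + beta²), about 0.52, so the trained policy should settle near a constant fraction. The structure of the sketch — a symbolic model definition on one side, a generic policy-training loop on the other — is the separation the framework is built around.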
Q: That sounds a little like causal game theory, or multi-agent influence diagrams. Are those part of this?
A: In fact, yes, so glad you asked. I think there’s a deep equivalence between multi-agent influence diagrams and ABM/computational economics which hasn’t been well explored. There are small notational differences that keep these communities from communicating. There are also a handful of substantively difficult theoretical issues that need to be settled with respect to, say, under what conditions a dynamic structural causal game can be solved using multi-agent reinforcement learning. These are cool problems, and I hope the thing I’m building implements good solutions to them.
Q: So, this is a framework for modeling dynamic Pearlian causal models with multiple goal-directed agents, solving those models for agent strategies, and using those models econometrically?
A: Exactly.
Q: Does the thing you are building have any practical value? Or is it just more weird academic code?
A: I have high hopes that this thing I’m building could have a lot of practical value. Rigorous analysis of complex sociotechnical and economic systems remains a hard problem in finance, for example, as well as in public policy, insurance, international relations, and other fields. I do hope what I’m building interfaces well with real data to help with significant decision-making. These are problems that Generative AI is quite bad at, I believe. I’m trying to build a strong, useful foundation for working with statistical models that include agents in them. This is more difficult than regression or even ‘transformer’-based learning from media, because the agents are solving optimization problems inside the model.
Q: What are the first applications you have in mind for this tool?
A: I’m motivated to build this because I think it’s needed to address questions in technology policy and design. This is the main subject of my NSF-funded research over the past several years. Here are some problems I’m actively working on which overlap with the scope of this tool:
- Integrating Differential Privacy and Contextual Integrity. I have a working paper with Rachel Cummings where we use Structural Causal Games (SCGs) to set up the parameter tuning of a differentially private system as a mechanism design problem. The tool I’m building will be great at representing SCGs. With it, the theoretical technique can be used in practice.
- Understanding the Effects of Consumer Finance Policy. I’m working with an amazing team on a project modeling consumer lending and the effects of various consumer protection regulations. We are looking at anti-usury laws, nondiscrimination rules, forgiveness of negative information, the use of alternative data by fintech companies, and so on. This policy analysis involves comparing a number of different scenarios and looking for what regulations produce which results, robustly. I’m building a tool to solve this problem.
- AI Governance and Fiduciary Duties. A lot of people have pointed out that effective AI governance requires an understanding of AI supply chains. AI services rely on complex data flows through multiple actors, which are often imperfectly aligned and incompletely contracted. They also depend on physical data centers and the consumption of energy. This raises many questions around liability and quality control that wind up ultimately being about institutional design rather than neural network architectures. I’m building a tool to help reason through these institutional design questions. In other words, I’m building a new way to do threat modeling on AI supply chain ecosystems.
Q: Really?
A: Yes! I sometimes have a hard time wrapping my own head around what I’m doing, which is why I’ve written out this blog post. But I do feel very good about what I’m working on at the moment. I think it has a lot of potential.
