One of the things that made the research traditions of cognitive science and artificial intelligence so productive was the duality between them.
Cognitive science tried to understand the mind at the same time that artificial intelligence tried to discover methods for reproducing the functions of cognition artificially. Artificial intelligence techniques became hypotheses for how the mind worked, and empirically confirmed theories of how the mind worked inspired artificial intelligence techniques.
There was a lot of criticism of these fields at one point. Writers like Hubert Dreyfus, Lucy Suchman, and Winograd and Flores leveled especially heavy criticism at a paradigm now called "Good Old Fashioned AI" (GOFAI): the kind of AI that used static, explicit representations of the world instead of machine learning.
Those critiques are decades old. Today, machine learning and cognitive psychology (including cognitive neuroscience) are in happy conversation, with far more successful models of learning that have, by and large, absorbed the earlier criticisms.
Some people think that these old critiques still apply to modern methods in AI. Isn't AI still AI? I believe the main confusion is that many people don't realize that "computable" means something very precisely mathematical: a function is computable if it can be calculated by a partial recursive function (equivalently, by a Turing machine). It just so happens that computers, the devices we know and love, can compute any computable function.
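To make the "partial" in "partial recursive" concrete, here is a minimal sketch of unbounded minimization (the μ-operator), the operation that separates the partial recursive functions from the always-terminating primitive recursive ones. The function name and predicate are illustrative, not from any particular library.

```python
def mu(p):
    """Unbounded minimization: return the least natural number n
    such that p(n) is true. If no such n exists, this loops forever --
    that possibility of non-termination is exactly why the class is
    called *partial* recursive."""
    n = 0
    while not p(n):
        n += 1
    return n

# Halts: the least n with n*n >= 10 is 4.
mu(lambda n: n * n >= 10)  # -> 4

# mu(lambda n: False) would never return: the function is simply
# undefined at that input, which is permitted for a partial function.
```

Any physical computer can run this search, but the definition itself makes no reference to hardware; computability is a property of the function, not the device.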
So what changed in AI was not the use of computation to solve problems, but how computation was used. Similarly, while there was a period when cognitive psychology tried to model mental processes using a particular kind of computable representation, and those models are now known to be inaccurate, that doesn't mean the mind doesn't perform other forms of computation.
A similar kind of relationship is going on between the study of complex systems, especially complex social systems, and the techniques of multi-agent system modeling. Multi-agent system modeling is, as Epstein clarifies, about generative modeling of social processes that is computable in the mathematical sense, but the fact that physical computers are involved is incidental. Multi-agent systems are supposed to be a more realistic way of modeling agent interactions than, say, neoclassical game theory, in the same way that machine learning is a more realistic way of modeling cognition than GOFAI.
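As an illustration of what "generative modeling of social processes" looks like in practice, here is a toy sketch in the spirit of Schelling's segregation model, a canonical example in the generative social science tradition Epstein describes. All names and parameter values are illustrative assumptions, not a reference implementation.

```python
import random

def schelling_step(grid, threshold=0.3):
    """One sweep of a toy Schelling-style model on a torus.
    Cells hold 0 (empty), 1, or 2 (two agent types). Any agent whose
    fraction of same-type occupied neighbors falls below `threshold`
    relocates to a random empty cell. Mutates and returns `grid`."""
    n = len(grid)

    def similar_frac(r, c):
        me = grid[r][c]
        neighbors = [grid[(r + dr) % n][(c + dc) % n]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)]
        occupied = [x for x in neighbors if x != 0]
        if not occupied:
            return 1.0  # no neighbors, so no reason to move
        return sum(1 for x in occupied if x == me) / len(occupied)

    empties = [(r, c) for r in range(n) for c in range(n) if grid[r][c] == 0]
    agents = [(r, c) for r in range(n) for c in range(n) if grid[r][c] != 0]
    for (r, c) in agents:
        if grid[r][c] != 0 and empties and similar_frac(r, c) < threshold:
            # Unhappy agent swaps into a random empty cell.
            dr, dc = empties.pop(random.randrange(len(empties)))
            grid[dr][dc] = grid[r][c]
            grid[r][c] = 0
            empties.append((r, c))
    return grid

# Usage: a 10x10 grid with 40 agents of each type and 20 empty cells.
random.seed(0)
cells = [1] * 40 + [2] * 40 + [0] * 20
random.shuffle(cells)
grid = [cells[i * 10:(i + 1) * 10] for i in range(10)]
for _ in range(5):
    grid = schelling_step(grid)
```

The point, echoing Epstein, is that the model is a computable generative process: macro-level patterns (clustering) emerge from micro-level rules, and the silicon it happens to run on is incidental to the claim being made.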
Despite (or, more charitably, because of) the critiques leveled against them, cognitive science and artificial intelligence have developed into widely successful and highly respected fields. We should expect complex systems and multi-agent systems research to follow a similar trajectory.