
Ashby’s Law and AI control

I’ve recently discovered Ashby’s Law, also known as the First Law of Cybernetics, by reading Stafford Beer’s “Designing Freedom” lectures. Ashby’s Law is a powerful idea, one I’ve been grasping at intuitively for some time. For example, here I was looking for something like it and thought I could get it from the Data Processing Inequality in information theory. I have not yet grokked the mathematical definition of Ashby’s Law, which I gather is in Ross Ashby’s An Introduction to Cybernetics. Though I am not sure yet, I expect the formulation there could use an update. But if I am right about its main claims, I think the argument of this post will stand.

Ashby’s Law is framed in terms of ‘variety’, which is the number of states that it is possible for a system to be in. A six-sided die has six possible states (if you’re just looking at the top of it). A laptop has many more. A brain has many more even than that. A complex organization with many people in it, all with laptops, has even more. And so on.
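To make the combinatorics concrete, here is a minimal sketch (my own illustration, not from Beer or Ashby; the helper name variety_bits is mine) of why variety explodes as systems are composed: the state counts of independent parts multiply, so their log-varieties add.

import math

def variety_bits(n_states: int) -> float:
    """Variety measured in bits: log2 of the number of distinguishable states."""
    return math.log2(n_states)

die = 6                # a six-sided die, read from the top: 6 states
two_dice = die * die   # two dice observed jointly: 36 states, varieties multiply

print(variety_bits(die))       # ~2.58 bits
print(variety_bits(two_dice))  # ~5.17 bits -- log-varieties add

# Even a modest laptop dwarfs this: 1 GiB of memory alone has
# 2 ** (8 * 2**30) possible states, i.e. 8,589,934,592 bits of variety,
# far too many to enumerate, which is why we report the logarithm.
print(8 * 2**30)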

The law can be stated in many ways. One of them is that:

When the variety or complexity of the environment exceeds the capacity of a system (natural or artificial), the environment will dominate and ultimately destroy that system.

The law is about the relationship between a system and its environment. Or, in another sense, it is about a system to be controlled and another system that tries to control it. The claim is that the control unit needs at least as much variety as the system to be controlled in order to be effective.
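For reference, the quantitative statement usually given in textbooks (a paraphrase I have not yet checked against An Introduction to Cybernetics) bounds the variety of outcomes O that a regulator R can achieve against disturbances D:

\[
V(O) \;\ge\; \frac{V(D)}{V(R)},
\qquad \text{or, in logarithmic terms,} \qquad
\log V(O) \;\ge\; \log V(D) - \log V(R).
\]

Only additional variety in the regulator can force down the variety of outcomes; hence the usual slogan, “only variety can absorb variety.”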

This reminds me of an argument I had with a superintelligence theorist back when I was thinking about such things. The Superintelligence people, recall, worry about an AI getting the ability to improve itself recursively and causing an “intelligence explosion”. Its own intelligence, so to speak, explodes, surpassing all other intelligent life and giving it total domination over the fate of humanity.

Here is the argument that I posed a few years ago, reframed in terms of Ashby’s Law:

  • The AI in question is a control unit, C, and the world it would control is the system, S.
  • For the AI to have effective domination over S, C would need at least as much variety as S.
  • But S includes C within it. The control unit is part of the larger world.
  • Hence, no C can perfectly control S.
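A toy way to see the last two steps (again my own sketch, with hypothetical state labels): if the world S factors into the controller C and everything else E, then the state counts multiply, so S always has at least as much variety as C, with a strict excess whenever E has more than one state.

from itertools import product

C = ["c0", "c1", "c2"]        # hypothetical controller states
E = ["e0", "e1", "e2", "e3"]  # everything in the world besides C

S = list(product(C, E))       # world states = (controller, rest) pairs

print(len(C))  # 3   -- variety of the controller
print(len(S))  # 12  -- variety of the world it must control

# Requisite variety demands variety(C) >= variety(S) for effective control,
# but variety(S) == variety(C) * variety(E) >= variety(C),
# strictly greater whenever E has more than one state.
assert len(S) == len(C) * len(E)
assert len(S) > len(C)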

Superintelligence people will no doubt be unsatisfied by this argument. The AI need not be effective in the sense dictated by Ashby’s Law. It need only be capable of outmaneuvering humans. And so on.

However, I believe the argument gets at why it is difficult for complex control systems to ever truly master the world around them. It is very difficult for a control system to have effective control over itself, let alone over itself in a larger systemic context, without some kind of order, imposed from without, constraining the behavior of the total system (the system including the control unit). The idea that it is possible to gain total mastery or domination through an AI or better data systems is a fantasy, because the technical controls add their own complexity to the world that is to be controlled.

This is a bit of a paradox, as it raises the question of how any control units work at all. I’ll leave this for another day.

second-order cybernetics

The mathematical foundations of modern information technology are:

  • The logic of computation and complexity, developed by Turing, Church, and others. These mathematics specify the nature and limits of the algorithm.
  • The mathematics of probability and, by extension, information theory. These specify the conditions and limitations of inference from evidence, and the conditions and limits of communication.

Since the discovery of these mathematical truths and their myriad applications, there have been those who have recognized that these truths apply both to physical objects, such as natural life and artificial technology, and also to lived experience, mental concepts, and social life. Humanity and nature obey the same discoverable, mathematical logic. This allowed for a vision of a unified science of communication and control: cybernetics.

There has been much intellectual resistance to these facts. One of the most cogent critiques is Understanding Computers and Cognition, by Terry Winograd and Fernando Flores. Terry Winograd is the AI professor who advised the founders of Google. His credentials are beyond question. And so the fact that he coauthored a critique of “rationalist” artificial intelligence with Fernando Flores, the Chilean entrepreneur, politician, and philosophy PhD, is significant. In this book, the two authors base their critique of AI on the work of Humberto Maturana, a second-order cyberneticist who believed that life’s organization and phenomenology could be explained by a resonance between organism and environment, which he called structural coupling. Theories of artificial intelligence are incomplete when not embedded in a more comprehensive theory of the logic of life.

I’ve begun studying this logic, which was laid out by Francisco Varela in 1979. Notably, like the other cybernetic logics, it is an account of both the physical and phenomenological aspects of life. Significantly, Varela claims that his work is a foundation for an observer-inclusive science, one that addresses some of the paradoxes of the physicist’s conception of the universe and humanity’s place in it.

My hunch is that these principles can be applied to social scientific phenomena as well, as organizations are just organisms bigger than us. This is a rather strong claim, and difficult to test. However, after years of study it seems to me the necessary conclusion of the available theory. It also seems consistent with recent trends in economics towards complexity and institutional economics, and with the now rather widespread intuition that the economy functions as a complex ecosystem.

This would be a victory for science if we could only formalize these intuitions well enough either to make these theories testable, or to make them so communicable as to be recognized as ‘proved’ by anyone with the wherewithal to study them.