Digifesto

Tag: artificial intelligence

Managerialism as political philosophy

Technologically mediated spaces and organizations are frequently described by their proponents as alternatives to the state. From David Clark’s maxim of Internet architecture, “We reject: kings, presidents and voting. We believe in: rough consensus and running code”, to cyberanarchist efforts to bypass the state via blockchain technology, to the claims that Google and Facebook, as they mediate between billions of users, are relevant non-state actors in international affairs, to Lessig’s (1999) ever prescient claim that “Code is Law”, there is undoubtedly something going on with technology’s relationship to the state which is worth paying attention to.

There is an intellectual temptation (one that I myself am prone to) to take seriously the possibility of a fully autonomous technological alternative to the state. Something like a constitution written in source code has an appeal: it would be clear, precise, and presumably based on something like a consensus of those who participate in its creation. It is also an idea that can be frightening (Give up all control to the machines?) or ridiculous. The example of The DAO, the Ethereum ‘distributed autonomous organization’ that raised millions of dollars only to have them stolen in a technical hack, demonstrates the value of traditional legal institutions which protect the parties that enter contracts with processes that ensure fairness in their interpretation and enforcement.

It is more sociologically accurate, in any case, to consider software, hardware, and data collection not as autonomous actors but as parts of a sociotechnical system that maintains and modifies it. This is obvious to practitioners, who spend their lives negotiating the social systems that create technology. For those for whom it is not obvious, there are reams of literature on the social embeddedness of “algorithms” (Gillespie, 2014; Kitchin, 2017). These themes are recited again in recent critical work on Artificial Intelligence; there are those who wisely point out that a functioning artificially intelligent system depends on a lot of labor (those who created and cleaned data, those who built the systems it is implemented on, those who monitor the system as it operates) (Kelkar, 2017). So rather than discussing the role of particular technologies as alternatives to the state, we should shift our focus to the great variety of sociotechnical organizations.

One thing that is apparent, when taking this view, is that states, as traditionally conceived, are themselves sociotechnical organizations. This is, again, an obvious point well illustrated in economic histories such as Beniger (1986). Communications infrastructure is necessary for the control and integration of society, let alone effective military logistics. The relationship between the state and the industrial actors developing this infrastructure, whether building roads, running a postal service, laying rail, telegraph, or telephone wires, launching satellites, designing Internet protocols, or now running social media, has always been interesting: a story of great fortunes and shifts in power.

What is apparent after a serious look at this history is that political theory, especially liberal political theory as it developed from the 1700s onward as a theory of the relationship between individuals bound by social contract emerging from nature to develop a just state, leaves out essential facts about how society has actually been governed. Control of communications and control infrastructure has never been equally dispersed and has always been a source of power. Late modern rearticulations of liberal theory and reactions against it (Rawls and Nozick, both) leave out technical constraints on the possibility of governance and even on the constitution of the subject on which a theory of justice would have its ground.

Were political theory to begin from a more realistic foundation, it would need to acknowledge the existence of sociotechnical organizations as a political unit. There is a term for this view, “managerialism”, which, as far as I can tell, is used somewhat pejoratively, like “neoliberalism”. As an “-ism”, it’s implied that managerialism is an ideology. When we talk about ideologies, what we are doing is looking from an external position onto an interdependent set of beliefs in their social context and identifying, through genealogical method or logical analysis, how those beliefs are symptoms of underlying causes that are not precisely as represented within those beliefs themselves. For example, one critiques neoliberal ideology, which purports that markets are the best way to allocate resources and advocates for the expansion of market logic into more domains of social and political life, by pointing out that markets are great for reallocating resources to capitalists, who bankroll neoliberal ideologues, but that many people who are subject to neoliberal policies do not benefit from them. While this is a bit of a parody of both neoliberalism and the critiques of it, you’ll catch my meaning.

We might avoid the pitfalls of an ideological managerialism (I’m not sure what those would be, exactly, having not read the critiques) by taking from it, to begin with, only the urgency of describing social reality in terms of organization and management without assuming any particular normative stake. It will be argued that this is not a neutral stance, because to posit that there is organization, and that there is management, is to offend certain kinds of (mainly academic) thinkers. I get the sense that this reaction is similar to the offense taken by certain critical scholars to the idea that there is such a thing as scientific knowledge, especially social scientific knowledge. Namely, it is offense taken at the idea that a patently obvious fact entails one’s own ignorance of otherwise very important expertise. This is encouraged by the institutional incentives of social science research. Social scientists are required to maintain an aura of expertise even when their particular sub-discipline excludes from its analysis the very systems of bureaucratic and technical management that its university depends on. University bureaucracies are, strangely, in the business of hiding their managerialist reality from their own faculty, as alternative avenues of research inquiry are of course compelling in their own right. When managerialism cannot be contested on epistemic grounds (because the bluff has been called), it can be rejected on aesthetic grounds: managerialism is not “interesting” to a discipline, perhaps because it does not engage with the personal and political motivations that constitute it.

What sets managerialism aside from other ideologies, however, is that when we examine its roots in social context, we do not discover a contradiction. Managerialism is not, as far as I can tell, successful as a popular ideology. Managerialism is attractive only to that rare segment of the population that works closely with bureaucratic management. It is here that the technical constraints of information flow and its potential uses, the limits of autonomy especially as it confronts the autonomies of others, the persistence of hierarchy despite the purported flattening of social relations, and so on become unavoidable features of life. And though one discovers in these situations plenty of managerial incompetence, one also comes to terms with why that incompetence is a necessary feature of the organizations that maintain it.

Little of what I am saying here is new, of course. It is only new in relation to more popular or appealing forms of criticism of the relationship between technology, organizations, power, and ethics. So often the political theory implicit in these critiques is a form of naive egalitarianism that sees a differential in power as an ethical red flag. Since technology can give organizations a lot of power, this generates a lot of heat around technology ethics. Starting from the perspective of an ethicist, one sees an uphill battle against an increasingly inscrutable and unaccountable sociotechnical apparatus. What I am proposing is that we look at things a different way. If we start from general principles about technology and its role in organizations–the kinds of principles one would get from an analysis of microeconomic theory, artificial intelligence as a mathematical discipline, and so on–one can try to formulate the managerial constraints that truly confront society. These constraints are part of how subjects are constituted and should inform what we see as “ethical”. If we can broker between these hard constraints and the societal values at stake, we might come up with a principle of justice that, if unpopular, may at least be realistic. This would be a contribution, at the end of the day, to political theory, not as an ideology, but as a philosophical advance.

References

Beniger, James R. The Control Revolution: Technological and Economic Origins of the Information Society. Harvard University Press, 1986.

Bird, Sarah, et al. “Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI.” (2016).

Gillespie, Tarleton. “The relevance of algorithms.” Media technologies: Essays on communication, materiality, and society 167 (2014).

Kelkar, Shreeharsh. “How (Not) to Talk about AI.” Platypus, 12 Apr. 2017, blog.castac.org/2017/04/how-not-to-talk-about-ai/.

Kitchin, Rob. “Thinking critically about and researching algorithms.” Information, Communication & Society 20.1 (2017): 14-29.

Lessig, Lawrence. “Code is law.” The Industry Standard 18 (1999).

artificial life, artificial intelligence, artificial society, artificial morality

“Everyone” “knows” what artificial intelligence is and isn’t and why it is and isn’t a transformative thing happening in society and technology and industry right now.

But the fact is that most of what “we” “call” artificial intelligence is really just increasingly sophisticated ways of solving a single class of problems: optimization.

Essentially what’s happened in AI is that empirical inference problems are modeled as Bayesian problems, which are then solved using variational inference methods: the Bayesian statistics problem is turned into a tractable optimization problem, which is then solved.
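
To make the reduction concrete, here is a minimal sketch of my own (not taken from any particular AI system) of Bayesian inference recast as optimization: finding the maximum a posteriori estimate of a Gaussian mean by gradient ascent on the log posterior. The function names and hyperparameters are invented for the example; the conjugate closed form provides a check on the answer.

```python
# Illustrative sketch: a Bayesian inference problem turned into an
# optimization problem. Model: mu ~ Normal(0, prior_var), each
# x_i ~ Normal(mu, noise_var). We maximize the log posterior by
# gradient ascent instead of doing inference directly.

def map_estimate(data, prior_var=1.0, noise_var=1.0, lr=0.01, steps=2000):
    """Gradient ascent on log p(mu) + log p(data | mu)."""
    mu = 0.0
    for _ in range(steps):
        # d/dmu of the log posterior: prior pull plus data pull.
        grad = -mu / prior_var + sum(x - mu for x in data) / noise_var
        mu += lr * grad
    return mu

data = [1.8, 2.2, 2.0, 1.9, 2.1]
mu_hat = map_estimate(data)

# Conjugate closed-form posterior mean, for comparison.
prior_var, noise_var = 1.0, 1.0
n = len(data)
mu_exact = (sum(data) / noise_var) / (1.0 / prior_var + n / noise_var)
```

The point of the sketch is only the shape of the move: the statistical question "what is the posterior?" becomes the optimization question "what maximizes this objective?", which a generic optimizer can answer.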

Advances in optimization have greatly expanded the number of things computers can accomplish as part of a weak AI research agenda.

Frequently these remarkable successes in Weak AI are confused with an impending revolution in what used to be called Strong AI but which now is more frequently called Artificial General Intelligence, or AGI.

Recent interest in AGI has spurred a lot of interesting research. How could it not be interesting? It is also, for me, extraordinarily frustrating research because I find the philosophical precommitments of most AGI researchers baffling.

One insight that I wish made its way more frequently into discussions of AGI is an insight made by the late Francisco Varela, who argued that you can’t really solve the problem of artificial intelligence until you have solved the problem of artificial life. This is for the simple reason that only living things are really intelligent in anything but the weak sense of being capable of optimization.

Once being alive is taken as a precondition for being intelligent, the problem of understanding AGI implicates a profound and fascinating problem of understanding the mathematical foundations of life. This is a really amazing research problem that for some reason is never ever discussed by anybody.

Let’s assume it’s possible to solve this problem in a satisfactory way. That’s a big If!

Then a theory of artificial general intelligence should be able to show why some artificial living organisms are intelligent and others are not. I suppose what’s most significant here is the shift from thinking of AI in terms of “agents”, a term so generic as to be perhaps at the end of the day meaningless, to thinking of AI in terms of “organisms”, which suggests a much richer set of preconditions.

I have similar grief over contemporary discussion of machine ethics. This is a field with fascinating, profound potential. But much of what machine ethics boils down to today are trolley problems, which are as insipid as they are troublingly intractable. There’s other, better machine ethics research out there, but I’ve yet to see something that really speaks to properly defining the problem, let alone solving it.

This is perhaps because for a machine to truly be ethical, as opposed to just being designed and deployed ethically, it must have moral agency. I don’t mean this in some bogus early Latourian sense of “wouldn’t it be fun if we pretended seatbelts were little gnomes clinging to our seats” but in an actual sense of participating in moral life. There’s a good case to be made that the latter is not something easily reducible to decontextualized action or function, but rather has to do with how one participates more broadly in social life.

I suppose this is a rather substantive metaethical claim to be making. It may be one that’s at odds with common ideological trainings in Anglophone countries where it’s relatively popular to discuss AGI as a research problem. It has more in common, intellectually and philosophically, with continental philosophy than analytic philosophy, whereas “artificial intelligence” research is in many ways a product of the latter. This perhaps explains why these two fields are today rather disjoint.

Nevertheless, I’d happily make the case that the continental tradition has developed a richer and more interesting ethical tradition than what analytic philosophy has given us. Among other reasons, this is because of how it is able to situate ethics as a function of a more broadly understood social and political life.

I postulate that what is characteristic of social and political life is that it involves the interaction of many intelligent organisms. Which of course means that to truly understand this form of life and how one might recreate it artificially, one must understand artificial intelligence and, transitively, artificial life.

Only once artificial society is sufficiently well understood could we then approach the problem of artificial morality, or how to create machines that truly act according to moral or ethical ideals.

autonomy and immune systems

Somewhat disillusioned lately with the inflated discourse on “Artificial Intelligence” and trying to get a grip on the problem of “collective intelligence” with others in the Superintelligence and the Social Sciences seminar this semester, I’ve been following a lead (proposed by Julian Jonker) that perhaps the key idea at stake is not intelligence, but autonomy.

I was delighted when searching around for material on this to discover Bourgine and Varela’s “Towards a Practice of Autonomous Systems” (pdf link) (1992). Francisco Varela is one of my favorite thinkers, though he is a bit fringe on account of being both Chilean and unafraid of integrating Buddhism into his scholarly work.

The key point of the linked paper is that for a system (such as a living organism, but we might extend the idea to a sociotechnical system like an institution or any other “agent” like an AI) to be autonomous, it has to have a kind of operational closure over time–meaning not that it is closed to interaction, but that its internal states progress through some logical space–and that it must maintain its state within a domain of “viability”.

Though essentially a truism, I find it a simple way of thinking about what it means for a system to preserve itself over time. What we gain from this organic view of autonomy (Varela was a biologist) is an appreciation of the fact that an agent needs to adapt simply in order to survive, let alone to act strategically or reproduce itself.
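
A toy rendering of this idea (my own, far simpler than Bourgine and Varela's formalism) can make the two conditions visible: the system's next state is a function only of its current state and an external perturbation (a crude stand-in for operational closure), and it counts as autonomous only while that state stays inside a fixed viability domain. All names and dynamics here are invented for illustration.

```python
# Toy sketch of autonomy as operational closure plus viability.
# The state evolves by the system's own rule; perturbations arrive
# from outside, and a simple corrective term pulls the state back
# toward equilibrium, which is the system's minimal "adaptation".

VIABLE = (-10.0, 10.0)  # assumed viability domain

def step(state, perturbation, gain=0.5):
    """Next state as a function of current state and perturbation."""
    return state + perturbation - gain * state

def run(perturbations, state=0.0):
    """Drive the system; report whether it stayed viable throughout."""
    trajectory = [state]
    for p in perturbations:
        state = step(state, p)
        if not (VIABLE[0] < state < VIABLE[1]):
            return trajectory, False  # viability lost: the system 'dies'
        trajectory.append(state)
    return trajectory, True

# Moderate perturbations are absorbed; the system persists.
trajectory, alive = run([1.0, -2.0, 3.0, 0.5] * 5)
```

A single large enough perturbation (say, `run([20.0])`) throws the state out of the viability domain, and the run ends: self-preservation here is nothing more than the dynamics keeping the state inside the domain, which is roughly the truism the paragraph above describes.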

Bourgine and Varela point out three separate adaptive systems in most living organisms:

  • Cognition. Information processing that determines the behavior of the system relative to its environment. It adapts to new stimuli and environmental conditions.
  • Genetics. Information processing that determines the overall structure of the agent. It adapts through reproduction and natural selection.
  • Immune system. Information processing that identifies invasive micro-agents that would threaten the integrity of the overall agent. It creates internal antibodies to shut down internal threats.

Sean O Nuallain has proposed that one’s sense of personal self is best thought of as a kind of immune system. We establish a barrier between ourselves and the world in order to maintain a cogent and healthy sense of identity. One could argue that to have an identity at all is to have a system of identifying what is external to it and rejecting it. Compare this with psychological ideas of ego maintenance and Jungian confrontations with “the Shadow”.

At a social organizational level, we can speculate that there is still an immune function at work. Left and right wing ideologies alike have cultural “antibodies” to quickly shut down expressions of ideas that pattern match to what might be an intellectual threat. Academic disciplines have to enforce what can be said within them so that their underlying theoretical assumptions and methodological commitments are not upset. Sociotechnical “cybersecurity” may be thought of as a kind of immune system. And so on.

Perhaps the most valuable use of the “immune system” metaphor is that it identifies a mid-range level of adaptivity that can be truly subconscious, given whatever mode of “consciousness” you are inclined to point to. Social and psychological functions of rejection are in a sense a condition for higher-level cognition. At the same time, this pattern of rejection means that some information cannot be integrated materially; it must be integrated, if at all, through the narrow lens of the senses. At an organizational or societal level, individual action may be rejected because of its disruptive effect on the total system, especially if the system has official organs for accomplishing more or less the same thing.

Instrumentality run amok: Bostrom and Instrumentality

Narrowing our focus onto the crux of Bostrom’s argument, we can see how tightly it is bound to a much older philosophical notion of instrumental reason. This comes to the forefront in his discussion of the orthogonality thesis (p.107):

The orthogonality thesis
Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.

Bostrom goes on to clarify:

Note that the orthogonality thesis speaks not of rationality or reason, but of intelligence. By “intelligence” we here mean something like skill at prediction, planning, and means-ends reasoning in general. This sense of instrumental cognitive efficaciousness is most relevant when we are seeking to understand what the causal impact of a machine superintelligence might be.

Bostrom maintains that the generality of instrumental intelligence, which I would argue is evinced by the generality of computing, gives us a way to predict how intelligent systems will act. Specifically, he says that an intelligent system (and specifically a superintelligence) might be predictable because of its design, because of its inheritance of goals from a less intelligent system, or because of convergent instrumental reasons. (p.108)

Return to the core logic of Bostrom’s argument. The existential threat posed by superintelligence is simply that the instrumental intelligence of an intelligent system will invest in itself and overwhelm any ability by us (its well-intentioned creators) to control its behavior through design or inheritance. Bostrom thinks this is likely because instrumental intelligence (“skill at prediction, planning, and means-ends reasoning in general”) is a kind of resource or capacity that can be accumulated and put to other uses more widely. You can use instrumental intelligence to get more instrumental intelligence; why wouldn’t you? The doomsday prophecy of a fast takeoff superintelligence achieving a decisive strategic advantage and becoming a universe-dominating singleton depends on this internal cycle: instrumental intelligence investing in itself and expanding exponentially, assuming low recalcitrance.

This analysis brings us to a significant focal point. The critical missing formula in Bostrom’s argument is (specifically) the recalcitrance function of instrumental intelligence. This is not the same as recalcitrance with respect to “general” intelligence or even “super” intelligence. Rather, what’s critical is how much a process dedicated to “prediction, planning, and means-ends reasoning in general” can improve its own capacities at those things autonomously. The values of this recalcitrance function will bound the speed of superintelligence takeoff. These bounds can then inform the optimal allocation of research funding towards anticipation of future scenarios.


In what I hope won’t distract from the logical analysis of Bostrom’s argument, I’d like to put it in a broader context.

Take a minute to think about the power of general purpose computing and the impact it has had on the past hundred years of human history. As the earliest digital computers were informed by notions of artificial intelligence (c.f. Alan Turing), we can accurately say that the very machine I use to write this text, and the machine you use to read it, are the result of refined, formalized, and materialized instrumental reason. Every programming language is a level of abstraction over a machine that has no ends in itself, but which serves the ends of its programmer (when it’s working). There is a sense in which Bostrom’s argument is not about a near future scenario but rather is just a description of how things already are.

Our very concepts of “technology” and “instrument” are so related that it can be hard to see any distinction at all. (c.f. Heidegger, “The Question Concerning Technology“) Bostrom’s equating of instrumentality with intelligence is a move that makes more sense as computing becomes ubiquitously part of our experience of technology. However, if any instrumental mechanism can be seen as a form of intelligence, that lends credence to panpsychist views of cognition as life. (c.f. the Santiago theory)

Meanwhile, arguably the genius of the market is that it connects ends (through consumption or “demand”) with means (through manufacture and services, or “supply”) efficiently, bringing about the fruition of human desire. If you replace “instrumental intelligence” with “capital” or “money”, you get a familiar critique of capitalism as a system driven by capital accumulation at the expense of humanity. The analogy with capital accumulation is worthwhile here. Much as in Bostrom’s “takeoff” scenarios, we can see how capital (in the modern era, wealth) is reinvested in itself and grows at an exponential rate. Variable rates of return on investment lead to great disparities in wealth. We today have a “multipolar scenario” as far as the distribution of capital is concerned. At times people have advocated for an economic “singleton” through a planned economy.

It is striking that the contemporary analytic philosopher and futurist Nick Bostrom contemplates the same malevolent force in his apocalyptic scenario as does Max Horkheimer in his 1947 treatise “Eclipse of Reason“: instrumentality run amok. Whereas Bostrom concerns himself primarily with what is literally a machine dominating the world, Horkheimer sees the mechanism of self-reinforcing instrumentality as pervasive throughout the economic and social system. For example, he sees engineers as loci of active instrumentalism. Bostrom never cites Horkheimer, let alone Heidegger. That there is a convergence of different philosophical sub-disciplines on the same problem suggests that there are convergent ultimate reasons which may triumph over convergent instrumental reasons in the end. The question of what these convergent ultimate reasons are, and what their relationship to instrumental reasons is, is a mystery.

Bostrom’s Superintelligence: Definitions and core argument

I wanted to take the opportunity to spell out what I see as the core definitions and argument of Bostrom’s Superintelligence as a point of departure for future work. First, some definitions:

  • Superintelligence. “We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” (p.22)
  • Speed superintelligence. “A system that can do all that a human intellect can do, but much faster.” (p.53)
  • Collective superintelligence. “A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.” (p.54)
  • Quality superintelligence. “A system that is at least as fast as a human mind and vastly qualitatively smarter.” (p.56)
  • Takeoff. The event of the emergence of a superintelligence. The takeoff might be slow, moderate, or fast, depending on the conditions under which it occurs.
  • Optimization power and Recalcitrance. Bostrom proposed that we model the speed of superintelligence takeoff as: Rate of change in intelligence = Optimization power / Recalcitrance. Optimization power refers to the effort of improving the intelligence of the system. Recalcitrance refers to the resistance of the system to being optimized. (p.65, pp.75-77)
  • Decisive strategic advantage. The level of technological and other advantages sufficient to enable complete world domination. (p.78)
  • Singleton. A world order in which there is at the global level one decision-making agency. (p.78)
  • The wise-singleton sustainability threshold. “A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe.” (p.100)
  • The orthogonality thesis. “Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.” (p.107)
  • The instrumental convergence thesis. “Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.” (p.109)
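
The optimization power / recalcitrance model lends itself to a quick numerical sketch. The following is my own toy illustration, not Bostrom's: assuming a constant fraction of the system's intelligence is applied as optimization power, constant recalcitrance yields exponential growth (a fast takeoff), while recalcitrance that grows with intelligence yields only linear growth. The function names, the fraction, and both recalcitrance functions are invented for the example.

```python
# Euler integration of dI/dt = OptimizationPower(I) / Recalcitrance(I),
# with OptimizationPower(I) = fraction * I (a constant share of the
# system's intelligence reinvested in improving itself).

def takeoff(recalcitrance, fraction=0.1, i0=1.0, dt=0.01, steps=1000):
    """Integrate the intelligence trajectory; return the final level."""
    i = i0
    for _ in range(steps):
        i += dt * fraction * i / recalcitrance(i)
    return i

# Constant recalcitrance: self-investment compounds exponentially.
fast = takeoff(lambda i: 1.0)

# Recalcitrance rising in proportion to intelligence: growth is linear.
slow = takeoff(lambda i: i)
```

The contrast is the whole argument in miniature: whether the takeoff is "fast" or "slow" is decided entirely by the shape of the recalcitrance function, which is why the text above treats that function as the critical unknown.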

Bostrom’s core argument in the first eight chapters of the book, as I read it, is this:

  1. Intelligent systems are already being built and expanded on.
  2. If some constant proportion of a system’s intelligence is turned into optimization power, and the recalcitrance of the system is constant or decreasing, then the intelligence of the system will increase at an exponential rate. This will be a fast takeoff.
  3. Recalcitrance is likely to be lower for machine intelligence than human intelligence because of the physical properties of artificial computing systems.
  4. An intelligent system is likely to invest in its own intelligence because of the instrumental convergence thesis. Improving intelligence is an instrumental goal given a broad spectrum of other goals.
  5. In the event of a fast takeoff, it is likely that the superintelligence will get a decisive strategic advantage, because of a first-mover advantage.
  6. Because of the instrumental convergence thesis, we should expect a superintelligence with a decisive strategic advantage to become a singleton.
  7. Machine superintelligences, which are more likely to take off fast and become singletons, are not likely to create nice outcomes for humanity by default.
  8. A superintelligent singleton is likely to be above the wise-singleton threshold. Hence the fate of the universe and the potential of humanity is at stake.

Having made this argument, Bostrom goes on to discuss ways we might anticipate and control the superintelligence as it becomes a singleton, thereby securing humanity.

a new kind of scientism

Thinking it over, there are a number of problems with my last post. One was the claim that the scientism addressed by Horkheimer in 1947 is the same as the scientism of today.

Scientism is a pejorative term for the belief that science defines reality and/or is a solution to all problems. It’s not in common use now, but maybe it should be among the critical thinkers of today.

Frankfurt School thinkers like Horkheimer and Habermas used “scientism” to criticize the positivists, the 20th century philosophical school that sought to reduce all science and epistemology to formal empirical methods, and to reduce all phenomena, including social phenomena, to empirical science modeled on physics.

Lots of people find this idea offensive for one reason or another. I’d argue that it’s a lot like the idea that algorithms can capture all of social reality or perform the work of scientists. In some sense, “data science” is a contemporary positivism, and the use of “algorithms” to mediate social reality depends on a positivist epistemology.

I don’t know any computer scientists that believe in the omnipotence of algorithms. I did get an invitation to this event at UC Berkeley the other day, though:

This Saturday, at [redacted], we will celebrate the first 8 years of the [redacted].

Current students, recent grads from Berkeley and Stanford, and a group of entrepreneurs from Taiwan will get together with members of the Social Data Lab. Speakers include [redacted], former Palantir financial products lead and course assistant of the [redacted]. He will reflect on how data has been driving transforming innovation. There will be break-out sessions on sign flips, on predictions for 2020, and on why big data is the new religion, and what data scientists need to learn to become the new high priests. [emphasis mine]

I suppose you could call that scientistic rhetoric, though honestly it’s so preposterous I don’t know what to think.

Though I would recommend the term “scientism” to the critical set, I’m ambivalent about whether it’s appropriate to call the contemporary emphasis on algorithms scientistic, for the following reason: it might be that ‘data science’ processes are better than the procedures developed for the advancement of physics in the mid-20th century, because they stand on sixty years of foundational mathematical work with modeling cognition as an important aim. Recall that the AI research program didn’t start until Chomsky took down Skinner. Horkheimer quotes Dewey commenting that until naturalist researchers were able to use their methods to understand cognition, they wouldn’t be able to develop (this is my paraphrase) a totalizing system. But the foundational mathematics of information theory, Bayesian statistics, etc. are, or could be, robust enough to simply be universally intersubjectively valid. That would mean data science would stand on transcendental, not socially contingent, grounds.

That would open up a whole host of problems that take us even further back than Horkheimer to early modern philosophers like Kant. I don’t want to go there right now. There’s still plenty to work with in Horkheimer, and in “Conflicting panaceas” he points to one of the critical problems, which is how to reconcile lived reality in its contingency with the formal requirements of positivist or, in the contemporary data scientific case, algorithmic epistemology.

Know-how is not interpretable so algorithms are not interpretable

I happened upon Hildreth and Kimble’s “The duality of knowledge” (2002) earlier this morning while writing this and have found it thought-provoking through to lunch.

What’s interesting is that it is (a) 12 years old, (b) a rather straightforward analysis of information technology, expert systems, ‘knowledge management’, etc. in light of solid post-Enlightenment thinking about the nature of knowledge, and (c) an anticipation of the problems of ‘interpretability’ that were a couple months ago at least an active topic of academic discussion. Or so I hear.

This is the paper’s abstract:

Knowledge Management (KM) is a field that has attracted much attention both in academic and practitioner circles. Most KM projects appear to be primarily concerned with knowledge that can be quantified and can be captured, codified and stored – an approach more deserving of the label Information Management.

Recently there has been recognition that some knowledge cannot be quantified and cannot be captured, codified or stored. However, the predominant approach to the management of this knowledge remains to try to convert it to a form that can be handled using the ‘traditional’ approach.

In this paper, we argue that this approach is flawed and some knowledge simply cannot be captured. A method is needed which recognises that knowledge resides in people: not in machines or documents. We will argue that KM is essentially about people and the earlier technology driven approaches, which failed to consider this, were bound to be limited in their success. One possible way forward is offered by Communities of Practice, which provide an environment for people to develop knowledge through interaction with others in an environment where knowledge is created nurtured and sustained.

The authors point out that Knowledge Management (KM) is an extension of the earlier program of Artificial Intelligence and depends on a model of knowledge which maintains that knowledge can be explicitly represented, and hence stored and transferred. They propose an alternative way of thinking about things based on the Communities of Practice framework.

A lot of their analysis is about the failures of “expert systems”, a term that has fallen out of use but means basically the same thing as the contemporary non-computational scholarly use of ‘algorithm’. An expert system was a computer program designed to make decisions about things. Broadly speaking, a search engine is a kind of expert system. What’s changed are the particular techniques and algorithms that such systems employ, and their relationship with computing and sensing hardware.

Here’s what Hildreth and Kimble have to say about expert systems in 2002:

Viewing knowledge as a duality can help to explain the failure of some KM initiatives. When the harder aspects are abstracted in isolation the representation is incomplete: the softer aspects of knowledge must also be taken into account. Hargadon (1998) gives the example of a server holding past projects, but developers do not look there for solutions. As they put it, ‘the important knowledge is all in people’s heads’, that is the solutions on the server only represent the harder aspects of the knowledge. For a complete picture, the softer aspects are also necessary. Similarly, the expert systems of the 1980s can be seen as failing because they concentrated solely on the harder aspects of knowledge. Ignoring the softer aspects meant the picture was incomplete and the system could not be moved from the environment in which it was developed.

However, even knowledge that is ‘in people’s heads’ is not sufficient – the interactive aspect of Cook and Seely Brown’s (1999) ‘knowing’ must also be taken into account. This is one of the key aspects to the management of the softer side to knowledge.

In 2002, this kind of argument was seen as a valuable critique of artificial intelligence and the practices based on it as a paradigm. But already by 2002 this paradigm was falling away. Statistical computing, reinforcement learning, decision tree bagging, etc. were already in use at this time. These methods are “softer” in that they don’t require the “hard” concrete representations of the earlier artificial intelligence program, which I believe by that time was already referred to as “Good Old Fashioned AI” or GOFAI by a number of practitioners.

(I should note–that’s a term I learned while studying AI as an undergraduate in 2005.)

So throughout the 90’s and the 00’s, if not earlier, ‘AI’ transformed into ‘machine learning’ and became the implementation of ‘soft’ forms of knowledge. These systems are built to learn to perform a task optimally, adjusting flexibly based on feedback from past performance. They are in fact the cybernetic systems imagined by Norbert Wiener.

Perplexing, then, is the contemporary problem that the models created by these machine learning algorithms are opaque to their creators. These models were created using techniques that were designed precisely to solve the problems that systems based on explicit, communicable knowledge were meant to solve.

If you accept the thesis that contemporary ‘algorithms’-driven systems are well-designed implementations of ‘soft’ knowledge systems, then you get some interesting conclusions.

First, forget about interpreting the learned models of these systems and testing them for things like social discrimination, which is apparently in vogue. The right place to focus attention is on the function being optimized. All these feedback-based systems, whether they are based on evolutionary algorithms, convergence on local maxima, reinforcement learning, or whatever, are designed to optimize some goal function. That goal function is the closest thing you will get to an explicit representation of the purpose of the algorithm. It may change over time, but it should be coded there explicitly.
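To make the point concrete, here is a minimal, hypothetical sketch (the names `objective` and `hill_climb` and the ad-bidding setup are invented for illustration): a feedback-based optimizer whose learned behavior emerges from trial and error, but whose purpose sits in one explicitly coded goal function.

```python
import random

# The goal function is the one place where the system's purpose is
# represented explicitly. Here it is a toy stand-in for something
# like 'maximize ad revenue'.
def objective(bid):
    # In this invented market, revenue peaks at bid = 3.0.
    return -(bid - 3.0) ** 2

def hill_climb(objective, start=0.0, step=0.1, iterations=1000, seed=0):
    """Feedback-based optimization: the learned behavior (the final bid)
    comes from accumulated trial and error, but the purpose lives in
    the explicit `objective`."""
    rng = random.Random(seed)
    current = start
    for _ in range(iterations):
        candidate = current + rng.uniform(-step, step)
        if objective(candidate) > objective(current):
            current = candidate  # keep only changes that improve the goal
    return current

best = hill_climb(objective)  # converges near the revenue-maximizing bid
```

However opaque the history of accepted steps may be, auditing `objective` tells you what the system is for.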

Interestingly, this is exactly the sense of ‘purpose’ that Wiener proposed could be applied to physical systems in the landmark essay he published with Rosenblueth and Bigelow, “Behavior, Purpose and Teleology.” In 1943. Sly devil.

EDIT: An excellent analysis of how fairness can be represented as an explicit goal function can be found in Dwork et al. 2011.
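As a toy illustration of that idea (this is not Dwork et al.’s actual formulation; the helper names and numbers here are invented), a fairness criterion can appear as an explicit penalty term inside the goal function itself:

```python
def unfairness_penalty(scores, similar_pairs):
    """Crude stand-in for 'similar individuals should receive similar
    outcomes': sum the score gaps across pairs marked as similar."""
    return sum(abs(scores[i] - scores[j]) for (i, j) in similar_pairs)

def goal(scores, revenue, similar_pairs, lam=1.0):
    # Explicit goal function: a business objective minus a fairness
    # penalty, traded off by the weight lam.
    business = sum(r * s for r, s in zip(revenue, scores))
    return business - lam * unfairness_penalty(scores, similar_pairs)

# Two individuals flagged as similar but scored very differently
# are penalized: (2*1 + 1*0) - 1*|1 - 0| = 1.0
value = goal([1.0, 0.0], [2.0, 1.0], [(0, 1)])
```

The fairness commitment, like the business objective, is then coded right where the purpose of the system lives.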

Second, because what the algorithm is designed to optimize is generally going to be something like ‘maximize ad revenue’ and not anything explicitly pernicious like ‘screw over the disadvantaged people’, this line of inquiry will raise some interesting questions about, for example, the relationship between capitalism and social justice. By “raise some interesting questions”, I mean “reveal some uncomfortable truths everyone is already aware of”. Once it becomes clear that the whole discussion of “algorithms” and their inscrutability is just a way of talking about societal problems and entrenched political interests without talking about them directly, it will probably be tabled due to its political infeasibility.

That is (and I guess this is the third point) unless somebody can figure out how to explicitly define the social justice goals of the activists/advocates into a goal function that could be implemented by one of these soft-touch expert systems. That would be rad. Whether anybody would be interested in using or investing in such a system is an important open question. Not a wide open question–the answer is probably “Not really”–but just open enough to let some air onto the embers of my idealism.

objective properties of text and robot scientists

One problem with having objectivity as a scientific goal is that it may be humanly impossible.

One area where this comes up is in the reading of a text. To read is to interpret, and it is impossible to interpret without bringing one’s own concepts and experience to bear on the interpretation. This introduces partiality.

This is one reason why Digital Humanities are interesting. In Digital Humanities, one is using only the objective properties of the text–its data as a string of characters and its metadata. Semantic analysis is reduced to a study of a statistical distribution over words.
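That reduction might look like this minimal sketch (the function name and example sentence are mine): the text becomes nothing but a normalized frequency distribution over its tokens.

```python
from collections import Counter
import re

def word_distribution(text):
    """Reduce a text to one of its objective properties: a normalized
    frequency distribution over its word tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

dist = word_distribution("To read is to interpret, and to interpret is to choose.")
```

Everything a human reader would call meaning (irony, allusion, argument) is gone; what remains is exactly what any two analysts, human or machine, can agree on.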

An odd conclusion: the objective scientific subject won’t be a human intelligence at all. It will need to be a robot. Its concepts may never be interpretable by humans because any individual human is too small-minded or restricted in their point of view to understand the whole.

Looking at the history of cybernetics, artificial intelligence, and machine learning, we can see the progression of a science dedicated to understanding the abstract properties of an idealized, objective learner. That systems such as these underlie the infrastructure we depend on for the organization of society is a testament to their success.

It all comes back to Artificial Intelligence

I am blessed with many fascinating conversations every week. Because of the field I am in, these conversations are mainly about technology and people and where they intersect.

Sometimes they are about philosophical themes like how we know anything, or what is ethical. These topics are obviously relevant to an academic researcher, especially one interested in computational social science, a kind of science whose ethics have lately been called into question. Other times they are about the theoretical questions that such a science should or could address: How do we identify leaders? What are the ingredients of a thriving community? What is creativity, and how can we mathematically model how it arises from social interaction?

Sometimes the conversations are political. Is it a problem that algorithms are governing more of our political lives and culture? If so, what should we do about it?

The richest and most involved conversations, though, are about artificial intelligence (AI). As a term, it has fallen out of fashion. I was very surprised to see it as a central concept in Bengio et al.’s “Representation Learning: A Review and New Perspectives” [arXiv]. In most discussions of scientific computing or ‘data science’, people have for the most part abandoned the idea of intelligent machines. Perhaps this is because so many of the applications of this technology seem so prosaic now. Curating newsfeeds, for example. That can’t be done intelligently. That’s just an algorithm.

Never mind that the origins of all of what we now call machine learning lie in the AI research program, which is as old as computer science itself and really has grown up with it. Marvin Minsky famously once defined artificial intelligence as ‘whatever humans still do better than computers.’ And this is the curse of the field. Every technological advance that is at the time mind-blowingly powerful, performing a task that used to require hundreds of people, very shortly becomes mere technology.

It’s appropriate then that representation learning, the problem of deriving and selecting features from a complex data set that are valuable for other kinds of statistical analysis in other tasks, is brought up in the context of AI. Because this is precisely the sort of thing that people still think they are comparatively good at. A couple years ago, everyone was talking about the phenomenon of crowdsourced image tagging. People are better at seeing and recognizing objects in images than computers, so in order to, say, provide the data for Google’s Image search, you still need to mobilize lots of people. You just have to organize them as if they were computer functions so that you can properly aggregate their results.

One of the earliest tasks posed to AI, the Turing Test, proposed by and named after Alan Turing, the inventor of the fricking computer, is the task of engaging in conversation as if one were a human. This is harder than chess. It is harder than reading handwriting. Something about human communication is so subtle that it has withstood the test of time as an unsolved problem.

Until June of this year, when a program passed the Turing Test in the annual competition. Conversation is no longer something intelligent. It can be performed by a mere algorithm. Indeed, I have heard that a lot of call centers now use scripted dialog. An operator pushes buttons guiding the caller through a conversation that has already been written for them.

So what’s next?

I have a proposal: software engineering. We still don’t have an AI that can write its own source code.

How could we create such an AI? We could use machine learning, training it on data. What’s amazing is that we have vast amounts of data available on what it is like to be a functioning member of a software development team. Open source software communities have provided an enormous corpus of what we can guess is some of the most complex and interesting data ever created. Among other things, this software includes source code for all kinds of other algorithms that were once considered AI.

One reason why I am building BigBang, a toolkit for the scientific analysis of software communities, is because I believe it’s the first step to a better understanding of this very complex and still intelligent process.

While above I have framed AI pessimistically, as what we delegate away from people to machines, that framing is unnecessarily grim. In fact, with every advance in AI we have come to a better understanding of our world and of how we see, hear, think, and do things. The task of trying to scientifically understand how we create together and the task of developing an AI to create with us are in many ways the same task. It’s just a matter of how you look at it.