Digifesto

social structure and the private sector

The Human Cell

Academic social scientists leaning towards the public intellectual end of the spectrum love to talk about social norms.

This is perhaps motivated by the fact that these intellectual figures are prominent in the public sphere. The public sphere is where these norms are supposed to solidify, and these intellectuals would like to emphasize their own importance.

I don’t exclude myself from this category of persons. A lot of my work has been about social norms and technology design (Benthall, 2015; Benthall, Gürses and Nissenbaum, 2017).

But I also work in the private sector, and it’s striking how different things look from that perspective. It’s natural for academics who participate more in the public sphere than in the private sector to be biased in their view of social structure. To accurately understand what’s going on, you have to think about both at once.

That’s challenging for a lot of reasons, one of which is that the private sector is a lot less transparent than the public sphere. In general the internals of actors in the private sector are not open to the scrutiny of commentariat onlookers. Information is one of the many resources traded in pairwise interactions; when it is divulged, it is divulged strategically, introducing bias. So it’s hard to get a general picture of the private sector, even though it accounts for a much larger proportion of social structure than the public sphere does. In other words, public spheres are highly over-represented in analyses of social structure due to the availability of public data about them. That is worrisome from an analytic perspective.

It’s well worth making the point that the public/private dichotomy is problematic. Contextual integrity theory (Nissenbaum, 2009) argues that modern society is differentiated among many distinct spheres, each bound by its own social norms. Nissenbaum actually has a quite different notion of norm formation from, say, Habermas. For Nissenbaum, norms evolve over social history, but may be implicit. Contrast this with Habermas’s view that norms are the result of communicative rationality, which is an explicit and linguistically mediated process. The public sphere is a big deal for Habermas. Nissenbaum, a scholar of privacy, rejects the idea of the ‘public sphere’ simpliciter. Rather, social spheres self-regulate and privacy, which she defines as appropriate information flow, is maintained when information flows according to these multiple self-regulatory regimes.

I believe Nissenbaum is correct on this point of societal differentiation and norm formation. This nuanced understanding of privacy as the differentiated management of information flow challenges any simplistic notion of the public sphere. Does it challenge a simplistic notion of the private sector?

Naturally, the private sector doesn’t exist in a vacuum. In the modern economy, companies are accountable to the law, especially contract law. They have to pay their taxes. They have to deal with public relations and are regulated as to how they manage information flows internally. Employees can sue their employers, etc. So just as the ‘public sphere’ doesn’t permit a total free-for-all of information flow (some kinds of information flow in public are against social norms!), so too does the ‘private sector’ not involve complete secrecy from the public.

As a hypothesis, we can posit that what makes the private sector different is that the relevant social structures are less open in their relations with each other than they are in the public sphere. We can imagine an autonomous social entity like a biological cell. Internally it may have a lot of interesting structure and organelles. Its membrane prevents this complexity leaking out into the aether, or plasma, or whatever it is that human cells float around in. Indeed, this membrane is necessary for the proper functioning of the organelles, which in turn allows the cell to interact properly with other cells to form a larger organism. Echoes of Francisco Varela.

It’s interesting that this may actually be a quantifiable difference. One way of modeling the difference between the internal and external-facing complexity of an entity is using information theory. The more complex internal state of the entity has higher entropy than the membrane state. The fact that the membrane causally mediates interactions between the internals and the environment limits information flow between them; this is captured by the Data Processing Inequality. The lack of information flow between the system internals and externals is quantified as lower mutual information between the two domains. At zero mutual information, the two domains are statistically independent of each other.
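
To make this concrete, here is a minimal numerical sketch in Python. It is a toy illustration only: the state space sizes, the random internal distribution, and the coarse-graining function standing in for the ‘membrane’ are all assumptions made up for the example, not a model of any real cell or firm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumption: the entity's internal state X takes one of 64 values,
# but its membrane exposes only a coarse summary M = f(X) with 4 values.
n_internal, n_membrane = 64, 4
p_x = rng.dirichlet(np.ones(n_internal))          # distribution over internal states
f = rng.integers(0, n_membrane, size=n_internal)  # membrane summary of each internal state

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Distribution over membrane states induced by the internal distribution.
p_m = np.zeros(n_membrane)
for x, m in enumerate(f):
    p_m[m] += p_x[x]

print(f"H(internal) = {entropy(p_x):.2f} bits")
print(f"H(membrane) = {entropy(p_m):.2f} bits")
# Because M is a deterministic function of X, I(X, M) = H(M): whatever the
# environment learns through the membrane is capped at H(membrane) bits,
# no matter how much internal structure there is.
```

Anything that interacts with the entity only through the membrane can, by the Data Processing Inequality, know no more about the internals than the membrane itself does.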

I haven’t worked out all the implications of this.

References

Benthall, S. (2015). Designing Networked Publics for Communicative Action. In J. Davis & N. Jurgenson (Eds.), Theorizing the Web 2014 [Special Issue]. Interface, 1(1).

Benthall, S., Gürses, S., & Nissenbaum, H. (2017). Contextual Integrity through the Lens of Computer Science. Foundations and Trends® in Privacy and Security, 2(1), 1–69. http://dx.doi.org/10.1561/3300000016

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

on university businesses

Suppose we wanted to know why there’s an “epistemic crisis” today. Suppose we wanted to talk about higher education’s role and responsibility towards that crisis, even though that may be just a small part of it.

That’s a reason why we should care about postmodernism in universities. The alternative, some people have argued, is a ‘modernist’ or even ‘traditional’ university which was based on a perhaps simpler and less flexible theory of knowledge. For the purpose of this post I’m going to assume the reader knows roughly what that’s all about. Since postmodernism rejects meta-narratives and instead admits that all we have to legitimize anything is a contest of narratives, that is really just asking for an epistemic crisis where people just use whatever narratives are most convenient for them and then society collapses.

In my last post I argued that the question of whether universities should be structured around modernist or postmodernist theories of legitimation and knowledge has been made moot by the fact that universities have the option of operating solely on administrative business logic. I wasn’t being entirely serious, but it’s a point that’s worth exploring.

One reason why it’s not so terrible if universities operate according to business logic is because it may still, simply as a function of business logic, be in their strategic interest to hire serious scientists and scholars whose work is not directly driven by business logic. These scholars will be professionally motivated and in part directed by the demands of their scholarly fields. But that kicks the can of the inquiry down the road.

Suppose that there are some fields that are Bourdieusian sciences, which might be summarized as an artistic field structured by the distribution of symbolic capital to those who win points in the game of arbitration of the real. (Writing that all out now I can see why many people might find Bourdieu a little opaque.)

Then if a university business thinks it should hire from the Bourdieusian sciences, that’s great. But there are many other kinds of social fields it might be useful to hire from for, e.g., faculty positions. This seems to agree with the facts: many university faculty are not from Bourdieusian sciences!

This complicates, a lot actually, the story about the relationship between universities and knowledge. One thing that is striking from the ethnography of education literature (Jean Lave) is how much the social environment of learning is constitutive of what learning is (to put it one way). Society expects and to some extent enforces that when a student is in a classroom, what they are taught is knowledge. We have concluded that not every teacher in a university business is a Bourdieusian scientist, hence some of what students learn in universities is not Bourdieusian science, so it must be that a lot of what students are taught in universities isn’t real. But what is it then? It’s got to be knowledge!

The answer may be: it’s something useful. It may not be real or even approximating what’s real (by scientific standards), but it may still be something that’s useful to believe, express, or perform. If it’s useful to “know” even in this pragmatic and distinctly non-Platonic sense of the term, there’s probably a price at which people are willing to be taught it.

As a higher order effect, universities might engage in advertising in such a way that some prospective students are convinced that what they teach is useful to know even when it’s not really useful at all. This prospect is almost too cynical to even consider. But that’s why it’s important to consider why a university operating solely according to business logic would in fact be terrible! This would not just be the sophists teaching sophistry to students so that they can win in court. It would be sophists teaching bullshit to students because they can get away with being paid for it. In other words, charlatans.

Wow. You know I didn’t know where this was going to go when I started reasoning about this, but it’s starting to sound worse and worse!

It can’t possibly be that bad. University businesses have a reputation to protect, and they are subject to the court of public opinion. Even if not all fields are Bourdieusian science, each scholarly field has its own reputation to protect and so has an incentive to ensure that it, at least, is useful for something. It becomes, in a sense, a web of trust, where each link in the network is tested over time. As an aside, this is an argument for the importance of interdisciplinary work. It’s not just a nice-to-have because wouldn’t-it-be-interesting. It’s necessary as a check on the mutual compatibility of different fields. It prevents disciplines from becoming exploitative of students and other resources in society.

Indeed, it’s possible that this process of establishing mutual trust among experts even across different fields is what allows a kind of coherentist, pragmatist truth to emerge. But that’s by no means guaranteed. But to be very clear, that process can happen among people whether or not they are involved in universities or higher education. Everybody is responsible for reality, in a sense. To wit, citizen science is still Bourdieusian science.

But see how the stature of the university has fallen. Under a modernist logic, the university was where one went to learn what is real. One would trust that learning it would be useful because universities were dedicated to teaching what was real. Under business logic, the university is a place to learn something that the university finds it useful to teach you. It cannot be trusted without lots of checks from the rest of society. Intellectual authority is now much more distributed.

The problem with the business university is that it finds itself in competition for intellectual authority, and hence society’s investment in education, with other kinds of institutions. These include employers, who can discount wages for jobs that give their workers valuable human capital (e.g. the free college internship). Moreover, absent its special dedication to science per se, there’s less of a reason to put society’s investment in basic research in its hands. This accords with Clark Kerr’s observation that the postwar era was golden for universities because the federal government kept them flush with funds for basic research, but that funding has since dwindled, and now a lot more important basic research is done in the private sector.

So to the extent that the university is responsible for the ‘epistemic crisis’, it may be because universities began to adopt business logic as their guiding principle. This is not because they then began to teach garbage. It’s because they lost the special authority awarded to modernist universities, which we funded for a special mission in society. This opened the door for more charlatans, most of whom are not at universities. They might be on YouTube.

Note that this gets us back to something similar but not identical to postmodernism.* What’s at stake are not just narratives, but also practices and other forms of symbolic and social capital. But there’s certainly many different ones, articulated differently, and in competition with each other. The university business winds up reflecting the many different kinds of useful knowledge across all society and reproducing it through teaching. Society at large can then keep universities in check.
This “society keeping university businesses in check” point is a case for abolishing tenure in university businesses. Tenure may be a great idea in universities with different purposes and incentive structures. But for university businesses, it’s not good–it makes them less good businesses.

The epistemic crisis is due to a crisis in epistemic authority. To the extent universities are responsible, it’s because universities lost their special authority. This may be because they abandoned the modernist model of the university. But it is not because they abandoned modernism for postmodernism. “Postmodern” and “modern” fields coexist symbiotically with the pragmatist model of the university as business. But losing modernism has been bad for the university business as a brand.

* Though it must be noted that Lyotard’s analysis of the postmodern condition is all about how legitimation by performativity is the cause of this new condition. I’m probably just recapitulating his points in this post.

STEM and (post-)modernism

There is an active debate in the academic social sciences about modernism and postmodernism. I’ll refer to my notes on Clark Kerr’s comments on the postmodern university as an example of where this topic comes up.

If postmodernism is the condition where society is no longer bound by a single unified narrative but rather is constituted by a lot of conflicting narratives, then, yeah, ok, we live in a postmodern society. This isn’t what the debate is really about though.

The debate is about whether we (anybody in intellectual authority) should teach people that we live in a postmodern society and how to act effectively in that world, or if we should teach people to believe in a metanarrative which allows for truth, progress, and so on.

It’s important to notice that this whole question of what narratives we do or do not teach our students is irrelevant to a lot of educational fields. STEM fields aren’t really about narratives. They are about skills or concepts or something.

Let me put it another way. Clark Kerr was concerned about the rise of the postmodern university–was the traditional, modernist university on its way out?

The answer, truthfully, was that neither the traditional modernist university nor the postmodern university became dominant. Probably the most dominant university in the United States today is Stanford; it has accomplished this through a winning combination of STEM education, proximity to venture capital, and private fundraising. You don’t need a metanarrative if you’re rich.

Maybe that indicates where education has to go. The traditional university believed that philosophy was at its center. Philosophy is no longer at the center of the university. Is there a center? If there isn’t, then postmodernism reigns. But something else seems to be happening: STEM is becoming the new center, because it’s the best funded of the disciplines. Maybe that’s fine! Maybe focusing on STEM is how to get modernism back.

The social value of an actually existing alternative — BLOCKCHAIN BLOCKCHAIN BLOCKCHAIN

When people get excited about something, they will often talk about it in hyperbolic terms. Some people will actually believe what they say, though this seems to drop off with age. The emotionally energetic framing of the point can be both factually wrong and contain a kernel of truth.

This general truth applies to hype about particular technologies. Does it apply to blockchain technologies and cryptocurrencies? Sure it does!

Blockchain boosters have offered utopian or radical visions about what this technology can achieve. We should be skeptical about these visions prima facie precisely in proportion to how utopian and radical they are. But that doesn’t mean that this technology isn’t accomplishing anything new or interesting.

Here is a summary of some dialectics around blockchain technology:

A: “Blockchains allow for fully decentralized, distributed, and anonymous applications. These can operate outside of the control of the law, and that’s exciting because it’s a new frontier of options!”

B1: “Blockchain technology isn’t really decentralized, distributed, or anonymous. It’s centralizing its own power into the hands of the few, and meanwhile traditional institutions have the power to crush it. Their anarchist mentality is naive and short-sighted.”

B2: “Blockchain technology enthusiasts will soon discover that they actually want all the legal institutions they designed their systems to escape. Their anarchist mentality is naive and short-sighted.”

While B1 and B2 are both critical of blockchain technology and see A as naive, it’s important to realize that they believe A is naive for contradictory reasons. B1 is arguing that it does not accomplish what it was purportedly designed to do, which is to provide a foundation for distributed, autonomous systems that’s free from internal and external tyranny. B2 is arguing that nobody actually wants to be free of these kinds of tyrannies.

These are conservative attitudes that we would expect from conservative (in the sense of conservation, or “inhibiting change”) voices in society. These are probably demographically different people from person A. And this makes all the difference.

If what differentiates people is their relationship to different kinds of social institutions or capital (in the Bourdieusian sense), then it would be natural for some people to be incumbents in old institutions who would argue for their preservation and others to be willing to “exit” older institutions and join new ones. However imperfect the affordances of blockchain technology may be, they are different affordances than those of other technologies, and so they promise the possibility of new kinds of institutions with an alternative information and communications substrate.

It may well be that the pioneers in the new substrate will find that they have political problems of their own and need to reinvent some of the societal controls that they were escaping. But the difference will be that in the old system, the pioneers were relative outsiders, whereas in the new system, they will be incumbents.

The social value of blockchain technology therefore comes in two waves. The first wave is the value it provides to early adopters who use it instead of other institutions that were failing them. These people have made the choice to invest in something new because the old options were not good enough for them. We can celebrate their successes as people who have invented quite literally a new form of social capital, quite possibly literally a new form of wealth. When a small group of people create a lot of new wealth this almost immediately creates a lot of resentment from others who did not get in on it.

But there’s a secondary social value to the creation of actually existing alternative institutions and forms of capital (which are in a sense the same thing). This is the value of competition. The marginal person, who can choose how to invest themselves, can exit from one failing institution to a fresh new one if they believe it’s worth the risk. When an alternative increases the amount of exit potential in society, that increases the competitive pressure on institutions to perform. That should benefit even those with low mobility.

So, in conclusion, blockchain technology is good because it increases institutional competition. At the end of the day that reduces the power of entrenched incumbents to collect rents and gives everybody else more flexibility.

The economy of responsibility and credit in ethical AI; also, shameless self-promotion

Serious discussions about ethics and AI can be difficult because at best most people are trained in either ethics or AI, but not both. This leads to lots of confusion as a lot of the debate winds up being about who should take responsibility and credit for making the hard decisions.

Here are some of the flavors of outcomes of AI ethics discussions. Without even getting into the specifics of the content, each position serves a different constituency, despite all coming under the heading of “AI Ethics”.

  • Technical practitioners getting together to decide a set of professional standards by which to self-regulate their use of AI.
  • Ethicists getting together to decide a set of professional standards by which to regulate the practices of technical people building AI.
  • Computer scientists getting together to come up with a set of technical standards to be used in the implementation of autonomous AI so that the latter performs ethically.
  • Ethicists getting together to come up with ethical positions with which to critique the implementations of AI.

Let’s pretend for a moment that the categories used here of “computer scientists” and “ethicists” are valid ones. I’m channeling the zeitgeist here. The core motivation of “ethics in AI” is the concern that the AI that gets made will be bad or unethical for some reason. This is rumored to be because there are people who know how to create AI–the technical practitioners–who are not thinking through the ethical consequences of their work. There are supposed to be some people who are authorities on what outcomes are good and bad; I’m calling these ‘ethicists’, though I include sociologists of science and lawyers claiming an ethical authority in that term.

What are the dimensions along which these positions vary?

What is the object of the prescription? Are technical professionals having their behavior prescribed? Or is it the specification of the machine that’s being prescribed?

Who is creating the prescription? Is it “technical people” like programmers and computer scientists, or is it people ‘trained in ethics’ like lawyers and sociologists?

When is the judgment being made? Is the judgment being made before the AI system is being created as part of its production process, or is it happening after the fact when it goes live?

These dimensions are not independent from each other and in fact it’s their dependence on each other that makes the problem of AI ethics politically challenging. In general, people would like to pass on responsibility to others and take credit for themselves. Technicians love to pass responsibility to their machines–“the algorithm did it!”. Ethicists love to pass responsibility to technicians. In one view of the ideal world, ethicists would come up with a set of prescriptions, technologists would follow them, and nobody would have any ethical problems with the implementations of AI.

This would entail, more or less, that ethical requirements have been internalized into either technical design processes, engineering principles, or even mathematical specifications. This would probably be great for society as a whole. But the more ethical principles get translated into something that’s useful for engineers, the less ethicists can take credit for good technical outcomes. Some technical person has gotten into the loop and solved the problem. They get the credit, except that they are largely anonymous, and so the product, the AI system, gets the credit for being a reliable, trustworthy product. The more AI products are reliable, trustworthy, good, the less credible are the concerns of the ethicists, whose whole raison d’etre is to prevent the uninformed technologists from doing bad things.

The temptation for ethicists, then, is to sit safely where they can critique after the fact. Ethicists can write for the public condemning evil technologists without ever getting their hands dirty with the problems of implementation. There’s an audience for this and it’s a stable strategy for ethicists, but it’s not very good for society. It winds up putting public pressure on technologists to solve the problem themselves through professional self-regulation or technical specification. If they succeed, then the ethicists don’t have anything to critique, and so it is in the interest of ethicists to cast doubt on these self-regulation efforts without ever contributing to their success. Ethicists have the tricky job of pointing out that technologists are not listening to ethicists, and are therefore suspect, without ever engaging with technologists in such a way that would allow them to arrive at a bona fide ethical technical solution. This is, one must admit, not a very ethical thing to do.

There are exceptions to this bleak and cynical picture!

In fact, yours truly is an exception to this bleak and cynical picture, along with my brilliant co-authors Seda Gürses and Helen Nissenbaum! If you would like to see an honest attempt at translating ethics into computer science so that AI can be more ethical, look no further than:

Sebastian Benthall, Seda Gürses and Helen Nissenbaum (2017), “Contextual Integrity through the Lens of Computer Science”, Foundations and Trends® in Privacy and Security: Vol. 2: No. 1, pp 1-69. http://dx.doi.org/10.1561/3300000016

Contextual Integrity is an ethical framework. I’d go so far as to say that it’s a meta-ethical framework, as it provides a theory of where ethics comes from and why it is important. It’s a theory that’s developed by the esteemed ethicist and friend-of-computer-science Helen Nissenbaum.

In this paper, which you should definitely read, two researchers team up with Helen Nissenbaum to review all the computer science papers we can find that reference Contextual Integrity. One of those researchers is Seda Gürses, a computer scientist with deep background in privacy and security engineering. You essentially can’t find two researchers more credible than Helen and Seda, paired up, on the topic of how to engineer privacy (which is a subset of ethics).

I am also a co-author of this paper. You can certainly find more credible researchers on this topic than myself, but I have the enormous good fortune to have worked with such profoundly wise and respectable collaborators.

Probably the best part about this paper, in my view, is that we’ve managed to write a paper about ethics and computer science (and indeed, AI is a subset of what we are talking about in the paper) which is honestly trying to grapple with the technical challenges of designing ethical systems, while also contending with all the sociological complication of what ethics is. There’s a whole section where we refuse to let computer scientists off the hook from dealing with how norms (and therefore ethics) are the result of a situated and historical process of social adaptation. But then there’s a whole other section where we talk about how developing AI that copes responsibly with the situated and historical process of social adaptation is an open research problem in privacy engineering! There’s truly something for everybody!

Exit vs. Voice as Defecting vs. Cooperation as …

These dichotomies that are often thought of separately are actually the same.

Cooperation          | Defection
---------------------|---------------------
Voice (Hirschman)    | Exit (Hirschman)
Lifeworld (Habermas) | System (Habermas)
Power (Arendt)       | Violence (Arendt)
Institutions         | Markets

Why I will blog more about math in 2018

One reason to study and write about political theory is what Habermas calls the emancipatory interest of human inquiry: to come to better understand the social world one lives in, unclouded by ideology, in order to be more free from those ideological expectations.

This is perhaps counterintuitive since what is perhaps most seductive about political theory is that it is the articulation of so many ideologies. Indeed, one can turn to political theory because one is looking for an ideology that suits them. Having a secure world view is comforting and can provide a sense of purpose. I know that personally I’ve struggled with one after another.

Looking back on my philosophical ‘work’ over the past decade (as opposed to my technical and scientific work), I’d like to declare it an emancipatory success for at least one person, myself. I am happier for it, though at the cost that comes from learning the hard way.

A problem with this blog is that it is too esoteric. It has not been written with a particular academic discipline in mind. It draws rather too heavily from certain big name thinkers that not enough people have read. I don’t provide background material in these thinkers, and so many find this inaccessible.

One day I may try to edit this material into a more accessible version of its arguments. I’m not sure who would find this useful, because much of what I’ve been doing in this work is arriving at the conclusion that actually, truly, mathematical science is the finest way of going about understanding sociotechnical systems. I believe this follows even from deep philosophical engagement with notable critics of this view–and I have truly tried to engage with the best and most notable of these critics. There will always be more of them, but I think at this point I have to make a decision to not seek them out any more. I have tested these views enough to build on them as a secure foundation.

What follows then is a harder but I think more rewarding task of building out the mathematical theory that reflects my philosophical conclusions. This is necessary for, for example, building a technical implementation that expresses the political values that I’ve arrived at. Arguably, until I do this, I’ll have just been beating around the bush.

I will admit to being sheepish about blogging on technical and mathematical topics. This is because in my understanding technical and mathematical writing is held to a higher standard than normal writing. Errors are more clear, and more permanent.

I recognize this now as a personal inhibition and a destructive one. If this blog has been valuable to me as a tool for reading, writing, and developing fluency in obscure philosophical literature, why shouldn’t it also be a tool for reading, writing, and developing fluency in obscure mathematical and technical literature? And to do the latter, shouldn’t I have to take the risk of writing with the same courage, if not abandon?

This is my wish for 2018: to blog more math. It’s a riskier project, but I think I have to in order to keep developing these ideas.

technological determinism and economic determinism

If you are trying to explain society, politics, the history of the world, whatever, it’s a good idea to narrow the scope of what you are talking about to just the most important parts because there is literally only so much you could ever possibly say. Life is short. A principled way of choosing what to focus on is to discuss only those parts that are most significant in the sense that they played the most causally determinative role in the events in question. By widely accepted interventionist theories of causation, what makes something causally determinative of something else is the fact that in a counterfactual world in which the cause was made to be somehow different, the effect would have been different as well.

Since we basically never observe a counterfactual history, this leaves a wide open debate over the general theoretical principles one would use to predict the significance of certain phenomena over others.

One point of view on this is called technological determinism. It is the view that, for a given social phenomenon, what’s really most determinative of it is the technological substrate of it. Engineers-turned-thought-leaders love technological determinism because of course it implies that really the engineers shape society, because they are creating the technology.

Technological determinism is absolutely despised by academic social scientists who have to deal with technology and its role in society. I have a hard time understanding why. Sometimes it is framed as an objection to technologists who are avoiding responsibility for social problems they create because it’s the technology that did it, not them. But such a childish tactic really doesn’t seem to be what’s at stake if you’re critiquing technological determinism. Another way of framing the problem is to say that the way a technology affects society in San Francisco is going to be different from how it affects society in Beijing. Society has its role in a dialectic.

So there is a grand debate of “politics” versus “technology” which reoccurs everywhere. This debate is rather one sided, since it is almost entirely constituted by political scientists or sociologists complaining that the engineers aren’t paying enough attention to politics, seeing how their work has political causes and effects. Meanwhile, engineers-turned-thought-leaders just keep spouting off whatever nonsense comes to their head and they do just fine because, unlike the social scientist critics, engineers-turned-thought-leaders tend to be rich. That’s why they are thought leaders: because their company was wildly successful.

What I find interesting is that economic determinism is never part of this conversation. It seems patently obvious that economics drives both politics and technology. You can be anywhere on the political spectrum and hold this view. Once it was called “dialectical materialism”, and it was the foundation for left-wing politics for generations.

So what has happened? Here are a few possible explanations.

The first explanation is that if you’re an economic determinist, maybe you are smart enough to do something more productive with your time than get into debates about whether technology or politics is more important. You would be doing something more productive, like starting a business to develop a technology that manipulates political opinion to favor the deregulation of your business. Or trying to get a socialist elected so the government will pay off student debts.

A second explanation is… actually, that’s it. That’s the only reason I can think of. Maybe there’s another one?

The Data Processing Inequality and bounded rationality

I have long harbored the hunch that information theory, in the classic Shannon sense, and social theory are deeply linked. It has proven to be very difficult to find an audience for this point of view or an opportunity to work on it seriously. Shannon’s information theory is widely respected in engineering disciplines; many social theorists who are unfamiliar with it are loath to admit that something from engineering should carry essential insights for their own field. Meanwhile, engineers are rarely interested in modeling social systems.

I’ve recently discovered an opportunity to work on this problem through my dissertation work, which is about privacy engineering. Privacy is a subtle social concept but also one that has been rigorously formalized. I’m working on formal privacy theory now and have been reminded of a theorem from information theory: the Data Processing Theorem. What strikes me about this theorem is that it captures a point that comes up again and again in social and political problems, though it’s a point that’s almost never addressed head on.

The Data Processing Inequality (DPI) states that for three random variables, $X$, $Y$, and $Z$, arranged in a Markov chain such that $X \rightarrow Y \rightarrow Z$, we have $I(X,Z) \leq I(X,Y)$, where $I$ stands for mutual information. Mutual information is a measure of how much two random variables carry information about each other. If $I(X,Y) = 0$, that means the variables are independent. $I(X,Y) \geq 0$ always–that’s just a mathematical fact about how it’s defined.
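
As a numerical sanity check (not a proof), here is a small Python sketch that builds an arbitrary toy Markov chain $X \rightarrow Y \rightarrow Z$ out of random distributions and verifies that $I(X,Z)$ does not exceed $I(X,Y)$. All the sizes and transition matrices are made-up assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(p_joint):
    """I(A,B) in bits, given the joint distribution of A and B as a 2-D array."""
    p_a = p_joint.sum(axis=1, keepdims=True)
    p_b = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return np.sum(p_joint[mask] * np.log2(p_joint[mask] / (p_a @ p_b)[mask]))

# Toy Markov chain X -> Y -> Z with random transition matrices.
n_x, n_y, n_z = 8, 4, 8
p_x = rng.dirichlet(np.ones(n_x))
p_y_given_x = rng.dirichlet(np.ones(n_y), size=n_x)   # row x is P(Y | X = x)
p_z_given_y = rng.dirichlet(np.ones(n_z), size=n_y)   # row y is P(Z | Y = y)

p_xy = p_x[:, None] * p_y_given_x        # joint P(X, Y)
p_xz = p_xy @ p_z_given_y                # joint P(X, Z): Z sees X only through Y

print(f"I(X,Y) = {mutual_information(p_xy):.3f} bits")
print(f"I(X,Z) = {mutual_information(p_xz):.3f} bits  (DPI: never larger than I(X,Y))")
```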

The implications of this for psychology, social theory, and artificial intelligence are I think rather profound. It provides a way of thinking about bounded rationality in a simple and generalizable way–something I’ve been struggling to figure out for a long time.

Suppose that there’s a big world out there, $W$, and there’s an organism, or a person, or a sociotechnical organization within it, $Y$. The world is big and complex, which implies that it has a lot of informational entropy, $H(W)$. Through whatever sensory apparatus is available to $Y$, it acquires some kind of internal sensory state. Because this organism is much smaller than the world, its entropy is much lower. There are many fewer possible states that the organism can be in, relative to the number of states of the world: $H(W) \gg H(Y)$. This in turn bounds the mutual information between the organism and the world: $I(W,Y) \leq H(Y)$.

Now let’s suppose the actions that the organism takes, $Z$, depend only on its internal state. It is an agent, reacting to its environment. Whatever these actions are, they can only be as calibrated to the world as the agent had capacity to absorb the world’s information. I.e., $I(W,Z) \leq H(Y) \ll H(W)$. The implication is that the more limited the mental capacity of the organism, the more its actions will be approximately independent of the state of the world that precedes it.
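
Here is a companion sketch of this bounded-rationality reading. Even a maximally attentive agent, whose internal state $Y$ is a deterministic summary of the world $W$ and whose action $Z$ simply expresses that state, carries at most about $\log_2 |Y|$ bits of the world into its actions, far less than $H(W)$. Again, the numbers are arbitrary toy assumptions, not a model of any particular agent.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(p_joint):
    """I(A,B) in bits from a 2-D joint distribution."""
    p_a = p_joint.sum(axis=1, keepdims=True)
    p_b = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return np.sum(p_joint[mask] * np.log2(p_joint[mask] / (p_a @ p_b)[mask]))

n_world = 1024                                  # a "big" world W
p_w = rng.dirichlet(np.ones(n_world))
h_w = -np.sum(p_w * np.log2(p_w))
print(f"H(W) = {h_w:.2f} bits")

for n_organism in (2, 4, 16):                   # size of the organism's state space Y
    f = rng.integers(0, n_organism, size=n_world)   # sensing: Y = f(W)
    p_wz = np.zeros((n_world, n_organism))
    for w in range(n_world):
        p_wz[w, f[w]] = p_w[w]                  # action Z just reports Y, so Z = f(W)
    print(f"|Y| = {n_organism:3d}: I(W,Z) = {mutual_information(p_wz):.3f} bits "
          f"(capped by log2|Y| = {np.log2(n_organism):.0f})")
```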

There are a lot of interesting implications of this for social theory. Here are a few cases that come to mind.

I've written quite a bit here (blog links) and here (arXiv) about Bostrom’s superintelligence argument and why I’m generally not concerned with the prospect of an artificial intelligence taking over the world. My argument is that there are limits to how much an algorithm can improve itself, and these limits put a stop to exponential intelligence explosions. I’ve been criticized on the grounds that I don’t specify what the limits are, and that if the limits are high enough then maybe relative superintelligence is possible. The Data Processing Inequality gives us another tool for estimating the bounds of an intelligence based on the range of physical states it can possibly be in. How calibrated can a hegemonic agent be to the complexity of the world? It depends on the capacity of that agent to absorb information about the world; that can be measured in information entropy.

A related case is a rendering of Scott’s Seeing Like a State arguments. Why is it that “high modernist” governments failed to successfully control society through scientific intervention? One reason is that the complexity of the system they were trying to manage vastly exceeded the complexity of the centralized control mechanisms. Centralized control was very blunt, causing many social problems. Arguably, behavioral targeting and big data centers today equip controlling organizations with more informational capacity (more entropy), but they still get it wrong sometimes, causing privacy violations, because they can’t model the entirety of the messy world we’re in.

The Data Processing Inequality is also helpful for explaining why the world is so messy. There are a lot of different agents in the world, and each one only has so much bandwidth for taking in information. This means that most agents are acting almost independently from each other. The guiding principle of society isn’t signal, it’s noise. That explains why there are so many disorganized heavy tail distributions in social phenomena.

Importantly, if we let the world at any time slice be informed by the actions of many agents acting nearly independently from each other in the slice before, then that increases the entropy of the world. This increases the challenge for any particular agent to develop an effective controlling strategy. For this reason, we would expect the world to get more out of control the more intelligent agents are on average. The popularity of the personal computer perhaps introduced a lot more entropy into the world, distributed in an agent-by-agent way. Moreover, powerful controlling data centers may increase the world’s entropy, rather than reducing it. So even if, for example, Amazon were to try to take over the world, the existence of Baidu would be a major obstacle to its plans.

There are a lot of assumptions built into these informal arguments and I’m not wedded to any of them. But my point here is that information theory provides useful tools for thinking about agents in a complex world. There’s potential for using it for modeling sociotechnical systems and their limitations.

Net neutrality

What do I think of net neutrality?

I think ending it is bad for my personal self-interest. I am, economically, a part of the newer tech economy of software and data. I believe this economy benefits from net neutrality. I also am somebody who loves The Web as a consumer. I’ve grown up with it. It’s shaped my values.

From a broader perspective, I think ending net neutrality will revitalize U.S. telecom and give it leverage over the ‘tech giants’–Google, Facebook, Apple, Amazon—that have been rewarded by net neutrality policies. Telecom is a platform, but it had been turned into a utility platform. Now it can be a full-featured market player. This gives it an opportunity for platform envelopment, moving into the markets of other companies and bundling them in with ISP services.

Since this will introduce competition into the market and other players are very well-established, this could actually be good for consumers because it breaks up an oligopoly in the services that are most user-facing. On the other hand, since ISPs are monopolists in most places, we could also expect Internet-based service experience quality to deteriorate in general.

What this might encourage is a proliferation of alternatives to cable ISPs, which would be interesting. Ending net neutrality creates a much larger design space in products that provision network access. Mobile companies are in this space already. So we could see this regulation as a move in favor of the cell phone companies, not just the ISPs. This too could draw surplus away from the big four.

This probably means the end of “The Web”. But we’d already seen the end of “The Web” with the proliferation of apps as a replacement for Internet browsing. IoT provides yet another alternative to “The Web”. I loved the Web as a free, creative place where everyone could make their own website about their cat. It had a great moment. But it’s safe to say that it isn’t what it used to be. In fifteen years it may be that most people no longer visit web sites. They just use connected devices and apps. Ending net neutrality means that the connectivity necessary for these services can be bundled in with the service itself. In the long run, that should be good for consumers and even the possibility of market entry for new firms.

In the long run, I’m not sure “The Web” is that important. Maybe it was a beautiful disruptive moment that will never happen again. Or maybe, if there were many more kinds of alternatives, “The Web” would return to being the quirky, radically free and interesting thing it was before it got so mainstream. Remember when The Web was just The Well (which is still around), and only people who were really curious about it bothered to use it? I don’t, because that was well before my time. But it’s possible that the Internet in its browse-happy form will become something like that again.

I hadn’t really thought about net neutrality very much before, to be honest. Maybe there are some good rebuttals to this argument. I’d love to hear them! But for now, I think I’m willing to give the shuttering of net neutrality a shot.