Digifesto


what if computers don’t actually control anything important?

I’ve written a lot (here, informally) on the subject of computational control of society. I’m not the only one, of course. There has in the past few years been a growing fear that one day artificial intelligence might control everything. I’ve argued that this is akin to older fears that, under capitalism, instrumentality would run amok.

Recently, thinking a little more seriously about what’s implied by an economy of control, I’ve been coming around to a quite different conclusion. What if the general tendency of these algorithmic systems is not the enslavement of humanity but rather the opening up of freedom and opportunity? This is not a critical attitude and might be seen as simple shilling for industrial powers, so let me pose the point slightly more controversially. What if the result of these systems is to provide so much freedom and opportunity that it undermines the structure that makes social action significant? The “control” of these systems could just be the result of our being exposed, at last, to our individual insignificance in the face of each other.

As a foil, I’ll refer again to Frank Pasquale’s The Black Box Society, which I’ve begun to read again at the prompting of Pasquale himself. It is a rare and wonderful thing for the author of a book you’ve written rude things about to write you and tell you you’ve misrepresented the work. So often I assume nobody’s actually reading what I write, making this a lonely vocation indeed. Now I know that at least somebody gives a damn.

In Chapter 3, Pasquale writes:

“The power to include, exclude, and rank [in search results] is the power to ensure which public impressions become permanent and which remain fleeting. That is why search services, social and not, are ‘must-have’ properties for advertisers as well as users. As such, they have made very deep inroads indeed into the sphere of cultural, economic, and political influence that was once dominated by broadcast networks, radio stations, and newspapers. But their dominance is so complete, and their technology so complex, that they have escaped pressures for transparency and accountability that kept traditional media answerable to the public.”

As a continuation of the “technics-out-of-control” meme, there’s an intuitive thrust to this argument. But looking at the literal meaning of the sentences, none of it is actually true!

Let’s look at some of the reasons why these claims are false:

  • There are multiple competing search engines, and switching costs are very low. There are Google and Bing and DuckDuckGo, but there are also more specialized search engines for particular kinds of things. Literally every branded shopping website has a search engine that includes only what it chooses to include. This market pressure drives search engines generally to provide people with the answers they are looking for.
  • While there is a certain amount of curation that goes into search results, the famous early ranking logic that made large-scale search possible used mainly data created as part of the content itself (hyperlinks, in the case of Google’s PageRank; see the sketch after this list) or of its usage (engagement, in the case of Facebook’s EdgeRank). To the extent that these algorithms have changed, much of that has been because they have had to cave to public pressure, in the form of market pressure. Many of these changes are based on dynamic, socially created data as well (such as spam flagging). Far from being manipulated by a secret powerful force, search engine results are always a dynamic, social accomplishment that is a reflection of the public.
  • Alternative media forms, such as broadcast radio, print journalism, cable television, storefront advertising, and so on, still exist and have an influence over people’s decisions. No single digital technology ensures anything! A new restaurant that opens up in a neighborhood is free to gain a local reputation in the old-fashioned way. And then these same systems for ranking and search incentivize the discovery of such local gems by design. The information economy doesn’t waste opportunities like this!
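
To make the second bullet concrete, here is a minimal power-iteration sketch of the idea behind PageRank. This is my own toy illustration, not Google’s actual implementation; the function, the toy graph, and the parameter values are assumptions for demonstration only.

```python
# A toy power-iteration sketch of the PageRank idea (an illustration,
# not Google's implementation). The ranking signal comes entirely from
# hyperlinks that page authors themselves created.
import numpy as np

def pagerank(adj: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """adj[i, j] = 1 if page i links to page j."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1  # avoid dividing by zero for dangling pages
    # transition[i, j] = probability of following a link from page j to page i
    transition = (adj / out_degree).T
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        # With probability `damping` follow a link; otherwise jump to a random page.
        rank = (1 - damping) / n + damping * transition @ rank
    return rank

# Toy web: 0 -> 2, 1 -> {0, 2}, 2 -> 1. Page 2 has the most in-links.
links = np.array([[0, 0, 1],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
print(pagerank(links).round(3))  # page 2 ranks highest
```

The thing to notice is that the ranking is computed entirely from data the public itself produced (the links), which is the sense in which search results are a social accomplishment rather than an editor’s decree.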

So what’s the problem? If algorithms aren’t controlling society, but rather are facilitating its self-awareness, maybe these kinds of polemics are just way off base.

economy of control

We call it a “crisis” when the predictions of our trusted elites are violated in one way or another. We expect, for good reason, things to more or less continue as they are. They’ve evolved to be this way, haven’t they? The older the institution, the more robust to change it must be.

I’ve gotten comfortable in my short life with the global institutions that appeared to be the apex of societal organization. Under these conditions, I found James Beniger’s work to be particularly appealing, as it predicts the growth of information processing apparatuses (some combination of information worker and information technology) as formerly independent components of society integrate. I’m of the class of people that benefits from this kind of centralization of control, so I was happy to believe that this was an inevitable outcome according to physical law.

Now I’m not so sure.

I am not sure I’ve really changed my mind fundamentally. This extreme Beniger view is too much like Nick Bostrom’s superintelligence argument in form, and I’ve already thought hard about why that argument is not a good one. That reasoning stopped at the point of noting how superintelligence “takeoff” is limited by data collection. But I did not go to the next and probably more important step, which is the problem of aleatoric uncertainty in a world with multiple agents. We’re far more likely to get into a situation with multipolar large intelligences that are themselves fraught with principal-agent problems, because that’s actually the status quo.

I’ve been prodded to revisit The Black Box Society, which I’ve dealt with inadequately. Its beefier chapters deal with a lot of the specific economic and regulatory recent history of the information economy of the United States, which is a good complement to Beniger and a good resource for the study of competing intelligences within a single economy, though I find this data a bit clouded by the polemical writing.

“Economy” is the key word here. Pure, Arendtian politics and technics have not blended easily, but what they’ve turned into is a self-regulatory system with structure and agency. More than that, the structure is for sale, and so is the agency. What is interesting about the information economy, and I guess I’m trying to coin a phrase here, is that it is an economy of control. The “good” being produced, sold, and bought is control.

There’s a lot of interesting research about information goods, but I’ve never heard of a “control good.” Yet this is what we are talking about when we talk about software, data collection, managerial labor, and the conflicts and compromises that they create.

I have a few intuitions about where this goes, but not as many as I’d like. I think this is because the economy of control is quite messy and hard to reason about.

trust issues and the order of law and technology cf @FrankPasquale

I’ve cut to the last chapter of Pasquale’s The Black Box Society, “Towards an Intelligible Society.” I’m interested in where the argument goes. Now that I’ve gotten through it, I see that the penultimate chapter contains Pasquale’s specific policy recommendations. But as I’m not just reading for policy and framing but also for tone and underlying theoretical commitments, I think it’s worth recording some first impressions before doubling back.

These are some points Pasquale makes in the concluding chapter that I wholeheartedly agree with:

  • A universal basic income would allow more people to engage in high risk activities such as the arts and entrepreneurship and more generally would be great for most people.
  • There should be publicly funded options for finance, search, and information services. A great way to provide these would be to fund the development of open source algorithms for finance and search. I’ve been into this idea for so long and it’s great to see a prominent scholar like Pasquale come to its defense.
  • Regulatory capture (or, as he elaborates following Charles Lindblom, “regulatory circularity”) is a problem. Revolving door participation in government and business makes government regulation an unreliable protector of the public interest.

There is quite a bit in the conclusion about the specifics of regulating the finance industry. There is an impressive amount of knowledge presented about this, and I’ll admit much of it is over my head. I’ll probably have a better sense of it if I get to reading the chapter that is specifically about finance.

There are some things that I found bewildering or off-putting.

For example, there is a section on “Restoring Trust” that talks about how an important problem is that we don’t have enough trust in the reputation and search industries. His solution is to increase the penalties that the FTC and FCC can impose on Google and Facebook for, e.g., privacy violations. The current penalties are too trivial to be an effective deterrent. But, Pasquale argues,

It is a broken enforcement model, and we have black boxes to thank for much of this. People can’t be outraged by what they can’t understand. And without some public concern about the trivial level of penalties for lawbreaking here, there are no consequences for the politicians ultimately responsible for them.

The logic here is a little mad. Pasquale is saying that people are not outraged enough by search and reputation companies to demand harsher penalties, and this is a problem because people don’t trust these companies enough. The solution is to convince people to trust these companies less (to get outraged by them) in order to get them to punish the companies more.

This is a bit troubling, but makes sense based on Pasquale’s theory of regulatory circularity, which turns politics into a tug-of-war between interests:

The dynamic of circularity teaches us that there is no stable static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by respecting privacy or investing in sustainable growth), or industry is pushing regulators to promote its own interests.

There’s a simplicity to this that I distrust. It suggests, for one, that there are no public pressures on industry besides the government, such as consumers’ buying power. A lot of Pasquale’s arguments depend on the monopolistic power of certain tech giants. But while network effects are strong, it’s not clear that the problem is so severe that consumers have no market power. In many cases tech giants compete with each other even when it looks like they aren’t. For example, many people have both Facebook and Gmail accounts. Since there is somewhat redundant functionality in both, consumers can rather seamlessly allocate their time, which is tied to advertising revenue, according to which service they feel better serves them, or which is better reputationally. So social media (which is a bit like a combination of a search and reputation service) is not a monopoly. Similarly, if people have multiple search options available to them because, say, they have both Siri on their smartphone and can search Google directly, then that provides an alternative search market.

Meanwhile, government officials are also often self-interested. If there is a road to hell for industry, namely to provide free web services to people to attain massive scale, then abuse economic lock-in to extract value from customers, then lobby for further rent-seeking, there is a similar road to hell in government. It starts with populist demagoguery, leads to stable government appointment, and then leverages that power for rents in status.

So, power is power. Everybody tries to get power. The question is what you do once you get it, right?

Perhaps I’m reading between the lines too much. Of course, my evaluation of the book should depend most on the concrete policy recommendations which I haven’t gotten to yet. But I find it unfortunate that what seems to be a lot of perfectly sound history and policy analysis is wrapped in a politics of professional identity that I find very counterproductive. The last paragraph of the book is:

Black box services are often wondrous to behold, but our black-box society has become dangerously unstable, unfair, and unproductive. Neither New York quants nor California engineers can deliver a sound economy or a secure society. Those are the tasks of a citizenry, which can perform its job only as well as it understands the stakes.

Implicitly, New York quants and California engineers are not citizens, to Pasquale, a law professor based in Maryland. Do all real citizens live around Washington, DC? Are they all lawyers? If the government were to start providing public information services, either by hosting them themselves or by funding open source alternatives, would he want everyone designing these open algorithms (who would be quants or engineers, I presume) to move to DC? Do citizens really need to understand the stakes in order to get this to happen? When have citizens, en masse, understood anything, really?

Based on what I’ve read so far, The Black Box Society is an expression of a lack of trust in the social and economic power associated with quantification and computing that took off in the past few dot-com booms. Since expressions of distrust for these industries are nothing new, one might wonder (under the influence of Foucault) how the quantified order and the critique of the quantified order manage to coexist and recreate a system of discipline that includes both and maintains its power as a complex of superficially agonistic forces. I give sincere credit to Pasquale for advocating both serious income redistribution and public investment in open technology as ways of disrupting that order. But when he falls into the trap of engendering partisan distrust, he loses my confidence.

“Transactions that are too complex…to be allowed to exist.” cf @FrankPasquale

I stand corrected; my interpretation of Pasquale in my last post was too narrow. Having completed Chapter One of The Black Box Society (TBBS), I see that Pasquale does not take the naive view that all organizational secrecy should be abolished, as I might once have. Rather, his is a more nuanced perspective.

First, Pasquale distinguishes three “critical strategies for keeping black boxes closed,” which together produce opacity, “[Pasquale’s] blanket term for remediable incomprehensibility”:

  • Real secrecy “establishes a barrier between hidden content and unauthorized access to it.”
  • Legal secrecy “obliges those privy to certain information to keep it secret.”
  • Obfuscation “involves deliberate attempts at concealment when secrecy has been compromised.”

Cutting to the chase by looking at the Pasquale and Bracha “Federal Search Commission” (2008) paper that a number of people have recommended to me, it appears (in my limited reading so far) that Pasquale’s position is not that opacity in general is a problem, because there are of course important uses of opacity that serve the public interest, such as confidentiality. Rather, despite these legitimate uses of opacity, there is also a need for public oversight, perhaps through federal regulation, because the Federal Government serves the public interest better than the imperfect market for search can on its own.

There is perhaps a tension between this 2008 position and what is expressed in Chapter 1 of TBBS in the section “The One-Way Mirror,” which gets, I dare say, a little conspiratorial about The Powers That Be. “We are increasingly ruled by what former political insider Jeff Connaughton called ‘The Blob,’ a shadowy network of actors who mobilize money and media for private gain, whether acting officially on behalf of business or of government.” Here, Pasquale appears to espouse a strong theory of regulatory capture from which, were we to insist on consistency, a Federal Search Commission would presumably not be exempt. Hence, perhaps, the role of TBBS in stirring popular sentiment to put political pressure on the elites of The Blob.

Though it is a digression, I will note, since it is a pet peeve of mine, Pasquale’s objection to mathematized governance:

“Technocrats and managers cloak contestable value judgments in the garb of ‘science’: thus the insatiable demand for mathematical models that reframe the subtle and subjective conclusions (such as the worth of a worker, service, article, or product) as the inevitable dictate of salient, measurable data. Big data driven decisions may lead to unprecedented profits. But once we use computation not merely to exercise power over things, but also over people, we need to develop a much more robust ethical framework than ‘the Blob’ is now willing to entertain.”

That this sentiment, that scientists should not be making political decisions, has been articulated at least since Hannah Arendt’s 1958 The Human Condition is an indication that there is nothing particular to Big Data about this anxiety. And indeed, if we think about ‘computation’ as broadly as mathematized, algorithmic thought, then its use for control over people-not-just-things has an even longer history. Lukacs’ 1923 “Reification and the Consciousness of the Proletariat” is a profound critique of Tayloristic scientific factory management that is getting close to a hundred years old.

Perhaps a robust ethics of quantification has been in the works for some time as well.

Moving past this, by the end of Chapter 1 of TBBS Pasquale gives us the outline of the book and the true crux of his critique, which is the problem of complexity. Whether or not regulators are successful in opening the black boxes of Silicon Valley or Wall Street (or the branches of government that are complicit with Silicon Valley and Wall Street), their efforts will be in vain if what they get back from the organizations they are trying to regulate is too complex for them to understand.

Following the thrust of Pasquale’s argument, we can see that for him, complexity is the result of obfuscation. It is therefore a source of opacity, which as we have noted he has defined as “remediable incomprehensibility”. Pasquale promises, by the end of the book, to give us a game plan for creating, legally, the Intelligible Society. “Transactions that are too complex to explain to outsiders may well be too complex to be allowed to exist.”

This gets us back to the question we started with, which is whether this complexity and incomprehensibility is avoidable. Suppose we were to legislate against institutional complexity: what would that cost us?

Mathematical modeling gives us the tools we need to analyze these kinds of questions. Information theory, the theory of computation, and complexity theory are all foundational to the technology of telecommunications and data science. People with expertise in understanding complexity, and the limits of our ability to control it, are precisely the people who make the ubiquitous algorithms on which society depends today. But this kind of theory rarely makes it into “critical” literature such as TBBS.
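
As one small illustration of what this theory offers (a toy of my own, not an argument from TBBS): Kolmogorov complexity, the length of the shortest program that reproduces a text, is uncomputable, but compressed size is a computable upper bound on it, and comparing compressed sizes gives a crude operational test of whether a “complex” disclosure is irreducibly complex or merely padded, that is, whether its incomprehensibility is remediable.

```python
# A toy of my own (not from TBBS): compressed size as a computable upper
# bound on Kolmogorov complexity, used as a crude test for "remediable"
# complexity.
import os
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size over original size; lower means more redundancy."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# Repetitive contract boilerplate compresses well: its bulk is remediable.
boilerplate = "the party of the first part agrees to indemnify " * 200
# Hex-encoded random bytes stand in for an irreducibly complex disclosure.
noise = os.urandom(len(boilerplate) // 2).hex()

print(f"boilerplate: {compression_ratio(boilerplate):.3f}")  # close to 0
print(f"noise:       {compression_ratio(noise):.3f}")        # near 0.5 for hex
```

Crude as it is, a measure like this would let a regulator distinguish a filing that is long because the underlying transaction is intricate from one that is long because someone generated redundant boilerplate.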

I’m drawn to the example of The Social Media Collective’s Critical Algorithm Studies Reading List, which lists Pasquale’s TBBS among many other works, because it opens with precisely the disciplinary gatekeeping that creates what I fear is the blind spot I’m pointing to:

This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

As a result, our list does not contain much writing by computer scientists, nor does it cover potentially relevant work on topics such as quantification, rationalization, automation, software more generally, or big data, although these interests are well-represented in the reference sections of the essays themselves.

This area is growing in size and popularity so quickly that many contributions are popping up without reference to work from disciplinary neighbors. One goal for this list is to help nascent scholars of algorithms to identify broader conversations across disciplines and to avoid reinventing the wheel or falling into analytic traps that other scholars have already identified.

This reading list is framed as a tool for scholars, which it no doubt is. But if contributors to this field of scholarship aspire, as Pasquale does, for “critical algorithms studies” to have real policy ramifications, then this disciplinary wall must fall (as I’ve argued elsewhere).

organizational secrecy and personal privacy as false dichotomy cf @FrankPasquale

I’ve turned from page 2 to page 3 of The Black Box Society (I can be a slow reader). Pasquale sets up the dichotomy on which the drama of the book hinges like so:

But while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods”, and gag rules, our own lives are increasingly open books. Everything we do online is recorded; the only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

That incongruity is the focus of this book.

This is a rhetorically powerful paragraph, and it captures a lot of the trepidation people have about the power of large organizations relative to themselves.

I have been inclined to agree with this perspective for a lot of my life. I used to be the kind of person who thought Everything Should Be Open. Since then, I’ve developed what I think is a more nuanced view of transparency: some secrecy is necessary. It can be especially necessary for powerful organizations and people.

Why?

Well, it depends on the physical properties of information. (Here is an example of how a proper understanding of the mechanics of information can support the transcendent project as opposed to a merely critical project).

Any time you interact with something or somebody else in a meaningful way, you affect the state of each other in probabilistic space. That means there has been some kind of flow of information. If an organization interacts with a lot of people, it is going to absorb information about a lot of people. Recording this information as ‘data’ is something that has been done for a long time, because that is what allows organizations to do intelligent things vis-à-vis the people they interact with. So businesses, financial institutions, and governments recording information about people is nothing new.
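
Shannon’s information theory makes this precise; the formalization below is my gloss, not anything from Pasquale. The mutual information between a person X and an organization’s records Y is

```latex
I(X;Y) \;=\; \sum_{x,\,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} \;\ge\; 0
```

and it equals zero exactly when X and Y are statistically independent. Records cannot both be useful to the organization (that is, correlated with the people it interacts with) and carry no information about those people; the only question is who else gets to see them.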

Pasquale suggests that this recording is a threat to our privacy, and that the secrecy of the organizations that do the recording gives them power over us. But this is surely a false dichotomy. Why? Because if an organization records information about a lot of people, and then doesn’t maintain some kind of secrecy, then that information is no longer private! To, like, everybody else. In other words, maintaining secrecy is one way of ensuring confidentiality, which is surely an important part of privacy.

I wonder what happens if we continue to read The Black Box Society with this link between secrecy, confidentiality, and privacy in mind.

Is the opacity of governance natural? cf @FrankPasquale

I’ve begun reading Frank Pasquale’s The Black Box Society on the recommendation that it’s a good place to start if I’m looking to focus a defense of the role of algorithms in governance.

I’ve barely started and already found lots of juicy material. For example:

Gaps in knowledge, putative and real, have powerful implications, as do the uses that are made of them. Alan Greenspan, once the most powerful central banker in the world, claimed that today’s markets are driven by an “unredeemably opaque” version of Adam Smith’s “invisible hand,” and that no one (including regulators) can ever get “more than a glimpse at the internal workings of the simplest of modern financial systems.” If this is true, libertarian policy would seem to be the only reasonable response. Friedrich von Hayek, a preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to benevolent government intervention in the economy.

But what if the “knowledge problem” is not an intrinsic aspect of the market, but rather is deliberately encouraged by certain businesses? What if financiers keep their doings opaque on purpose, precisely to avoid and confound regulation? That would imply something very different about the merits of deregulation.

The challenge of the “knowledge problem” is just one example of a general truth: What we do and don’t know about the social (as opposed to the natural) world is not inherent in its nature, but is itself a function of social constructs. Much of what we can find out about companies, governments, or even one another, is governed by law. Laws of privacy, trade secrecy, the so-called Freedom of Information Act–all set limits to inquiry. They rule certain investigations out of the question before they can even begin. We need to ask: To whose benefit?

There are a lot of ideas here. Trying to break them down:

  1. Markets are opaque.
  2. If markets are naturally opaque, that is a reason for libertarian policy.
  3. If markets are not naturally opaque but are opaque on purpose, then that’s a reason to regulate in favor of transparency.
  4. As a general social truth, the social world is not naturally opaque but rather opaque or transparent because of social constructs such as law.

We are meant to conclude that markets should be regulated for transparency.

The most interesting claim to me is what I’ve listed as the fourth one, as it conveys a worldview that is both disputable and which carries with it the professional biases we would expect of the author, a Professor of Law. While there are certainly many respects in which this claim is true, I don’t yet believe it has the force necessary to carry the whole logic of this argument. I will be particularly attentive to this point as I read on.

The danger I’m on the lookout for is one where the complexity of the integration of society, which following Beniger I believe to be a natural phenomenon, is treated as a politically motivated social construct and therefore as something that should be changed. It is really only the part after the “and therefore” that I’m contesting. It is possible for politically motivated social constructs to be natural phenomena. All institutions have winners and losers relative to their power. Who would a change in policy towards transparency in the market benefit? If opacity is natural, such a change would only shift the opacity to some other part of society, empowering a different group of people. (Possibly lawyers.)

If opacity is necessary, then perhaps we could read The Black Box Society as an expression of the general problem of alienation. It is way premature for me to attribute this motivation to Pasquale, but it is a guiding hypothesis that I will bring with me as I read the book.