Digifesto

Tag: frank pasquale

Notes on Pasquale, “Tech Platforms and the Knowledge Problem”, 2018

I’ve taken a close look at Frank Pasquale’s recent article, “Tech Platforms and the Knowledge Problem” in American Affairs. This is a topic that Pasquale has had his finger on the pulse of for a long time, and I think with this recent articulation he’s really on to something. It’s an area that’s a bit of an attractor state in tech policy thinking at the moment, and as I appear to be in that mix more than ever before, I wanted to take a minute to parse out Frank’s view of the state of the art.

Here’s the setup: In 1945, Hayek points out that the economy needs to be coordinated somehow, and that this coordination is the main economic use of information/knowledge. Hayek sees knowledge as distributed, with coordination accomplished through the price mechanism. Today we have giant centralizing organizations like Google and Amazon mediating markets, and it’s possible that these play the kind of ‘central planning’ role that Hayek didn’t want. The status quo is that these companies run things in an unregulated way. Pasquale, being a bit of a regulatory hawk, not unreasonably thinks this may be disappointing, and he traces out two different modes of regulatory action that could respond to the alleged tech company dominance.

He does this with a nice binary opposition between Jeffersonians, who want to break up the big companies into smaller ones, and Hamiltonians, who want to keep the companies big but regulate them as utilities. His choice of Proper Nouns is a little odd to me, since many of his Hamiltonians are socialists, which doesn’t sound very Hamiltonian to me, but whatever: what can you do, writing for Americans? This table sums up some of the contrasts. Where I’m introducing new components myself, I’ve marked them with a question mark (?).

| Jeffersonian | Hamiltonian |
| --- | --- |
| Classical competition | Schumpeterian competition |
| Open Markets Institute, Lina Khan | Big is Beautiful, Rob Atkinson; Evgeny Morozov; fully automated luxury communism |
| Regulatory capture (?) | Natural monopoly |
| Block mergers: unfair bargaining power | Encourage mergers: better service quality |
| Allow data flows to third parties to reduce market barriers | Security feudalism to prevent runaway data; regulate to increase market barriers |
| Absentee ownership reduces corporate responsibility | Many small companies, each unaccountable with little to lose, reduces corporate responsibility |
| Bargaining power of massive firms a problem | Lobbying power of massive firms a problem (?) |
| Exit | Voice |
| Monopoly reduces consumer choice | Centralized paternalistic AI is better than consumer choice |
| Monopoly abuses fixed by competition | Monopoly abuses fixed by regulation |
| Distrust complex, obscure corporate accountability | Distrust small companies and entrepreneurs |
| Platforms lower quality, killing competition | Platforms improve quality via data size, AI advances; economies of scale |
| Antitrust law | Public utility law |
| FTC | Federal Search Commission? |
| Libertarianism | Technocracy |
| Capitalism | Socialism |
| Smallholding entrepreneur is hero | Responsible regulator/executive is hero |

There is a lot going on here, but I think the article does a good job of developing two sides of a dialectic about tech companies and their regulation that’s been emerging. These framings extend beyond the context of the article. A lot of blockchain proponents are Jeffersonian, and their opponents are Hamiltonian, in this schema.

I don’t have much to add at this point except for the observation that it’s very hard to judge the “natural” amount of industrial concentration in these areas, in part because of the crudeness of the way we measure concentration. We readily pay attention to the top five or ten companies in a sector, but we do so by ignoring the hundred or thousand or more very small companies. It’s simply incorrect to say that there is only one search engine or social network; rather, the size distribution across the many, many search engines and social networks is very skewed, like a heavy-tailed or log-normal distribution. There may be perfectly neutral, “complex systems” oriented explanations for this distribution that make it very robust under a number of possible interventions.
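Since the measurement point here is quantitative, a minimal sketch in Python can make it concrete. All parameters are assumptions for illustration (the log-normal spread, the firm count, and the 0.1% cutoff are made up, not data); the sketch just shows how a crude top-five share and a whole-distribution Herfindahl-Hirschman index describe the same skewed market:

```python
import random

random.seed(0)

# Hypothetical sketch: sample 10,000 firm sizes for a "search engine" market
# from a log-normal distribution, the kind of skewed shape described above.
sizes = [random.lognormvariate(0, 2.5) for _ in range(10_000)]
total = sum(sizes)
shares = sorted((s / total for s in sizes), reverse=True)

# Crude concentration measure: combined market share of the top 5 firms.
top5_share = sum(shares[:5])

# Herfindahl-Hirschman index over the *whole* distribution (0..1 scale):
# still weighted toward the big firms, but it at least counts every firm.
hhi = sum(s * s for s in shares)

print(f"top-5 share:            {top5_share:.2f}")
print(f"HHI:                    {hhi:.4f}")
print(f"firms above 0.1% share: {sum(1 for s in shares if s > 0.001)}")
```

Under a distribution like this, a handful of leaders and thousands of tiny firms coexist, so “there is only one search engine” and “there are thousands of search engines” can both be read off the same data, depending on which measure you use.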

If that’s true, there will always be many, many small companies and a few market leaders in the tech sector. The small companies will benefit from Jeffersonian policies, and those invested in the market leaders will benefit (in some sense) from Hamiltonian policies. The question of which strategy to take then becomes a political matter: it depends on the self-interest of differently positioned people in the socio-economic matrix. Or, alternatively, there is no tension between pursuing both kinds of policy agenda, because they target different groups that will persist no matter what regime is in place.

managerialism, continued

I’ve begun preliminary skimmings of Enteman’s Managerialism. It is a dense work of analytic philosophy, thick with argument. Sporadic summaries may not do it justice. That said, the principle of this blog is that the bar for ‘publication’ is low.

According to its introduction, Enteman’s Managerialism is written by a philosophy professor (Willard Enteman) who kept finding that the “great thinkers”–Adam Smith, Karl Marx–and the theories espoused in their writing kept getting debunked by his students. Contemporary examples showed that, contrary to conventional wisdom, the United States was not a capitalist country whose only alternative was socialism. In his observation, the United States in 1993 was neither strictly speaking capitalist, nor was it socialist. There was a theoretical gap that needed to be filled.

One of the concepts reintroduced by Enteman is Robert Dahl’s concept of polyarchy, or “rule by many”. A polyarchy is neither a dictatorship nor a democracy, but rather a form of government in which many different people with different interests (though probably not everybody) are in charge. It represents some necessary but probably insufficient conditions for democracy.

This view of power seems evidently correct in most political units within the United States. Now I am wondering if I should be reading Dahl instead of Enteman. It appears that Dahl was mainly offering this political theory in contrast to a view that posited that political power was mainly held by a single dominant elite. In a polyarchy, power is held by many different kinds of elites in contest with each other. At its democratic best, these elites are responsive to citizen interests in a pluralistic way, and this works out despite the inability of most people to participate in government.

I certainly recommend the Wikipedia articles linked above. I find I’m sympathetic to this view, having come around to something like it myself but through the perhaps unlikely path of Bourdieu.

This still frames the discussion of political power in terms of the powers of particular people. Managerialism, if I’m reading it right, makes the case that individual power is not atomic but derives from organizational power. This makes sense; we can look at powerful individuals having an influence on government, but a more useful lens looks to powerful companies and civil society organizations, because these shape the incentives of the powerful people within them.

I should make a shift I’ve made just now explicit. When we talk about democracy, we are often talking about a formal government, like a sovereign nation or municipal government. But when we talk about powerful organizations in society, we are no longer just talking about elected officials and their appointees. We are talking about several different classes of organizations–businesses, civil society organizations, and governments among them–interacting with each other.

It may be that that’s all there is to it. Maybe Capitalism is an ideology that argues for more power to businesses, Socialism is an ideology that argues for more power to formal government, and Democracy is an ideology that argues for more power to civil society institutions. These are zero-sum ideologies. Managerialism would be a theory that acknowledges the tussle between these sectors at the organizational level, as opposed to at the atomic individual level.

The reason why this is a relevant perspective to engage with today is that there has probably in recent years been a transfer of power (I might say ‘control’) from government to corporations–especially Big Tech (Google, Amazon, Facebook, Apple). Frank Pasquale makes the argument for this in a recent piece. He writes and speaks with a particular policy agenda that is far better researched than this blog post. But a good deal of the work is framed around the surprise that ‘governance’ might shift to a private company in the first place. This is a framing that will always be striking to those who are invested in the politics of the state; the very word “govern” is unmarkedly used for formal government and then surprising when used to refer to something else.

Managerialism, then, may be a way of pointing to an arrangement in which more power is held by non-state actors. Crucially, though, managerialism is not the same thing as neoliberalism, because neoliberalism is based on laissez-faire market ideology, and contemporary information infrastructure oligopolies look nothing like laissez-faire markets! Calling today’s transfer of power from government to corporation “neoliberalism” is quite anachronistic and misleading.

Perhaps managerialism, like polyarchy, is a descriptive term of a set of political conditions that does not represent an ideal, but a reality with potential to become an ideal. In that case, it’s worth investigating managerialism more carefully and determining what it is and isn’t, and why it is on the rise.

trust issues and the order of law and technology cf @FrankPasquale

I’ve cut to the last chapter of Pasquale’s The Black Box Society, “Towards an Intelligible Society.” I’m interested in where the argument goes. I see now that I’ve gotten through it that the penultimate chapter has Pasquale’s specific policy recommendations. But as I’m not just reading for policy and framing but also for tone and underlying theoretical commitments, I think it’s worth recording some first impressions before doubling back.

These are some points Pasquale makes in the concluding chapter that I wholeheartedly agree with:

  • A universal basic income would allow more people to engage in high risk activities such as the arts and entrepreneurship and more generally would be great for most people.
  • There should be publicly funded options for finance, search, and information services. A great way to provide these would be to fund the development of open source algorithms for finance and search. I’ve been into this idea for so long and it’s great to see a prominent scholar like Pasquale come to its defense.
  • Regulatory capture (or, as he elaborates following Charles Lindblom, “regulatory circularity”) is a problem. Revolving door participation in government and business makes government regulation an unreliable protector of the public interest.

There is quite a bit in the conclusion about the specifics of regulating the finance industry. There is an impressive amount of knowledge presented here, and I’ll admit much of it is over my head. I’ll probably have a better sense of it once I read the chapter that deals specifically with finance.

There are some things that I found bewildering or off-putting.

For example, there is a section on “Restoring Trust” that argues that an important problem is that we don’t trust the reputation and search industries enough. His solution is to increase the penalties that the FTC and FCC can impose on Google and Facebook for, e.g., privacy violations. The current penalties are too trivial to be an effective deterrent. But, Pasquale argues,

It is a broken enforcement model, and we have black boxes to thank for much of this. People can’t be outraged by what they can’t understand. And without some public concern about the trivial level of penalties for lawbreaking here, there are no consequences for the politicians ultimately responsible for them.

The logic here is a little mad. Pasquale is saying that people are not outraged enough by search and reputation companies to demand harsher penalties, and that this is a problem because we don’t trust these companies enough. The solution is to convince people to trust these companies less–get outraged by them–in order to get regulators to punish the companies more.

This is a bit troubling, but makes sense based on Pasquale’s theory of regulatory circularity, which turns politics into a tug-of-war between interests:

The dynamic of circularity teaches us that there is no stable static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by respecting privacy or investing in sustainable growth), or industry is pushing regulators to promote its own interests.

There’s a simplicity to this that I distrust. It suggests, for one, that there are no public pressures on industry besides the government, such as consumers’ buying power. A lot of Pasquale’s arguments depend on the monopolistic power of certain tech giants. But while network effects are strong, it’s not clear that consumers have no market buy-in at all. In many cases tech giants compete with each other even when it looks like they aren’t. For example, many, many people have both Facebook and Gmail accounts. Since there is somewhat redundant functionality in both, consumers can rather seamlessly allocate their time, which is tied to advertising revenue, according to which service they feel serves them better, or which has the better reputation. So social media (which is a bit like a combination of a search and reputation service) is not a monopoly. Similarly, if people have multiple search options available to them because, say, they have both Siri on their smartphone and can search Google directly, then that provides an alternative search market.

Meanwhile, government officials are also often self-interested. If there is a road to hell for industry that is to provide free web services to people to attain massive scale, then abuse economic lock-in to extract value from customers, then lobby for further rent-seeking, there is a similar road to hell in government. It starts with populist demagoguery, leads to stable government appointment, and then leverages that power for rents in status.

So, power is power. Everybody tries to get power. The question is what you do once you get it, right?

Perhaps I’m reading between the lines too much. Of course, my evaluation of the book should depend most on the concrete policy recommendations which I haven’t gotten to yet. But I find it unfortunate that what seems to be a lot of perfectly sound history and policy analysis is wrapped in a politics of professional identity that I find very counterproductive. The last paragraph of the book is:

Black box services are often wondrous to behold, but our black-box society has become dangerously unstable, unfair, and unproductive. Neither New York quants nor California engineers can deliver a sound economy or a secure society. Those are the tasks of a citizenry, which can perform its job only as well as it understands the stakes.

Implicitly, New York quants and California engineers are not citizens, to Pasquale, a law professor based in Maryland. Do all real citizens live around Washington, DC? Are they all lawyers? If the government were to start providing public information services, either by hosting them themselves or by funding open source alternatives, would he want everyone designing these open algorithms (who would be quants or engineers, I presume) to move to DC? Do citizens really need to understand the stakes in order to get this to happen? When have citizens, en masse, understood anything, really?

Based on what I’ve read so far, The Black Box Society is an expression of a lack of trust in the social and economic power associated with quantification and computing that took off in the past few dot-com booms. Since expressions of distrust of these industries are nothing new, one might wonder (under the influence of Foucault) how the quantified order and the critique of the quantified order manage to coexist and recreate a system of discipline that includes both and maintains its power as a complex of superficially agonistic forces. I give sincere credit to Pasquale for advocating both serious income redistribution and public investment in open technology as ways of disrupting that order. But when he falls into the trap of engendering partisan distrust, he loses my confidence.

“Transactions that are too complex…to be allowed to exist.” cf @FrankPasquale

I stand corrected; my interpretation of Pasquale in my last post was too narrow. Having completed Chapter One of The Black Box Society (TBBS), Pasquale does not take the naive view that all organizational secrecy should be abolished, as I might have once. Rather, his is a more nuanced perspective.

First, Pasquale distinguishes between three “critical strategies for keeping black boxes closed”, or opacity, “[Pasquale’s] blanket term for remediable incomprehensibility”:

  • Real secrecy “establishes a barrier between hidden content and unauthorized access to it.”
  • Legal secrecy “obliges those privy to certain information to keep it secret.”
  • Obfuscation “involves deliberate attempts at concealment when secrecy has been compromised.”

Cutting to the chase by looking at the Pasquale and Bracha “Federal Search Commission” (2008) paper that a number of people have recommended to me, it appears (in my limited reading so far) that Pasquale’s position is not that opacity in general is a problem (there are of course important uses of opacity that serve the public interest, such as confidentiality). Rather, despite these legitimate uses of opacity, there is also a need for public oversight, perhaps through federal regulation: the federal government can serve the public interest better than the imperfect market for search can on its own.

There is perhaps a tension between this 2008 position and what is expressed in Chapter 1 of TBBS in the section “The One-Way Mirror,” which gets, I dare say, a little conspiratorial about The Powers That Be: “We are increasingly ruled by what former political insider Jeff Connaughton called ‘The Blob,’ a shadowy network of actors who mobilize money and media for private gain, whether acting officially on behalf of business or of government.” Here, Pasquale appears to espouse a strong theory of regulatory capture from which, were we to insist on consistency, a Federal Search Commission would presumably not be exempt. Hence, perhaps, the role of TBBS in stirring popular sentiment to put political pressure on the elites of The Blob.

Though it is a digression, I will note, since it is a pet peeve of mine, Pasquale’s objection to mathematized governance:

“Technocrats and managers cloak contestable value judgments in the garb of ‘science’: thus the insatiable demand for mathematical models that reframe the subtle and subjective conclusions (such as the worth of a worker, service, article, or product) as the inevitable dictate of salient, measurable data. Big data driven decisions may lead to unprecedented profits. But once we use computation not merely to exercise power over things, but also over people, we need to develop a much more robust ethical framework than ‘the Blob’ is now willing to entertain.”

That this sentiment that scientists should not be making political decisions has been articulated since at least as early as Hannah Arendt’s 1958 The Human Condition is an indication that there is nothing particular to Big Data about this anxiety. And indeed, if we think about ‘computation’ as broadly as mathematized, algorithmic thought, then its use for control over people-not-just-things has an even longer history. Lukacs’ 1923 “Reification and the Consciousness of the Proletariat” is a profound critique of Tayloristic scientific factory management that is getting close to being a hundred years old.

Perhaps a robust ethics of quantification has been in the works for some time as well.

Moving past this, by the end of Chapter 1 of TBBS Pasquale gives us the outline of the book and the true crux of his critique, which is the problem of complexity. Whether or not regulators are successful in opening the black boxes of Silicon Valley or Wall Street (or the branches of government that are complicit with Silicon Valley and Wall Street), their efforts will be in vain if what they get back from the organizations they are trying to regulate is too complex for them to understand.

Following the thrust of Pasquale’s argument, we can see that for him, complexity is the result of obfuscation. It is therefore a source of opacity, which, as noted, he has defined as “remediable incomprehensibility”. Pasquale promises that by the end of the book he will give us a game plan for creating, legally, the Intelligible Society. “Transactions that are too complex to explain to outsiders may well be too complex to be allowed to exist.”

This gets us back to the question we started with, which is whether this complexity and incomprehensibility is avoidable. Suppose we were to legislate against institutional complexity: what would that cost us?

Mathematical modeling gives us the tools we need to analyze these kinds of questions. Information theory, the theory of computation, and complexity theory are all foundational to the technology of telecommunications and data science. The people with expertise in understanding complexity, and the limits of our ability to control it, are precisely the people who make the ubiquitous algorithms on which society depends today. But this kind of theory rarely makes it into “critical” literature such as TBBS.
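To give one concrete example of the kind of handle this theory offers: Kolmogorov complexity (the length of the shortest description of an object) is uncomputable, but compressed size is a computable upper bound on it. A hedged Python sketch, with made-up “contracts” standing in for transactions:

```python
import random
import string
import zlib

# Compressed size is a computable upper bound on description length -- one
# operational handle complexity theory offers for "too complex to explain".
def description_cost(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8"), level=9))

# A repetitive (easily summarized) "contract" vs. an incompressible one.
simple_contract = "pay 5% interest annually; " * 40
random.seed(1)
opaque_contract = "".join(random.choice(string.printable) for _ in range(1000))

print(description_cost(simple_contract))   # small: highly compressible
print(description_cost(opaque_contract))   # close to its raw length
```

On this crude proxy, a repetitive contract has a short description no matter how long its text is, while an incompressible one cannot be summarized at all, which is one way to formalize “too complex to explain to outsiders.”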

I’m drawn to the example of The Social Media Collective’s Critical Algorithm Studies Reading List, which lists Pasquale’s TBBS among many other works, because it opens with precisely the disciplinary gatekeeping that creates what I fear is the blind spot I’m pointing to:

This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

As a result, our list does not contain much writing by computer scientists, nor does it cover potentially relevant work on topics such as quantification, rationalization, automation, software more generally, or big data, although these interests are well-represented in the reference sections of the essays themselves.

This area is growing in size and popularity so quickly that many contributions are popping up without reference to work from disciplinary neighbors. One goal for this list is to help nascent scholars of algorithms to identify broader conversations across disciplines and to avoid reinventing the wheel or falling into analytic traps that other scholars have already identified.

This reading list is framed as a tool for scholars, which it no doubt is. But if contributors to this field of scholarship aspire, as Pasquale does, for “critical algorithm studies” to have real policy ramifications, then this disciplinary wall must fall (as I’ve argued elsewhere).

organizational secrecy and personal privacy as false dichotomy cf @FrankPasquale

I’ve turned from page 2 to page 3 of The Black Box Society (I can be a slow reader). Pasquale sets up the dichotomy on which the drama of the book hinges like so:

But while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods”, and gag rules, our own lives are increasingly open books. Everything we do online is recorded; the only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

That incongruity is the focus of this book.

This is a rhetorically powerful paragraph, and it captures a lot of the trepidation people have about the power of large organizations relative to themselves.

I have been inclined to agree with this perspective for a lot of my life. I used to be the kind of person who thought Everything Should Be Open. Since then, I’ve developed what I think is a more nuanced view of transparency: some secrecy is necessary. It can be especially necessary for powerful organizations and people.

Why?

Well, it depends on the physical properties of information. (Here is an example of how a proper understanding of the mechanics of information can support the transcendent project as opposed to a merely critical project).

Any time you interact with something or somebody else in a meaningful way, you affect each other’s state in probabilistic space. That means there has been some kind of flow of information. If an organization interacts with a lot of people, it is going to absorb information about a lot of people. Recording this information as ‘data’ is something that has been done for a long time, because that is what allows organizations to act intelligently vis-à-vis the people they interact with. So businesses, financial institutions, and governments recording information about people is nothing new.
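The claim that meaningful interaction entails information flow can be made precise with mutual information. Here is a toy sketch in Python; the joint distribution is entirely made up (an organization’s record modeled as a 90%-accurate copy of a person’s binary state), not anything from Pasquale:

```python
from math import log2

# Toy joint distribution p(person_state, org_record): the organization's
# record is a noisy copy of the person's binary state (90% accurate).
p = {
    ("a", "a"): 0.45, ("a", "b"): 0.05,
    ("b", "a"): 0.05, ("b", "b"): 0.45,
}

def marginal(joint, axis):
    out = {}
    for (x, y), v in joint.items():
        k = (x, y)[axis]
        out[k] = out.get(k, 0.0) + v
    return out

px, py = marginal(p, 0), marginal(p, 1)

# Mutual information I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x)p(y)) ),
# which is zero exactly when the variables are independent.
mi = sum(v * log2(v / (px[x] * py[y])) for (x, y), v in p.items() if v > 0)

print(f"I(person; record) = {mi:.3f} bits")  # > 0: the record carries information
```

Mutual information is zero exactly when the two variables are independent, i.e., when the interaction leaves no statistical trace in the record.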

Pasquale suggests that this recording is a threat to our privacy, and that the secrecy of the organizations that do the recording gives them power over us. But this is surely a false dichotomy. Why? Because if an organization records information about a lot of people, and then doesn’t maintain some kind of secrecy, then that information is no longer private! To, like, everybody else. In other words, maintaining secrecy is one way of ensuring confidentiality, which is surely an important part of privacy.

I wonder what happens if we continue to read The Black Box Society with this link between secrecy, confidentiality, and privacy in mind.

Is the opacity of governance natural? cf @FrankPasquale

I’ve begun reading Frank Pasquale’s The Black Box Society on the recommendation that it’s a good place to start if I’m looking to focus a defense of the role of algorithms in governance.

I’ve barely started and already found lots of juicy material. For example:

Gaps in knowledge, putative and real, have powerful implications, as do the uses that are made of them. Alan Greenspan, once the most powerful central banker in the world, claimed that today’s markets are driven by an “unredeemably opaque” version of Adam Smith’s “invisible hand,” and that no one (including regulators) can ever get “more than a glimpse at the internal workings of the simplest of modern financial systems.” If this is true, libertarian policy would seem to be the only reasonable response. Friedrich von Hayek, a preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to benevolent government intervention in the economy.

But what if the “knowledge problem” is not an intrinsic aspect of the market, but rather is deliberately encouraged by certain businesses? What if financiers keep their doings opaque on purpose, precisely to avoid and confound regulation? That would imply something very different about the merits of deregulation.

The challenge of the “knowledge problem” is just one example of a general truth: What we do and don’t know about the social (as opposed to the natural) world is not inherent in its nature, but is itself a function of social constructs. Much of what we can find out about companies, governments, or even one another, is governed by law. Laws of privacy, trade secrecy, the so-called Freedom of Information Act–all set limits to inquiry. They rule certain investigations out of the question before they can even begin. We need to ask: To whose benefit?

There are a lot of ideas here. Trying to break them down:

  1. Markets are opaque.
  2. If markets are naturally opaque, that is a reason for libertarian policy.
  3. If markets are not naturally opaque but are instead opaque on purpose, then that’s a reason to regulate in favor of transparency.
  4. As a general social truth, the social world is not naturally opaque but rather opaque or transparent because of social constructs such as law.

We are meant to conclude that markets should be regulated for transparency.

The most interesting claim to me is what I’ve listed as the fourth one, as it conveys a worldview that is both disputable and which carries with it the professional biases we would expect of the author, a Professor of Law. While there are certainly many respects in which this claim is true, I don’t yet believe it has the force necessary to carry the whole logic of this argument. I will be particularly attentive to this point as I read on.

The danger I’m on the lookout for is one where the complexity of the integration of society, which following Beniger I believe to be a natural phenomenon, is treated as a politically motivated social construct and therefore something that should be changed. It is really only the part after the “and therefore” which I’m contesting. It is possible for politically motivated social constructs to be natural phenomena. All institutions have winners and losers relative to their power. Who would a change in policy towards transparency in the market benefit? If opacity is natural, it would shift the opacity to some other part of society, empowering a different group of people. (Possibly lawyers).

If opacity is necessary, then perhaps we could read The Black Box Society as an expression of the general problem of alienation. It is way premature for me to attribute this motivation to Pasquale, but it is a guiding hypothesis that I will bring with me as I read the book.