
Autonomy as link between privacy and cybersecurity

A key aspect of the European approach to privacy and data protection regulation is that it’s rooted in the idea of an individual’s autonomy. Unlike the American view of privacy, which suggests that privacy is important only because its violation implies some kind of substantive harm, such as reputational loss or discrimination, European law understands personal data to matter because of its relevance to a person’s self-control.

Autonomy is, etymologically, “self-law”. It is traditionally associated with the concept of rationality and the ability to commit oneself to duty. My colleague Jake Goldenfein argues that autonomy is the principle that one has the power to express one’s own narrative about oneself, and for that narrative to have power. Uninterpretable and unaccountable surveillance, “nudging”, manipulation, profiling, social sorting, and so on are all, in a sense, attacks on autonomy. They interfere with the individual’s capacity for self-rule.

It is rarer to connect the idea of autonomy to cybersecurity, though here too the etymology weighs in favor of it. Cyber- has its root in the Greek kybernetes: steersman, governor, pilot, or rudder. To be secure means to be free from threat. So cybersecurity, for a person or an organization, is the freedom of their self-governance from external threat. Cybersecurity is the condition of being free to control oneself, to be autonomous.

Understood in this way, privacy is just one kind of cybersecurity: the cybersecurity of the individual person. We can speak additionally of the cybersecurity of an infrastructure, such as a power grid, of an organization, such as a bank, or of a device, such as a smartphone. Both the privacy and cybersecurity discussions implicate questions about the ontology of the entities involved and their ability to control themselves and to control each other.

The Crevasse: a meditation on accountability of firms in the face of opacity as the complexity of scale

To recap:

(A1) Beneath corporate secrecy and user technical illiteracy, a fundamental source of opacity in “algorithms” and “machine learning” is the complexity of scale, especially scale of data inputs. (Burrell, 2016)

(A2) The opacity of the operation of companies using consumer data makes those consumers unable to engage with them as informed market actors. The consequence has been a “free fall” of market failure (Strandburg, 2013).

(A3) Ironically, this “free” fall has been “free” (zero price) for consumers; they appear to get something for nothing without knowing what has been given up or changed as a consequence (Hoofnagle and Whittington, 2013).

Comments:

(B1) The above line of argument conflates “algorithms”, “machine learning”, “data”, and “tech companies”, as is common in the broad discourse. That this conflation is possible speaks to the ignorance of the scholarly position on these topics, an ignorance that is implied by corporate secrecy, technical illiteracy, and complexity of scale simultaneously. We can, if we choose, distinguish between these factors analytically. But because, from the standpoint of the discourse, the internals are unknown, the general indication of a ‘black box’ organization is intuitively compelling.

(B1a) Giving in to the lazy conflation is an error because it prevents informed and effective praxis. If we do not distinguish between a corporate entity and its multiple internal human departments and technical subsystems, then we may confuse ourselves into thinking that a fair and interpretable algorithm can give us a fair and interpretable tech company. Nothing about the former guarantees the latter because tech companies operate in a larger operational field.

(B2) Opacity as the complexity of scale, a property of the functioning of machine learning algorithms, is also a property of the functioning of sociotechnical organizations more broadly. Universities, for example, are often opaque to themselves, because of their own internal complexity and scale. This is because the mathematics governing opacity as a function of complexity and scale are the same in both technical and sociotechnical systems (Benthall, 2016).

(B3) If we discuss the complexity of firms, as opposed to the complexity of algorithms, we should conclude that firms that are complex due to the scale of their operations and data inputs (including their number of customers) will be opaque, and will therefore have a strategic advantage in the market against less complex market actors (consumers) with stiffer bounds on rationality.

(B4) In other words, big, complex, data rich firms will be smarter than individual consumers and outmaneuver them in the market. That’s not just “tech companies”. It’s part of the MO of every firm to do this. Corporate entities are “artificial general intelligences” and they compete in a complex ecosystem in which consumers are a small and vulnerable part.

Twist:

(C1) Another source of opacity in data is that the meaning of data comes from the causal context that generates it. (Benthall, 2018)

(C2) Learning causal structure from observational data is hard, both in terms of being data-intensive and in terms of computational complexity (NP-hard). (cf. Friedman et al., 1998) A toy illustration of the size of the search space follows this list.

(C3) Internal complexity, for a firm, is not sufficient to make it “all-knowing” about the data that is coming into it; the firm faces epistemic challenges of secrecy, illiteracy, and scale with respect to external complexity.

(C4) This is why many applications of machine learning are overrated and so many “AI” products kind of suck.

(C5) There is, in fact, an epistemic crevasse between all autonomous entities, each containing its own complexity and constituting a larger ecological field that is the external/being/environment for any other autonomy.
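To make (C2) concrete: before any statistical estimation even begins, the space of candidate causal structures explodes combinatorially. Below is a minimal Python sketch, my own illustration rather than anything from Friedman et al., that counts the labeled directed acyclic graphs (DAGs) on n variables using Robinson’s recurrence:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_dags(n: int) -> int:
    """Count the labeled DAGs on n nodes via Robinson's recurrence."""
    if n == 0:
        return 1
    # Inclusion-exclusion over the k nodes that have no incoming edges.
    return sum(
        (-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
        for k in range(1, n + 1)
    )

for n in range(1, 8):
    print(n, num_dags(n))
# 1 1, 2 3, 3 25, 4 543, 5 29281, 6 3781503, 7 1138779265
```

Seven variables already admit over a billion candidate DAGs. Score-based search over this space is part of why structure learning is computationally hard in general, and why practical algorithms lean on heuristics and constraints.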

To do:

The most promising direction based on this analysis is a deeper reading of transaction cost economics as a ‘theory of the firm’. That is where the idea that what the Internet changed most is search costs (a kind of transaction cost) should find its formalization.

It would be nice if those insights could be expressed in the mathematics of “AI”.

There’s still a deep idea in here that I haven’t yet found the articulation for, something to do with autopoiesis.

References

Benthall, Sebastian. “The Human is the Data Science.” Workshop on Developing a Research Agenda for Human-Centered Data Science, Computer Supported Cooperative Work, 2016.

Benthall, Sebastian. Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics. Ph.D. dissertation, University of California, Berkeley, 2018.

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3.1 (2016): 2053951715622512.

Friedman, Nir, Kevin Murphy, and Stuart Russell. “Learning the structure of dynamic probabilistic networks.” Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., 1998.

Hoofnagle, Chris Jay, and Jan Whittington. “Free: accounting for the costs of the internet’s most popular price.” UCLA L. Rev. 61 (2013): 606.

Strandburg, Katherine J. “Free fall: The online market’s consumer preference disconnect.” U. Chi. Legal F. (2013): 95.

open source sustainability and autonomy, revisited

Some recent chats with Chris Holdgraf and colleagues at NYU interested in “critical digital infrastructure” have gotten me thinking again about the sustainability and autonomy of open source projects.

I’ll admit to having had naive views about this topic in the past. Certainly, doing empirical data science work on open source software projects has given me a firmer perspective on things. Here are what I feel are the hardest-earned insights on the matter:

  • There is tremendous heterogeneity in open source software projects. Almost all quantitative features of these projects follow log-normal distributions (a quick way to check this is sketched after this list). This suggests that the keys to open source software success are myriad and exogenous (how the technology fits into the larger ecosystem, how outside funding and recognition are accomplished, …) rather than endogenous (community policies, etc.). While many open source projects start as hobby or unpaid academic projects, those that go on to be successful find one or more funding sources. This funding is an exogenous factor.
  • The most significant exogenous factor in an open source software project’s success is the industrial organization of private tech companies. Developing an open technology is part of the strategic repertoire of these companies: for example, to undermine the position of a monopolist, developing an open source alternative decreases barriers to market entry and allows for a more competitive field in that sector. Another example: Google arguably funded Mozilla for so long to deflect antitrust action over Google Chrome.
  • There is some truth to Chris Kelty’s idea of open source communities as recursive publics: cultures with an autonomy that can assert political independence at the boundaries of other political forces. This autonomy comes from the way developers of OSS acquire specific and valuable human capital in the process of working with the software and its communities; the way institutions begin to depend on OSS as part of their technical stack, creating an installed base; and the way many different institutions may support the same project, creating competition for the scarce human capital of the developers. Essentially, at the point where the software, the skills needed to deploy it effectively, and the community of people with those skills are self-organized, the OSS community has gained some economic and political autonomy. Often this autonomy will manifest itself in some kind of formal organization, whether a foundation, a non-profit, or a company like Red Hat, Canonical, or Enthought. If the community is large and diverse enough it may have multiple organizations supporting it. This is in principle good for the autonomy of the project, but may also reflect political tensions that can lead to a schism or fork.
  • In general, since OSS development is internally most often very fluid, with the primary regulatory mechanism being the fork, the shape of OSS communities is more determined by exogenous factors than endogenous ones. When exogenous demand for the technology rises, the OSS community can find itself with a ‘surplus’, which can be channeled into autonomous operations.
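As promised in the first bullet, here is a minimal sketch of the kind of log-normality check behind that claim. The “commit counts” are synthetic stand-ins for a metric that would, in practice, be scraped from a forge such as GitHub:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical project metric; real data would come from, e.g., the GitHub API.
commits = rng.lognormal(mean=4.0, sigma=1.5, size=5000)

# If a quantity is log-normal, its logarithm should look normal.
stat, p = stats.normaltest(np.log(commits))
print(f"normality test on log(commits): stat={stat:.2f}, p={p:.3f}")

# Fit the log-normal directly for its parameters.
shape, loc, scale = stats.lognorm.fit(commits, floc=0)
print(f"fitted sigma={shape:.2f}, median={scale:.1f}")
```

Testing log(x) for normality keeps the check simple; a serious analysis would compare several candidate heavy-tailed distributions before concluding log-normality.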

Responsible participation in complex sociotechnical organizations circa 1977 cc @Aelkus @dj_mosfett

Many extant controversies around technology were documented in 1977 by Langdon Winner in Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. I would go so far as to say most extant controversies, though I don’t think he addresses anything having to do with gender, for example.

Consider this discussion of moral education of engineers:

“The problems for moral agency created by the complexity of technical systems cast new light on contemporary calls for more ethically aware scientists and engineers. According to a very common and laudable view, part of the education of persons learning advanced scientific skills ought to be a full comprehension of the social implications of their work. Enlightened professionals should have a solid grasp of ethics relevant to their activities. But, one can ask, what good will it do to nourish this moral sensibility and then place the individual in an organizational situation that mocks the very idea of responsible conduct? To pretend that the whole matter can be settled in the quiet reflections of one’s soul while disregarding the context in which the most powerful opportunities for action are made available is a fundamental misunderstanding of the quality genuine responsibility must have.”

A few thoughts.

First, this reminds me of a conversation @Aelkus, @dj_mosfett, and I had the other day. The question was: who should take moral responsibility for the failures of sociotechnical organizations (conceived of, for example, as corporations running a web service technology)?

Second, I’ve been convinced again lately (reminded?) of the importance of context. I’ve been looking into Chaiklin and Lave’s Understanding Practice again, which is largely about how important it is to take context into account when studying any social system that involves learning. More recently I’ve been looking into Nissenbaum’s contextual integrity theory. According to her theory, which is now widely used in the design and legal privacy literature, norms of information flow are justified by the purpose of the context in which they are situated. So, for example, in an ethnographic context, the norms of information flow most critical for maintaining trusted relationships with one’s subjects are the most important ones.
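Contextual integrity has been given formal treatments elsewhere (notably by Barth, Datta, Mitchell, and Nissenbaum), roughly as norms over parameterized information flows. Here is a minimal sketch of that idea in Python; the five-parameter representation follows that tradition loosely, and the example context and norm are invented purely for illustration:

```python
from dataclasses import dataclass, astuple

@dataclass(frozen=True)
class Flow:
    """An information flow: who sends what, about whom, to whom, and how."""
    sender: str
    receiver: str
    subject: str
    attribute: str
    transmission_principle: str

# Invented norms for a hypothetical "ethnography" context:
# a flow preserves contextual integrity only if some norm permits it.
ETHNOGRAPHY_NORMS = [
    Flow("researcher", "research_team", "participant", "*", "confidentiality"),
]

def matches(norm: Flow, flow: Flow) -> bool:
    """A norm matches a flow field-by-field; '*' is a wildcard."""
    return all(n in ("*", f) for n, f in zip(astuple(norm), astuple(flow)))

def permitted(flow: Flow, norms: list[Flow]) -> bool:
    return any(matches(norm, flow) for norm in norms)

leak = Flow("researcher", "journalist", "participant", "identity", "publication")
print(permitted(leak, ETHNOGRAPHY_NORMS))  # False: the context forbids this flow
```

The toy makes the next question sharper: permitted() returns whatever answer the context’s norms encode, and the norms are justified by the purposes of whoever defines the context.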

But in a corporate context, where the purpose of one’s context is to maximize shareholder value, wouldn’t those who keep the moral failures of their organization shrouded in the complexity of its machinery be perfectly justified in their actions?

I’m not seriously advocating for this view, of course. I’m just asking it rhetorically, as it seems like a potential weakness in contextual integrity theory that it does not endorse the actions of, for example, corporate whistleblowers. Or is it? Are corporate whistleblowers the same as national whistleblowers? Or Wikileaks?

One way around this would be to consider contexts to be nested or overlapping, with ethics contextualized to those “spaces.” So a corporate whistleblower would be doing something bad for the company, but good for society, assuming that there wasn’t some larger social cost to the loss of confidence in that company. (It occurs to me that in this sort of situation, perhaps threatening internally to blow the whistle unless the problem is solved would be the responsible strategy. As they say,

Making progress with the horns is permissible
Only for the purpose of punishing one’s own city.

)

Anyway, it’s a cool topic to think about: what an information-theoretic account of responsibility would look like. That’s tied to autonomy. I bet it’s doable.

Bourdieu and Horkheimer; towards an economy of control

It occurred to me as I looked over my earliest notes on Horkheimer (almost a year ago!) that Bourdieu’s concept of science as a social field that formalizes and automates knowledge is Horkheimer’s idea of hell.

The danger Horkheimer (and so many others) saw in capitalist, instrumentalized, scientific society was that it would alienate and overwhelm the individual.

It is possible that society would alienate the individual anyway, though. For example, in the household of antiquity, were slaves unalienated? The privilege of autonomy has always been rare, yet disproportionately articulated as normal, even as a right. In a sense, Western democracies and republics exist to guarantee autonomy to their citizens. In late modern democracies, autonomy varies depending on one’s role in society, which is tied to (economic, social, symbolic, etc.) capital.

So maybe the horror of Horkheimer, alienated by scientific advance, is the horror of one whose capital was being devalued by science. His scholarship, his erudition, were isolated and deemed irrelevant by the formal reasoners who had come to power.

As I write this, I am painfully aware that I have spent a lot of time in graduate school reading books and writing about them when I could have been practicing programming and learning more mathematics. My aspiration is to be a scientist, and I am well aware that that requires one to mathematically formalize one’s findings, or, equivalently, to program them into a computer. (It goes without saying that computer programming is formalism, is automation, and so its central role in contemporary science or ‘data science’ is almost given to it by definition. It could not have been otherwise.)

Somehow I have been provoked into investing myself in a weaker form of capital, the benefit of which is the understanding that I write here, now.

Theoretically, the point of doing all this work is to be able to identify a societal value and formalize it so that it can be captured in a technical design. Perhaps autonomy is this value. Another might call it freedom. So once again I am reminded of Simone de Beauvoir’s philosophy of science, which has been correct all along.

But perhaps de Beauvoir was naive about the political implications of technology. Science discloses possibilities, but the opportunities are distributed unequally because science is socially situated. Inequality leads to more alienation, not less, for all but the scientists. Meanwhile, autonomy is not universally valued; some would prefer the comforts of society, of family structure. If freed from society, they would choose to reenter it. Much of one’s preferences must come from habitus, no?

I am indeed reaching the limits of my ability to consider the problem discursively. The field is too multidimensional, too dynamic. The proper next step is computer simulation.

cross-cultural links between rebellion and alienation

In my last post I noted that the contemporary American problem that the legitimacy of the state is called into question by distributional inequality is a specifically liberal concern based on certain assumptions about society: that it is a free association of producers who are otherwise autonomous.

Looking back to Arendt, we can find the roots of modern liberalism in the polis of antiquity, where democracy was based on the free association of landholding men whose estates gave them autonomy from each other. Since then, economics, the science that once concerned itself with managing the household (oikos, house + nomos, management), has been elevated to the primary concern of the state and the organizational principle of society. One way to see the conflict between liberalism and social inequality is as the tension between the ideal of freely associating citizens who together accomplish deeds and the reality of societal integration, with its impositions on personal freedom and its unequal functional differentiation.

Historically, material autonomy was a condition for citizenship. The promise of liberalism is universal citizenship, or political agency. At first blush, to accomplish this, either material autonomy must be guaranteed for all, or citizenship must be decoupled from material conditions altogether.

The problem with this model is that societal agency, as opposed to political agency, is always conditioned both materially and socially (does this distinction need to be made?). The progressive political drive has recognized this in its unmasking and contestation of social privilege. The populist right-wing political drive has recognized it in its accusations that the formal political apparatus has been captured by elite politicians. Those aspects of citizenship that are guaranteed as universal, the vote and certain liberties, are insufficient for the effective social agency on which political power truly depends. And everybody knows it.

This narrative is grounded in the experience of the United States and, going back further, in the history of “The West”. It appears to be a perennial problem across cultural time. There is some evidence that it is also a problem across cultural space. Hannah Arendt argues in On Violence (1969) that the attraction of using violence against a ruling bureaucracy (the political hypostatization of societal alienation more generally) is cross-cultural.

“[T]he greater the bureaucratization of public life, the greater will be the attraction of violence. In a fully developed bureaucracy there is nobody left with whom one can argue, to whom one can present grievances, on whom the pressures of power can be exerted. Bureaucracy is the form of government in which everybody is deprived of political freedom, of the power to act; for the rule by Nobody is not no-rule, and where all are equally powerless we have tyranny without a tyrant. The crucial feature of the student rebellions around the world is that they are directed everywhere against the ruling bureaucracy. This explains what at first glance seems so disturbing–that the rebellions in the East demand precisely those freedoms of speech and thought that the young rebels in the West say they despise as irrelevant. On the level of ideologies, the whole thing is confusing: it is much less so if we start from the obvious fact that the huge party machines have succeeded everywhere in overruling the voice of citizens, even in countries where freedom of speech and association is still intact.”

The argument here is that the moral instability resulting from alienation from politics and society is a universal problem of modernity that transcends ideology.

This is a big problem if we keep turning decision-making authority over to algorithms.