Digifesto

Tag: jurgen habermas

Hildebrandt (2013) on double contingency in Parsons and Luhmann

I’ve tried to piece together double contingency before, and am finding myself re-encountering these ideas in several projects. I just now happened on this very succinct account of double contingency in Hildebrandt (2013), which I wanted to reproduce here.

Parsons was less interested in personal identity than in the construction of social institutions as proxies for the coordination of human interaction. His point is that the uncertainty that is inherent in the double contingency requires the emergence of social structures that develop a certain autonomy and provide a more stable object for the coordination of human interaction. The circularity that comes with the double contingency is thus resolved in the consensus that is consolidated in sociological institutions that are typical for a particular culture. Consensus on the norms and values that regulate human interaction is Parsons’s solution to the problem of double contingency, and thus explains the existence of social institutions. As could be expected, Parsons’s focus on consensus and his urge to resolve the contingency have been criticized for its ‘past-oriented, objectivist and reified concept of culture’, and for its implicitly negative understanding of the double contingency.

This paragraph says a lot: about “the problem” posed by “the double contingency”, about the possibility of a solution through consensus around norms and values, and about the rejection of Parsons. It is striking that in the first pages of this article, Hildebrandt begins by challenging “contextual integrity” as a paradigm for privacy (a nod, if not a direct reference, to Nissenbaum (2009)), astutely pointing out that this paradigm makes privacy a matter of delinking data so that it is not reused across contexts. Nissenbaum’s contextual integrity theory depends rather critically on consensus around norms and values; the appropriateness of information norms is a feature of sociological institutions accountable ultimately to shared values. The aim of Parsons, and to some extent also Nissenbaum, is to remove the contingency by establishing reliable institutions.

The criticism of Parsons as being ‘past-oriented, objectivist and reified’ is striking. It opens the question whether Parsons’s concept of culture is too past-oriented, or if some cultures, more than others, may be more past-oriented, rigid, or reified. Consider a continuum of sociological institutions ranging from the rigid, formal, bureaucratized, and traditional to the flexible, casual, improvisational, and innovative. One extreme of this continuum is better conceptualized as “past-oriented” than the other. Furthermore, when cultural evolution becomes embedded in infrastructure, that culture is no doubt more “reified”, not just conceptually but actually, via its transformation into durable and material form. That Hildebrandt offers this criticism of Parsons perhaps foreshadows her later work on the problems of smart information communication infrastructure (Hildebrandt, 2015). Smart infrastructure poses, to those with this orientation, a problem: it reduces double contingency by being, in fact, a reification of sociological institutions.

“Reification” is a pejorative word in sociology. It refers to a kind of ideological category error with unfortunate social consequences. The more positive view of this kind of durable, even material, culture would be found in Habermas, who would locate legitimacy precisely in the process of consensus. For Habermas, the ideals of legitimate consensus through discursively rational communicative action find their imperfect realization in the sociological institution of deliberative democratic law. This is the intellectual inheritor of Kant’s ideal of “perpetual peace”. It is, like the European Union, supposed to be a good thing.

So what about Brexit, so to speak?

Double contingency returns with a vengeance in Luhmann, who famously “debated” Habermas (a truer follower of Parsons), and probably won that debate. Hildebrandt (2013) discusses:

A more productive understanding of double contingency may come from Luhmann (1995), who takes a broader view of contingency; instead of merely defining it in terms of dependency he points to the different options open to subjects who can never be sure how their actions will be interpreted. The uncertainty presents not merely a problem but also a chance; not merely a constraint but also a measure of freedom. The freedom to act meaningfully is constraint [sic] by earlier interactions, because they indicate how one’s actions have been interpreted in the past and thus may be interpreted in the future. Earlier interactions weave into Luhmann’s (1995) emergent social systems, gaining a measure of autonomy — or resistance — with regard to individual participants. Ultimately, however, social systems are still rooted in double contingency of face-to-face communication. The constraints presented by earlier interactions and their uptake in a social system can be rejected and renegotiated in the process of anticipation. By figuring out how one’s actions are mapped by the other, or by social systems in which one participates, room is created to falsify expectations and to disrupt anticipations. This will not necessarily breed anomy, chaos or anarchy, but may instead provide spaces for contestation, self-definition in defiance of labels provided by the expectations of others, and the beginnings of novel or transformed social institutions. As such, the uncertainty inherent in the double contingency defines human autonomy and human identity as relational and even ephemeral, always requiring vigilance and creative invention in the face of unexpected or unreasonably constraining expectations.

Whereas Nissenbaum’s theory of privacy is “admittedly conservative”, Hildebrandt’s is grounded in a defense of freedom, invention, and transformation. If Nissenbaum and Hildebrandt were more inclined to contest each other directly, this might be privacy scholarship’s equivalent of the Habermas/Luhmann debate. However, this is unlikely to occur because the two scholars operate in different legal systems, which reduces the stakes of the debate.

We must assume that Hildebrandt, in 2013, would have approved of Brexit, the ultimate defiance of labels and expectations against a Habermasian bureaucratic consensus. Perhaps she also, as would be consistent with this view, has misgivings about the extraterritorial enforcement of the GDPR. Or maybe she would prefer a global bureaucratic consensus that agreed with Luhmann; but this is a contradiction. This psychologistic speculation is no doubt unproductive.

What is more productive is the pursuit of a synthesis between these poles. As a liberal society, we would each like our allocation of autonomy; we often find ourselves in tension with the bureaucratic systems that, according to rough consensus and running code, are designed to deliver to us our measure of autonomy. Those who overstep their allocation of autonomy, such as those who participated in the most recent Capitol insurrection, are put in prison. Freedom coexists with law and even order in sometimes uncomfortable ways. There are contests; they are often ugly at the time, however much they are glorified retrospectively by their winners as a form of past-oriented validation of the status quo.

References

Hildebrandt, M. (2013). Profile transparency by design?: Re-enabling double contingency. Privacy, due process and the computational turn: The philosophy of law meets the philosophy of technology, 221-46.

Hildebrandt, M. (2015). Smart technologies and the end (s) of law: novel entanglements of law and technology. Edward Elgar Publishing.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

A philosophical puzzle: morality with complex rationality

There’s a recurring philosophical puzzle that keeps coming up as one drills into the foundational issues at the heart of technology policy. The most complete articulation of it that I know of is in a draft I’ve written with Jake Goldenfein whose publication was COVID-delayed. But here is an abbreviated version of the philosophical problem, distilled perhaps from the tech policy context.

For some reason it all comes back to Kant. The categorical imperative has two versions that are supposed to imply each other:

  • Follow rules that would be agreed on as universal by rational beings.
  • Treat others as ends and not means.

This is elegant and worked quite well while the definitions of ‘rationality’ in play were simple enough that Man could stand at the top of the hierarchy.

Kant is outdated now of course but we can see the influence of this theory in Rawls’s account of liberal ethics (the ‘veil of ignorance’ being a proxy for the reasoning being who has transcended their empirical body), in Habermas’s account of democracy (communicative rationality involving the setting aside of individual interests), and so on. Social contract theories are more or less along these lines. This paradigm is still more or less the gold standard.

There are two serious challenges to this moral paradigm. Both relate to how the original model of rationality it is based on is perhaps naive, or so rarefied as to be unrealistic. What happens if you deny that people are rational in any disinterested sense, or allow for different levels of rationality? It all breaks down.

On the one hand, there’s various forms of egoism. Sloterdijk argues that Nietzsche stood out partly because he argued for an ethics of self-advancement, which rejected deontological duty. Scandalous. The contemporary equivalent is the reputation of Ayn Rand and those inspired by her. The general idea here is the rejection of social contract. This is frustrating to those who see the social contract as serious and valuable. A key feature of this view is that reason is not, as it is for Kant, disinterested. Rather, it is self-interested. It’s instrumental reason with attendant Humean passions to steer it. The passions need not be too intellectually refined. Romanticism, blah blah.

On the other hand, the 20th century discovers scientifically the idea of bounded rationality. Herbert Simon is the pivotal figure here. Individuals, being bounded, form organizations to transcend their limits. Simon is the grand theorist of managerialism. As far as I know, Simon’s theories are amoral, strictly about the execution of instrumental reason.

Nevertheless, Simon poses a challenge to the universalist paradigm because he reveals the inadequacy of individual humans to self-determine anything of significance. It’s humbling; it also threatens the anthropocentrism that provided the grounds for humanity’s mutual self-respect.

So where does one go from here?

It’s a tough question. Some spitballing:

  • One option is to relocate the philosophical subject from the armchair (Kant) to the public sphere (Habermas), and on into a new kind of institution better equipped to support their cogitation about norms. A public sphere equipped with Bloomberg terminals? But then who provides the terminals? And what about actually existing disparities of access?
    • One implication of this option, following Habermas, is that the communications within it, which would have to include data collection and the application of machine learning, would be disciplined in ways that would prevent defections.
    • Another implication, which is the most difficult one, is that the institution that supports this kind of reasoning would have to acknowledge different roles. These roles would constitute each other relationally: there would need to be a division of labor. But each role would need to be able to legitimize its participation in the whole and trust the overall process. This seems most difficult to theorize, let alone execute.
  • A different option, sort of the unfinished Nietzschean project, is to develop the individual’s choice to defect into something more magnanimous. Simone de Beauvoir’s widely underrated Ethics of Ambiguity is perhaps the best accomplishment along these lines. The individual, once they overcome their own solipsism and consider their true self-interests at an existential level, comes to understand how the success of their projects depends on society, because society will outlive them. In a way, this point echoes Simon’s in that it begins from an acknowledgment of human finitude. It reasons from there to a theory of how finite human projects can become infinite (achieving the goal of immortality for the one who initiates them) by being sufficiently prosocial.

Either of these approaches might be superior to “liberalism”, which arguably is stuck in the first paradigm (though I suppose there are many liberal theorists who would defend their position). As a thought experiment, I wonder what public policies motivated by either of these positions would look like.

Notes on Deneen, “Why Liberalism Failed”, Foreword

I’ve begun reading the recently published book, Why Liberalism Failed (2018), by Patrick Deneen. It appears to be making some waves in the political theory commentary. The author claims that it was 10 years in the making but was finished three weeks before the 2016 presidential election, which suggests that the argument within it is prescient.

I’m not far in yet.

There is an intriguing foreword from James Davison Hunter and John M. Owen IV, the editors. Their framing of the book is surprisingly continental:

  • They declare that liberalism has arrived at its “legitimacy crisis”, a Habermasian term.
  • They claim that the core contention of the book is a critique of the contradictions within Immanuel Kant’s view of individual autonomy.
  • They compare Deenan with other “radical” critics of liberalism, of which they name: Marx, the Frankfurt School, Foucault, Nietzsche, Schmitt, and the Catholic Church.

In search of a litmus-test-like clue as to where in the political spectrum the book falls, I’ve found this passage in the foreword:

Deneen’s book is disruptive not only for the way it links social maladies to liberalism’s first principles, but also because it is difficult to categorize along our conventional left-right spectrum. Much of what he writes will cheer social democrats and anger free-market advocates; much else will hearten traditionalists and alienate social progressives.

Well, well, well. If we are to fit Deneen’s book into the conceptual 2-by-2 provided in Fraser’s recent work, it appears that Deneen’s political theory is a form of reactionary populism, rejecting progressive neoliberalism. In other words, the foreword evinces that Deneen’s book is a high-brow political theory contribution that weighs in favor of the kind of politics that has heretofore been articulated only by intellectual pariahs.

social structure and the private sector

The Human Cell

Academic social scientists leaning towards the public intellectual end of the spectrum love to talk about social norms.

This is perhaps motivated by the fact that these intellectual figures are prominent in the public sphere. The public sphere is where these norms are supposed to solidify, and these intellectuals would like to emphasize their own importance.

I don’t exclude myself from this category of persons. A lot of my work has been about social norms and technology design (Benthall, 2015; Benthall, Gürses and Nissenbaum, 2017).

But I also work in the private sector, and it’s striking how differently things look from that perspective. It’s natural for academics who participate more in the public sphere than the private sector to be biased in their view of social structure. From the perspective of being able to accurately understand what’s going on, you have to think about both at once.

That’s challenging for a lot of reasons, one of which is that the private sector is a lot less transparent than the public sphere. In general the internals of actors in the private sector are not open to the scrutiny of commentariat onlookers. Information is one of the many resources traded in pairwise interactions; when it is divulged, it is divulged strategically, introducing bias. So it’s hard to get a general picture of the private sector, even though it accounts for a much larger proportion of social structure than the public sphere does. In other words, public spheres are highly over-represented in analyses of social structure due to the availability of public data about them. That is worrisome from an analytic perspective.

It’s well worth making the point that the public/private dichotomy is problematic. Contextual integrity theory (Nissenbaum, 2009) argues that modern society is differentiated among many distinct spheres, each bound by its own social norms. Nissenbaum actually has a quite different notion of norm formation from, say, Habermas. For Nissenbaum, norms evolve over social history, but may be implicit. Contrast this with Habermas’s view that norms are the result of communicative rationality, which is an explicit and linguistically mediated process. The public sphere is a big deal for Habermas. Nissenbaum, a scholar of privacy, rejects the idea of the ‘public sphere’ simpliciter. Rather, social spheres self-regulate and privacy, which she defines as appropriate information flow, is maintained when information flows according to these multiple self-regulatory regimes.

I believe Nissenbaum is correct on this point of societal differentiation and norm formation. This nuanced understanding of privacy as the differentiated management of information flow challenges any simplistic notion of the public sphere. Does it challenge a simplistic notion of the private sector?

Naturally, the private sector doesn’t exist in a vacuum. In the modern economy, companies are accountable to the law, especially contract law. They have to pay their taxes. They have to deal with public relations and are regulated as to how they manage information flows internally. Employees can sue their employers, etc. So just as the ‘public sphere’ doesn’t permit a total free-for-all of information flow (some kinds of information flow in public are against social norms!), so too does the ‘private sector’ not involve complete secrecy from the public.

As a hypothesis, we can posit that what makes the private sector different is that the relevant social structures are less open in their relations with each other than they are in the public sphere. We can imagine an autonomous social entity like a biological cell. Internally it may have a lot of interesting structure and organelles. Its membrane prevents this complexity leaking out into the aether, or plasma, or whatever it is that human cells float around in. Indeed, this membrane is necessary for the proper functioning of the organelles, which in turn allows the cell to interact properly with other cells to form a larger organism. Echoes of Francisco Varela.

It’s interesting that this may actually be a quantifiable difference. One way of modeling the difference between the internal and external-facing complexity of an entity is with information theory. The more complex internal state of the entity has higher entropy than the membrane. The fact that the membrane causally mediates interactions between the internals and the environment limits information flow between them; this is captured by the Data Processing Inequality. The limited information flow between the system’s internals and its environment is quantified as low mutual information between the two domains. At zero mutual information, the two domains are statistically independent of each other.
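
This intuition can be made concrete with a toy calculation. The sketch below is illustrative only: the variables X (internal state), M (membrane), and E (environment), and all the distributions, are invented for the example. It sets up a Markov chain X → M → E and checks that, per the Data Processing Inequality, the environment can share no more information with the internals than the membrane does.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(joint):
    """I(A;B) = H(A) + H(B) - H(A,B), from a joint distribution matrix."""
    return (entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0))
            - entropy(joint.flatten()))

rng = np.random.default_rng(0)

# Internal state X: eight values -- the rich, high-entropy organelles.
px = rng.dirichlet(np.ones(8))

# Membrane M: a coarse, noisy two-valued summary of X, i.e. a channel P(m|x).
p_m_given_x = rng.dirichlet(np.ones(2), size=8)

# Environment E: depends on X only through M, i.e. a channel P(e|m).
p_e_given_m = rng.dirichlet(np.ones(4), size=2)

# Joint distributions along the Markov chain X -> M -> E.
p_xm = px[:, None] * p_m_given_x   # P(x, m)
p_xe = p_xm @ p_e_given_m          # P(x, e), marginalizing out m

print("H(X)   =", entropy(px))                # internal complexity
print("H(M)   =", entropy(p_xm.sum(axis=0)))  # membrane complexity
print("I(X;M) =", mutual_info(p_xm))
print("I(X;E) =", mutual_info(p_xe))

# Data Processing Inequality: I(X;E) <= I(X;M).
assert mutual_info(p_xe) <= mutual_info(p_xm) + 1e-12
```

Since M is binary, H(M) can be at most one bit, while H(X) over eight values will typically be much higher; and I(X;E) is bounded by I(X;M). The membrane’s coarseness is exactly what decouples the organelles from the outside.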

I haven’t worked out all the implications of this.

References

Benthall, S. (2015). Designing Networked Publics for Communicative Action. In J. Davis & N. Jurgenson (Eds.), Theorizing the Web 2014 [Special Issue]. Interface, 1(1).

Sebastian Benthall, Seda Gürses and Helen Nissenbaum (2017), “Contextual Integrity through the Lens of Computer Science”, Foundations and Trends® in Privacy and Security: Vol. 2: No. 1, pp 1-69. http://dx.doi.org/10.1561/3300000016

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Why I will blog more about math in 2018

One reason to study and write about political theory is what Habermas calls the emancipatory interest of human inquiry: to come to better understand the social world one lives in, unclouded by ideology, in order to be more free from those ideological expectations.

This is perhaps counterintuitive since what is perhaps most seductive about political theory is that it is the articulation of so many ideologies. Indeed, one can turn to political theory because one is looking for an ideology that suits them. Having a secure world view is comforting and can provide a sense of purpose. I know that personally I’ve struggled with one after another.

Looking back on my philosophical ‘work’ over the past decade (as opposed to my technical and scientific work), I’d like to declare it an emancipatory success for at least one person, myself. I am happier for it, though at the cost that comes from learning the hard way.

A problem with this blog is that it is too esoteric. It has not been written with a particular academic discipline in mind. It draws rather too heavily from certain big name thinkers that not enough people have read. I don’t provide background material in these thinkers, and so many find this inaccessible.

One day I may try to edit this material into a more accessible version of its arguments. I’m not sure who would find this useful, because much of what I’ve been doing in this work is arriving at the conclusion that actually, truly, mathematical science is the finest way of going about understanding sociotechnical systems. I believe this follows even from deep philosophical engagement with notable critics of this view–and I have truly tried to engage with the best and most notable of these critics. There will always be more of them, but I think at this point I have to make a decision to not seek them out any more. I have tested these views enough to build on them as a secure foundation.

What follows then is a harder but I think more rewarding task of building out the mathematical theory that reflects my philosophical conclusions. This is necessary for, for example, building a technical implementation that expresses the political values that I’ve arrived at. Arguably, until I do this, I’ll have just been beating around the bush.

I will admit to being sheepish about blogging on technical and mathematical topics. This is because, in my understanding, technical and mathematical writing is held to a higher standard than normal writing. Errors are clearer, and more permanent.

I recognize this now as a personal inhibition and a destructive one. If this blog has been valuable to me as a tool for reading, writing, and developing fluency in obscure philosophical literature, why shouldn’t it also be a tool for reading, writing, and developing fluency in obscure mathematical and technical literature? And to do the latter, shouldn’t I have to take the risk of writing with the same courage, if not abandon?

This is my wish for 2018: to blog more math. It’s a riskier project, but I think I have to in order to keep developing these ideas.

Robert Post on Data vs. Dignitary Privacy

I was able to see Robert Post present his article, “Data Privacy and Dignitary Privacy: Google Spain, the Right to Be Forgotten, and the Construction of the Public Sphere”, today. My other encounter with Post’s work was quite positive, and I was very happy to learn more about his thinking at this talk.

Post’s argument was based off of the facts of the Google Spain SL v. Agencia Española de Protección de Datos (“Google Spain”) case in the EU, which set off a lot of discussion about the right to be forgotten.

I’m not trained as a lawyer, and will leave the legal analysis to the verbatim text. But there were some broader philosophical themes that resonate with topics I’ve discussed on this blog and in my other research. These I wanted to note.

If I follow Post’s argument correctly, it is something like this:

  • According to EU Directive 95/46/EC, there are two kinds of privacy. Data privacy rules over personal data, establishing control and limitations on its use. The emphasis is on the data itself, which is reasoned about analogously to property. Dignitary privacy is about maintaining appropriate communications between people and restricting those communications that may degrade, humiliate, or mortify them.
  • EU rules about data privacy are governed by rules specifying the purpose for which data is used, thereby implying that the use of this data must be governed by instrumental reason.
  • But there’s the public sphere, which must not be governed by instrumental reason, for Habermasian reasons. The public sphere is, by definition, the domain of communicative action, where actions must be taken with the ambiguous purpose of open dialogue. That is why free expression is constitutionally protected!
  • Data privacy, formulated as an expression of instrumental reason, is incompatible with the free expression of the public sphere.
  • The Google Spain case used data privacy rules to justify the right to be forgotten, and in this it developed an unconvincing and sloppy precedent.
  • Dignitary privacy is in tension with free expression, but not incompatible with it. This is because it is based not on instrumental reason, but rather on norms of communication (which are contextual).
  • Future right to be forgotten decisions should be made on the basis of dignitary privacy. This will result in more cogent decisions.

I found Post’s argument very appealing. I have a few notes.

First, I had never made the connection between what Hildebrandt (2013, 2014) calls “purpose binding” in EU data protection regulation and instrumental reason, but there it is. There is a sense in which these purpose clauses are about optimizing something that is externally and specifically defined before the privacy judgment is made (cf. Tschantz, Datta, and Wing, 2012, for a formalization).

This approach seems generally in line with the view of a government as a bureaucracy primarily involved in maintaining control over a territory or population. I don’t mean this in a bad way, but in a literal way of considering control as feedback into a system that steers it to some end. I’ve discussed the pervasive theme of ‘instrumentality run amok’ in questions of AI superintelligence here. It’s a Frankfurt School trope that appears to have made its way in a subtle way into Post’s argument.

The public sphere is not, in Habermasian theory, supposed to be dictated by instrumental reason, but rather by communicative rationality. This has implications for the technical design of networked publics that I’ve scratched the surface of in this paper. By pointing to the tension between instrumental/purpose/control based data protection and the free expression of the public sphere, I believe Post is getting at a deep point about how we can’t have the public sphere be too controlled lest we lose the democratic property of self-governance. It’s a serious argument that probably should be addressed by those who would like to strengthen rights to be forgotten. A similar argument might be made for other contexts whose purposes seem to transcend circumscription, such as science.

Post’s point is not, I believe, to weaken these rights to be forgotten, but rather to put the arguments for them on firmer footing: dignitary privacy, or the norms of communication and the awareness of the costs of violating them. Indeed, the facts behind right to be forgotten cases I’ve heard of (there aren’t many) all seem to fall under these kinds of concerns (humiliation, etc.).

What’s very interesting to me is that the idea of dignitary privacy as consisting of appropriate communication according to contextually specific norms feels very close to Helen Nissenbaum’s theory of Contextual Integrity (2009), with which I’ve become very familiar in the past year through my work with Prof. Nissenbaum. Contextual integrity posits that privacy is about adherence to norms of appropriate information flow. Is there a difference between information flow and communication? Isn’t Shannon’s information theory a “mathematical theory of communication”?

The question of whether and under what conditions information flow is communication and/or data is quite deep, actually. More on that later.

For now though it must be noted that there’s a tension, perhaps a dialectical one, between purposes and norms. For Habermas, the public sphere needs to be a space of communicative action, as opposed to instrumental reason. This is because communicative action is how norms are created: through the agreement of people who bracket their individual interests to discuss collective reasons.

Nissenbaum also has a theory of norm formation, but it does not depend so tightly on the rejection of instrumental reason. In fact, it accepts the interests of stakeholders as among several factors that go into the determination of norms. Other factors include societal values, contextual purposes, and the differentiated roles associated with the context. Because contexts, for Nissenbaum, are defined in part by their purposes, this has led Hildebrandt (2013) to make direct comparisons between purpose binding and Contextual Integrity. They are similar, she concludes, but not the same.

It would be easy to say that the public sphere is a context in Nissenbaum’s sense, with a purpose, which is the formation of public opinion (which seems to be Post’s position). Properly speaking, social purposes may be broad or narrow, and specially defined social purposes may be self-referential (why not?); indeed, these self-referential social purposes may be the core of society’s “self-consciousness”. Why shouldn’t there be laws to ensure the freedom of expression within a certain context for the purpose of cultivating the kinds of public opinions that would legitimize laws and cause them to adapt democratically? These frameworks could be made more precise if they were a bit more formal and shed some baggage; that would be useful theory-building in line with Nissenbaum’s and Post’s broader agendas.

A test of this perhaps more nuanced but still teleological position (indeed instrumental, though maybe more properly pragmatic, à la Dewey, in that it can blend several different metaethical categories) is to see if one can motivate a right to be forgotten in the public sphere by appealing to the need for communicative action, the especially appropriate communication norms around it, and dignitary privacy.

This doesn’t seem like it should be hard to do at all.

References

Hildebrandt, Mireille. “Slaves to big data. Or are we?.” (2013).

Hildebrandt, Mireille. “Location Data, Purpose Binding and Contextual Integrity: What’s the Message?.” Protection of Information and the Right to Privacy-A New Equilibrium?. Springer International Publishing, 2014. 31-62.

Nissenbaum, Helen. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press, 2009.

Post, Robert, Data Privacy and Dignitary Privacy: Google Spain, the Right to Be Forgotten, and the Construction of the Public Sphere (April 15, 2017). Duke Law Journal, Forthcoming; Yale Law School, Public Law Research Paper No. 598. Available at SSRN: https://ssrn.com/abstract=2953468 or http://dx.doi.org/10.2139/ssrn.2953468

Tschantz, Michael Carl, Anupam Datta, and Jeannette M. Wing. “Formalizing and enforcing purpose restrictions in privacy policies.” Security and Privacy (SP), 2012 IEEE Symposium on. IEEE, 2012.

A quick recap: from political to individual reasoning about ends

So to recap:

Horkheimer warned in Eclipse of Reason that formalized subjective reason, which optimizes means, was going to eclipse “objective reason” about social harmony, the good life, the “ends” that really matter. Technical efficacy (which is capitalism, which is AI) would expose how objective reason is based in mythology, and so society would be senseless and miserable forever.

There was at one point a critical reaction against formal, technical reason that was called the Science Wars in the 90’s, but though it continues to have intellectual successors it is for the most part self-defeating and powerless. Technical reasoning is powerful because it is true, not true because it is powerful.

It remains an open question whether it’s possible to have a society that steers itself according to something like objective reason. One could argue that Habermas’s project of establishing communicative action as a grounds for legitimate pluralistic democracy was an attempt to show the possibility of objective reason after all. This is, for some reason, an unpopular view in the United States, where democracy is often seen as a way of mediating agonistic interests rather than finding common ones.

But Horkheimer’s Frankfurt School is just one particularly depressing and insightful view. Maybe there is some other way to go. For example, one could decide that society has always been disappointing, and that determining one’s true “ends” is an individual, rather than collective, endeavor. Existentialism is one such body of work that posits a substantive moral theory (or at least works toward one) that is distrustful of political as opposed to individual solutions.

Existentialism in Design: Motivation

There has been a lot of recent work on the ethics of digital technology. This is a broad area of inquiry, but it includes such topics as:

  • The ethics of Internet research, including the Facebook emotional contagion study and the Encore anti-censorship study.
  • Fairness, accountability, and transparency in machine learning.
  • Algorithmic price-gouging.
  • Autonomous car trolley problems.
  • Ethical (Friendly?) AI research? This last one is maybe on the fringe…

If you’ve been reading this blog, you know I’m quite passionate about the intersection of philosophy and technology. I’m especially interested in how ethics can inform the design of digital technology, and how it can’t. My dissertation is exploring this problem in the privacy engineering literature.

I have some dissatisfactions with this field which I don’t expect to make it into my dissertation. One is that the privacy engineering literature, and academic “ethics of digital technology” more broadly, tends to be heavily informed by the law, in the sense of courts, legislatures, and states. This is motivated by the important consideration that technology, and especially technologists, should in a lot of cases be compliant with the law. As a practical matter, it certainly spares technologists the trouble of getting sued.

However, being compliant with the law is not precisely the same thing as being ethical. There’s a long ethical tradition of civil disobedience (certain non-violent protest activities, for example) which is not strictly speaking legal, though it has certainly had an impact on what is considered legal later on. Meanwhile, the point has been made, but maybe not often enough, that legal language often looks like ethical language but really shouldn’t be interpreted that way. This is a point made by Oliver Wendell Holmes, Jr. in his notable essay, “The Path of the Law”.

When the ethics of technology are not being framed in terms of legal requirements, they are often framed in terms of one of two prominent ethical frameworks. One framework is consequentialism: ethics is a matter of maximizing the beneficial consequences and minimizing the harmful consequences of one’s actions. One variation of consequentialist ethics is utilitarianism, which attempts to solve ethical questions by reducing them to a calculus over “utility”, or benefit as it is experienced or accrued by individuals. A lot of economics takes this ethical stance. Another, less quantitative variation of consequentialist ethics is present in the research ethics principle that research should maximize benefits and minimize harms to participants.

The other major ethical framework used in discussions of ethics and technology is deontological ethics. These are ethics that are about rights, duties, and obligations. Justifying deontological ethics can be a little trickier than justifying consequentialist ethics. Frequently this is done by invoking social norms, as in the case of Nissenbaum’s contextual integrity theory. Another variation of a deontological theory of ethics is Habermas’s theory of universal pragmatics and legitimate norms developed through communicative action. In the ideal case, these norms become encoded into law, though it is rarely true that laws are ideal.

Consequentialist considerations probably make the world a better place in some aggregate sense. Deontological considerations probably make the world a fairer or at least more socially agreeable place, as in their modern formulations they tend to result from social truces or compromises. I’m quite glad that these frameworks are taken seriously by academic ethicists and by the law.

However, as I’ve said I find these discussions dissatisfying. This is because I find both consequentialist and deontological ethics to be missing something. They both rely on some foundational assumptions that I believe should be questioned in the spirit of true philosophical inquiry. A more thorough questioning of these assumptions, and tentative answers to them, can be found in existentialist philosophy. Existentialism, I would argue, has not had its due impact on contemporary discourse on ethics and technology, and especially on the questions surrounding ethical technical design. This is a situation I intend to one day remedy. Though Zach Weinersmith has already made a fantastic start:

“Self Driving Car Ethics”, by Weinersmith (SMBC)

What kinds of issues would be raised by existentialism in design? Let me try out a few examples of points made in contemporary ethics of technology discourse and a preliminary existentialist response to them.

Ethical charge: A superintelligent artificial intelligence could, if improperly designed, result in the destruction or impairment of all human life. This catastrophic risk must be avoided. (Bostrom, 2014)

Existentialist response: We are all going to die anyway. There is no catastrophic risk; there is only catastrophic certainty. We cannot make an artificial intelligence that prevents this outcome. We must instead design artificial intelligence that makes life meaningful despite its finitude.

Ethical charge: Internet experiments must not direct the browsers of unwitting people to test the URLs of politically sensitive websites. Doing this may lead to those people being harmed for being accidentally associated with the sensitive material. Researchers should not harm people with their experiments. (Narayanan and Zevenbergen, 2015)

Existentialist response: To be held responsible by a state’s criminal justice system for the actions taken by one’s browser, controlled remotely from America, is absurd. This absurdity, which pervades all life, is the real problem, not the suffering potentially caused by the experiment (because suffering in some form is inevitable, whether it is from painful circumstance or from ennui). What’s most important is the exposure of this absurdity and the potential liberation from false moralistic dogmas that limit human potential.

Ethical charge: Use of Big Data to sort individual people, for example in the case of algorithms used to choose among applicants for a job, may result in discrimination against historically disadvantaged and vulnerable groups. Care must be taken to tailor machine learning algorithms to adjust for the political protection of certain classes of people. (Barocas and Selbst, 2016)

Existentialist response: The egalitarian tendency in ethics which demands that the greatest should invest themselves in the well-being of the weakest is a kind of herd morality, motivated mainly by ressentiment of the disadvantaged, who blame the powerful for their frustrations. This form of ethics, which is based on base emotions like pity and envy, is life-negating because it denies the most essential impulse of life: to overcome resistance and to become great. Rather than restrict Big Data’s ability to identify and augment greatness, it should be encouraged. The weak must be supported out of a spirit of generosity from the powerful, not from a curtailment of power.

As a first cut at existentialism’s response to ethical concerns about technology, it may appear that existentialism is more permissive about the use and design of technology than consequentialism and deontology. It is possible that this conclusion will be robust to further investigation. There is a sense in which existentialism may be the most natural philosophical stance for the technologist because a major theme in existentialist thought is the freedom to choose one’s values and the importance of overcoming the limitations on one’s power and freedom. I’ve argued before that Simone de Beauvoir, who is perhaps the most clear-minded of the existentialists, has the greatest philosophy of science because it respects this purpose of scientific research. There is a vivacity to existentialism that does not sweat the small stuff and thinks big while at the same time acknowledging that suffering and death are inevitable facts of life.

On the other hand, existentialism is a morally demanding line of inquiry precisely because it does not use either easy metaethical heuristics (such as consequentialism or deontology) or the bald realities of the human condition as a stopgap. It demands that we tackle all the hard questions, sometimes acknowledging that they are unanswerable or answerable only in the negative, and muddle on despite the hardest truths. Its aim is to provide a truer, better morality than the alternatives.

Perhaps this is best illustrated by some questions implied by my earlier “existentialist responses” that address the currently nonexistent field of existentialism in design. These are questions I haven’t yet heard asked by scholars at the intersection of ethics and technology.

  • How could we design an artificial intelligence (or, to make it simpler, a recommendation system) that makes the most meaningful choices for its users?
  • What sort of Internet intervention would be most liberatory for the people affected by it?
  • What technology can best promote generosity from the world’s greatest people as a celebration of power and life?

These are different questions from any that you read about in the news or in the ethical scholarship. I believe they are nevertheless important ones, maybe more important than the ethical questions that are more typically asked. The theoretical frameworks employed by most ethicists make assumptions that obscure what everybody already knows about the distribution of power and its abuses, the inevitability of suffering and death, life’s absurdity and especially the absurdity of moralizing sentiment in the face of the cruelty of reality, and so on. At best, these ethical discussions inform the interpretation and creation of law, but law is not the same as morality, and to confuse the two robs morality of what is perhaps its most essential component, which is that it is grounded meaningfully in the experience of the subject.

In future posts (and, ideally, eventually in a paper derived from those posts), I hope to flesh out more concretely what existentialism in design might look like.

References

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104, 671.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. OUP Oxford.

Narayanan, A., & Zevenbergen, B. (2015). No Encore for Encore? Ethical questions for web-based censorship measurement.

Weinersmith, Z. “Self Driving Car Ethics”. Saturday Morning Breakfast Cereal.

Habermas seems quaint right now, but shouldn’t

By chance I was looking up Habermas’s later philosophical work today, like Between Facts and Norms (1992), which has been said to be the culmination of the project he began with The Structural Transformation of the Public Sphere in 1962. In it, he argues that the law is what gives pluralistic states their legitimacy, because the law enshrines the consent of the governed. Power cannot legitimize itself; democratic law is the foundation for the legitimate state.

Habermas’s later work is widely respected in the European Union, which by and large has functioning pluralistic democratic states. Habermas emerged from the Frankfurt School to become a theorist of modern liberalism and was good at it. While it is an empirical question how much education in political theory is tied to the legitimacy and stability of the state, anecdotally we can say that Habermas is a successful theorist and the German-led European Union is, presently, a successful government. For the purposes of this post, let’s assume that this is at least in part due to the fact that citizens are convinced, through the education system, of the legitimacy of their form of government.

In the United States, something different happened. Habermas’s earlier work (such as the The Structural Transformation of the Public Sphere) was introduced to United States intellectuals through a critical lens. Craig Calhoun, for example, argued in 1992 that the politics of identity was more relevant or significant than the politics of deliberation and democratic consensus.

That was over 25 years ago, and that moment was influential in the way political thought has unfolded in Europe and the United States. In my experience, it is very difficult to find support in academia for the view that rational consensus around democratic institutions is a worthwhile thing to study or advocate for. Identity politics and the endless contest of perspectives is much more popular among students and scholars coming out of places like UC Berkeley. In my own department, students were encouraged to read Habermas’s early work in the context of the identity politics critique, but never exposed to the later work that reacted to these critiques constructively to build a theory that was specifically about pluralism, which is what political identities need in order to unify as a legitimate state. There’s a sense in which the whole idea that one should continue a philosophical argument to the point of constructive agreement, despite the hard work and discipline that this demands, was abandoned in favor of an ideology of intellectual diversity that discouraged scrutiny and rigor across boundaries of identity, even in the narrow sense of professional or disciplinary identity.

The problem with this approach to intellectualism is that it is fractious and undermines itself. When fractiousness and self-undermining are taken as intellectual virtues, it is no wonder that boorish overconfidence can take advantage of them in an open contest. And indeed the political class in the United States today has been undermined by its inability to justify its own power and institutions in anything but the fragmented arguments of identity politics.

It is a sad state of affairs. I can’t help but feel my generation is intellectually ill-equipped to respond to the very prominent challenges to the legitimacy of the state that are being leveled at it every day. Not to put too fine a point on it, I blame the intellectual laziness of American critical theory and its inability to absorb the insights of Habermas’s later theoretical work.

Addendum 8/7/17a:

It has come to my attention that this post is receiving a relatively large amount of traffic. This seems to happen when I hit a nerve, specifically when I recommend Habermas over identitarianism in the context of UC Berkeley. Go figure. I respectfully ask for comments from any readers. Some have already helped me further my thinking on this subject. Also, I am aware that a Wikipedia link is not the best way to spread understanding of Habermas’s later political theory. I can recommend this book review (Chriss, 1998) of Between Facts and Norms as well as the Habermas entry in the Stanford Encyclopedia of Philosophy which includes a section specifically on Habermasian cosmopolitanism, which seems relevant to the particular situation today.

Addendum 8/7/17b:

I may have guessed wrong. The recent traffic has come from Reddit. Welcome, Redditors!

 

Capital, democracy, and oligarchy

1. Capital

Bourdieu nicely lays out a taxonomy of forms of capital (1986), including economic capital (wealth), which we are all familiar with, as well as cultural capital (skills, elite tastes) and social capital (relationships with others, especially other elites). By saying that all three categories are forms of capital, what he means is that each “is accumulated labor (in its materialized form or its ‘incorporated,’ embodied form) which, when appropriated on a private, i.e., exclusive, basis by agents or groups of agents, enables them to appropriate social energy in the form of reified or living labor.” In his account, capital in all its forms is what gives society its structure, including especially its economic structure.

[Capital] is what makes the games of society – not least, the economic game – something other than simple games of chance offering at every moment the possibility of a miracle. Roulette, which holds out the opportunity of winning a lot of money in a short space of time, and therefore of changing one’s social status quasi-instantaneously, and in which the winning of the previous spin of the wheel can be staked and lost at every new spin, gives a fairly accurate image of this imaginary universe of perfect competition or perfect equality of opportunity, a world without inertia, without accumulation, without heredity or acquired properties, in which every moment is perfectly independent of the previous one, every soldier has a marshal’s baton in his knapsack, and every prize can be attained, instantaneously, by everyone, so that at each moment anyone can become anything. Capital, which, in its objectified or embodied forms, takes time to accumulate and which, as a potential capacity to produce profits and to reproduce itself in identical or expanded form, contains a tendency to persist in its being, is a force inscribed in the objectivity of things so that everything is not equally possible or impossible. And the structure of the distribution of the different types and subtypes of capital at a given moment in time represents the immanent structure of the social world, i.e. , the set of constraints, inscribed in the very reality of that world, which govern its functioning in a durable way, determining the chances of success for practices.

Bourdieu is clear in his writing that he does not intend this to be taken as unsubstantiated theoretical posture. Rather, it is a theory he has developed through his empirical research. Obviously, it is also informed by many other significant Western theorists, including Kant and Marx. There is something slightly tautological about the way he defines his terms: if capital is posited to explain all social structure, then any social structure may be explained according to a distribution of capital. This leads Bourdieu to theorize about many forms of capital less obvious than wealth, such as symbolic capital, like academic degrees.

The cost of such a theory is that it demands that one begin the difficult task of enumerating different forms of capital and, importantly, the ways in which some forms of capital can be converted into others. It is a framework which, in principle, could be used to adequately explain social reality in a properly scientific way, as opposed to other frameworks that seem more intended to maintain the motivation of a political agenda or academic discipline. Indeed there is something “interdisciplinary” about the very proposal to address symbolic and economic power in a way that deals responsibly with their commensurability.

So it has to be posited simultaneously that economic capital is at the root of all the other types of capital and that these transformed, disguised forms of economic capital, never entirely reducible to that definition, produce their most specific effects only to the extent that they conceal (not least from their possessors) the fact that economic capital is at their root, in other words – but only in the last analysis – at the root of their effects. The real logic of the functioning of capital, the conversions from one type to another, and the law of conservation which governs them cannot be understood unless two opposing but equally partial views are superseded: on the one hand, economism, which, on the grounds that every type of capital is reducible in the last analysis to economic capital, ignores what makes the specific efficacy of the other types of capital, and on the other hand, semiologism (nowadays represented by structuralism, symbolic interactionism, or ethnomethodology), which reduces social exchanges to phenomena of communication and ignores the brutal fact of universal reducibility to economics.

[I must comment that after years in an academic environment where sincere intellectual effort seemed effectively boobytrapped by disciplinary trip wires around ethnomethodology, quantification, and so on, this Bourdieusian perspective continues to provide me fresh hope. I’ve written here before about Bourdieu’s Science of Science and Reflexivity (2004), which was a wake up call for me that led to my writing this paper. That has been my main entrypoint into Bourdieu’s thought until now. The essay I’m quoting from now was published at least fifteen years prior and by its 34k citations appears to be a classic. Much of what’s written here will no doubt come across as obvious to the sophisticated reader. It is a symptom of a perhaps haphazard education that leads me to write about it now as if I’ve discovered it; indeed, the personal discovery is genuine for me, and though it is not a particularly old work, reading it and thinking it over carefully does untangle some of the knots in my thinking as I try to understand society and my role in it. Perhaps some of that relief can be shared through writing here.]

Naturally, Bourdieu’s account of capital is more nuanced and harder to measure than an economist’s. But it does not preclude an analysis of economic capital such as Piketty’s. Indeed, much of the economist’s discussion of human capital, especially technological skill, and its relationship to wages can be mapped to a discussion of a specific form of cultural capital and how it can be converted into economic capital. A helpful aspect of this shift is that it allows one to conceptualize the effects of class, gender, and racial privilege in the transmission of technical skills. Cultural capital is, explicitly in Bourdieu’s account, labor intensive to transmit and often done so informally. Cultural tendencies to transmit this kind of capital preferentially to men instead of women in the family home become a viable explanation for the gender gap in the tech industry. While this is perhaps not a novel explanation, it is a significant one, and Bourdieu’s theory helps us formulate it in a specific and testable way that transcends, as he says, both economism and semiologism, which seems productive when one is discussing society in a serious way.

One could also use a Bourdieusian framework to understand innovation spillover effects, as economists like to discuss, or the rise of Silicon Valley’s “Regional Advantage” (Saxenian, 1996), to take a specific case. One of Saxenian’s arguments (as I gloss it) is that Silicon Valley was more economically effective as a region than Route 128 in Massachusetts because the influx of engineers experimenting with new business models and reinvesting their profits into other new technology industries created a confluence of relevant cultural capital (technical skill) and economic capital (venture capital) that allowed the economic capital to be deployed more effectively. In other words, it wasn’t that the engineers in Silicon Valley were better engineers than the engineers in Route 128; it was that in Route 128 the economic capital was being deployed in a way that was less informed by technical knowledge. [Incidentally, if this argument is correct, then in some ways it undermines an argument put forward recently for setting up a “cyber workforce incubator” for the Federal Government in the Bay Area based on the idea that it’s necessary to tap into the labor pool there. If what makes Silicon Valley is smart capital rather than smart engineers, then that explains why there are so many engineers there (they are following the money) but also suggests that the price of technical labor there may be inflated. Engineers elsewhere may be just as good at being part of a cyber workforce. Which is just to say that when Bourdieusian theory is taken seriously, it can have practical policy implications.]

One must imagine, when considering society thus, that one could in principle map out the whole of society and the distribution of capitals within it. I believe Bourdieu does something like this in Distinction (1979), which I haven’t read–it is sadly referred to in the United States as the kind of book that is too dense to read. This is too bad.

But I was going to talk about…

2. Democracy

There are at least two great moments in history when democracy flourished. They have something in common.

One is Ancient Greece. The account of the polis in Hannah Arendt’s The Human Condition makes the familiar point that the citizens of the Ancient Greek city-state were masters of economically independent households. It was precisely the independence of politics (polis – city) from household economic affairs (oikos – house) that defined political life. Owning capital, in this case land and maybe slaves, was a condition for democratic participation. The democracy, such as it was, was the political unity of otherwise free capital holders.

The other historical moment is the rise of the mercantile class and the emergence of the democratic public sphere, as detailed by Habermas. If the public sphere Habermas described (and to some extent idealized) has been critiqued as being “bourgeois masculinist” (Fraser), that critique is telling. The bourgeoisie were precisely those who were owners of newly activated forms of economic capital–ships, mechanizing technologies, and the like.

If we look at the public sphere in its original form realistically, through the disillusionment of criticism, rational discourse among capital holders was strategically necessary for the bourgeoisie to make decisions about how to collectively allocate their economic capital. Viewed through the objective lens of information processing and pure strategy, the public sphere was an effective means of economic coordination that complemented the rise of the Weberian bureaucracy, which provided a predictable state and also created new demand for legal professionals and the early information workers: clerks and scriveners and such.

The diversity of professions necessary for the functioning of the modern mercantile state created a diversity of forms of cultural capital that could be exchanged for economic capital. Hence, capital diffused from its concentration in the aristocracy into the hands of the widening class of the bourgeoisie.

Neither the Ancient Greek nor the mercantile democracies were particularly inclusive. Perhaps there is no historical precedent for a fully inclusive democracy. Rather, there is precedent for egalitarian alliances of capital holders in cases where that capital is broadly enough distributed to constitute citizenship as an economic class. Moreover, I must insert here that the Bourdieusian model suggests that citizenship could extend through the diffusion of non-economic forms of capital as well. For example, membership in the clergy was a form of capital taken on by some of the gentry; this came, presumably, with symbolic and social capital. The public sphere creates opportunities for the public socialite that were distinct from the opportunities of the courtier or courtesan. And so on.

However exclusive these democracies were, Fraser’s account of subaltern publics and counterpublics is of course very significant. What about the early workers’ and women’s movements? Arguably these too can be understood in Bourdieusian terms. There were other forms of (social and cultural, if not economic) capital that workers and women in particular had available that provided the basis for their shared political interest and political participation.

What I’m suggesting is that:

  • Historically, the democratic impulse has been about uniting the interests of freeholders of capital.
  • A Bourdieusian understanding of capital allows us to maintain this (analytically helpful) understanding of democracy while also acknowledging the complexity of social structure, through the many forms of capital.
  • The complexity of society through the proliferation of forms of capital is one of, if not the, main mechanisms of expanding effective citizenship, which is still conditioned on capital ownership even though we like to pretend it’s not.

Which leads me to my last point, which is about…

3. Oligarchy

If a democracy is a political unity of many different capital holders, what then is oligarchy in contrast?

Oligarchy is rule of the few, especially the rich few.

We know, through Bourdieu, that there are many ways to be rich (not just economic ways). Nevertheless, capital (in its many forms) is very unevenly distributed, which accounts for social structure.

To some extent, it is unrealistic to expect the flattening of this distribution. Society is accumulated history and there has been a lot of history and most of it has been brutally unkind.

However, there have been times when capital (in its many forms) has diffused because of the terms of capital exchange, broadly speaking. The functional separation of different professions was one way in which capital was fragmented into many differently exchangeable forms of cultural, social, and economic capitals. A more complex society is therefore a more democratic one, because of the diversity of forms of capital required to manage it. [I suspect there’s a technically specific way to make this point but don’t know how to do it yet.]
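One toy way to make that point technically specific, offered purely as my own sketch and not anything in Bourdieu: treat a society’s capital as a distribution over forms, and measure its “complexity” as the Shannon entropy of that distribution. A society whose capital is concentrated in a single form scores zero; one with many substantial, partially independent forms scores higher. The particular forms and numbers below are made up for illustration.

```python
import math

def capital_diversity(holdings: dict) -> float:
    """Shannon entropy (in bits) of how total capital is spread across forms.

    `holdings` maps a form of capital to the (arbitrary-unit) amount of it
    in circulation. Higher entropy = capital fragmented into more forms.
    """
    total = sum(holdings.values())
    shares = [v / total for v in holdings.values() if v > 0]
    return -sum(p * math.log2(p) for p in shares)

# A society where economic capital is the only game in town...
print(capital_diversity({"economic": 100}))  # 0.0

# ...versus one where professional differentiation has fragmented capital
# into several exchangeable forms.
print(round(capital_diversity(
    {"economic": 40, "cultural": 30, "social": 20, "symbolic": 10}), 3))  # 1.846
```

On this measure, the functional separation of professions described above directly raises diversity; whether entropy is the right functional form, and how to weight the convertibility between forms, is exactly the open technical question.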

There are some consequences of this.

  1. Inequality in the sense of a very skewed distribution of capital and especially economic capital does in fact undermine democracy. You can’t really be a citizen unless you have enough capital to be able to act (use your labor) in ways that are not fully determined by economic survival. And of course this is not all or nothing; quantity of capital and relative capital do matter even beyond a minimum threshold.
  2. The second is that (1) can’t be the end of the story. Rather, to judge whether the capital distribution of e.g. a nation can sustain a democracy, you need to account for many kinds of capital, not just economic capital, and see how these are distributed and exchanged. In other words, it’s necessary to look at the political economy broadly speaking. (But, I think, it’s helpful to do so in terms of ‘forms of capital’.)

One example, which I just learned recently, is this. In the United States, we have an independent judiciary, a third branch of government. This is different from other countries that are allegedly oligarchies, notably Russia but also Rhode Island before 2004. One could ask: is this Separation of Powers important for democracy? The answer is intuitively “yes”, and though I’m sure very smart things have been written to answer the question “why”, I haven’t read them, because I’ve been too busy blogging….

Instead, I have an answer for you based on the preceding argument. It was a new idea for me. It is this: what the separation of powers does is construct a form of cultural capital, associated with professional lawyers, which is less exchangeable for economic and other forms of capital than in places where the non-independence of the judiciary leads to more regular bribery, graft, and preferential treatment. Because it mediates economic exchanges, this has a massively distortative effect on the ability of economic capital to bulldoze other forms of capital, and the accompanying social structures (and social strictures) that bind it. It also creates a new professional class who can own this kind of capital and thereby accomplish citizenship.

Coda

In this blog post, I’ve suggested that not everybody who, for example, legally has suffrage in a nominally democratic state is, in an effective sense, a citizen. Only capital owners can be citizens.

This is not intended in any way to be a normative statement about who should or should not be a citizen. Rather, it is a descriptive statement about how power is distributed in nominal democracies. To be an effective citizen, you need to have some kind of surplus of social power; capital is the objectification of that social power.

The project of expanding democracy, if it is to be taken seriously, needs to be understood as the project of expanding capital ownership. This can include the redistribution of economic capital. It can also mean changing institutions that ground cultural and social capital in ways that distribute other forms of capital more widely. Diversifying professional roles is one way of doing this.

Nothing I’ve written here is groundbreaking, for sure. It is for me a clearer way to think about these issues than I have had before.