Category: politics

trust issues and the order of law and technology cf @FrankPasquale

I’ve cut to the last chapter of Pasquale’s The Black Box Society, “Towards an Intelligible Society,” because I’m interested in where the argument goes. Now that I’ve gotten through it, I see that the penultimate chapter contains Pasquale’s specific policy recommendations. But since I’m reading not just for policy and framing but also for tone and underlying theoretical commitments, I think it’s worth recording some first impressions before doubling back.

These are some points Pasquale makes in the concluding chapter that I wholeheartedly agree with:

  • A universal basic income would allow more people to engage in high-risk activities such as the arts and entrepreneurship, and more generally would be great for most people.
  • There should be publicly funded options for finance, search, and information services. A great way to provide these would be to fund the development of open source algorithms for finance and search. I’ve been into this idea for so long and it’s great to see a prominent scholar like Pasquale come to its defense.
  • Regulatory capture (or, as he elaborates following Charles Lindblom, “regulatory circularity”) is a problem. Revolving door participation in government and business makes government regulation an unreliable protector of the public interest.

There is quite a bit in the conclusion about the specifics of regulating the finance industry. An impressive amount of knowledge is presented here, and I’ll admit much of it is over my head. I’ll probably have a better sense of it once I read the chapter that is specifically about finance.

There are some things that I found bewildering or off-putting.

For example, there is a section on “Restoring Trust” that argues that an important problem is that we don’t have enough trust in the reputation and search industries. His solution is to increase the penalties that the FTC and FCC can impose on Google and Facebook for, e.g., their privacy violations. The current penalties are too trivial to be an effective deterrent. But, Pasquale argues,

It is a broken enforcement model, and we have black boxes to thank for much of this. People can’t be outraged by what they can’t understand. And without some public concern about the trivial level of penalties for lawbreaking here, there are no consequences for the politicians ultimately responsible for them.

The logic here is a little mad. Pasquale is saying that people are not outraged enough by search and reputation companies to demand harsher penalties, and that this is a problem because people don’t trust these companies enough. The solution is to convince people to trust these companies less (to get outraged by them) in order to get them to punish the companies more.

This is a bit troubling, but makes sense based on Pasquale’s theory of regulatory circularity, which turns politics into a tug-of-war between interests:

The dynamic of circularity teaches us that there is no stable static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by respecting privacy or investing in sustainable growth), or industry is pushing regulators to promote its own interests.

There’s a simplicity to this that I distrust. For one, it suggests that there are no public pressures on industry besides the government, such as consumers’ buying power. A lot of Pasquale’s arguments depend on the monopolistic power of certain tech giants. But while network effects are strong, it’s not clear that consumers have no market leverage at all. In many cases tech giants compete with each other even when it looks like they aren’t. For example, many people have both Facebook and Gmail accounts. Since the two have somewhat redundant functionality, consumers can rather seamlessly allocate their time, which is tied to advertising revenue, according to which service they feel serves them better, or which has the better reputation. So social media (which is a bit like a combination of a search and reputation service) is not a monopoly. Similarly, if people have multiple search options available to them because, say, they have both Siri on their smartphones and can search Google directly, then that provides an alternative search market.

Meanwhile, government officials are also often self-interested. If there is a road to hell for industry that is to provide free web services to people to attain massive scale, then abuse economic lock-in to extract value from customers, then lobby for further rent-seeking, there is a similar road to hell in government. It starts with populist demagoguery, leads to stable government appointment, and then leverages that power for rents in status.

So, power is power. Everybody tries to get power. The question is what you do once you get it, right?

Perhaps I’m reading between the lines too much. Of course, my evaluation of the book should depend most on the concrete policy recommendations which I haven’t gotten to yet. But I find it unfortunate that what seems to be a lot of perfectly sound history and policy analysis is wrapped in a politics of professional identity that I find very counterproductive. The last paragraph of the book is:

Black box services are often wondrous to behold, but our black-box society has become dangerously unstable, unfair, and unproductive. Neither New York quants nor California engineers can deliver a sound economy or a secure society. Those are the tasks of a citizenry, which can perform its job only as well as it understands the stakes.

Implicitly, New York quants and California engineers are not citizens, to Pasquale, a law professor based in Maryland. Do all real citizens live around Washington, DC? Are they all lawyers? If the government were to start providing public information services, either by hosting them themselves or by funding open source alternatives, would he want everyone designing these open algorithms (who would be quants or engineers, I presume) to move to DC? Do citizens really need to understand the stakes in order to get this to happen? When have citizens, en masse, understood anything, really?

Based on what I’ve read so far, The Black Box Society is an expression of a lack of trust in the social and economic power associated with quantification and computing that took off in the past few dot-com booms. Since expressions of distrust of these industries are nothing new, one might wonder (under the influence of Foucault) how the quantified order and the critique of the quantified order manage to coexist and recreate a system of discipline that includes both and maintains its power as a complex of superficially agonistic forces. I give sincere credit to Pasquale for advocating both serious income redistribution and public investment in open technology as ways of disrupting that order. But when he falls into the trap of engendering partisan distrust, he loses my confidence.

organizational secrecy and personal privacy as false dichotomy cf @FrankPasquale

I’ve turned from page 2 to page 3 of The Black Box Society (I can be a slow reader). Pasquale sets up the dichotomy around which the drama of the book hinges like so:

But while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods”, and gag rules, our own lives are increasingly open books. Everything we do online is recorded; the only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

That incongruity is the focus of this book.

This is a rhetorically powerful paragraph and it captures a lot of the trepidation people have about the power of large organizations relative to themselves.

I have been inclined to agree with this perspective for a lot of my life. I used to be the kind of person who thought Everything Should Be Open. Since then, I’ve developed what I think is a more nuanced view of transparency: some secrecy is necessary. It can be especially necessary for powerful organizations and people.

Why? It depends on the physical properties of information. (Here is an example of how a proper understanding of the mechanics of information can support the transcendent project, as opposed to a merely critical project.)

Any time you interact with something or somebody else in a meaningful way, you affect each other’s state in probabilistic space. That means there has been some kind of flow of information. If an organization interacts with a lot of people, it is going to absorb information about a lot of people. Recording this information as ‘data’ has been done for a long time, because that is what allows organizations to do intelligent things vis-à-vis the people they interact with. So businesses, financial institutions, and governments recording information about people is nothing new.

Pasquale suggests that this recording is a threat to our privacy, and that the secrecy of the organizations that do the recording gives them power over us. But this is surely a false dichotomy. Why? Because if an organization records information about a lot of people, and then doesn’t maintain some kind of secrecy, then that information is no longer private! To, like, everybody else. In other words, maintaining secrecy is one way of ensuring confidentiality, which is surely an important part of privacy.

I wonder what happens if we continue to read The Black Box Society with this link between secrecy, confidentiality, and privacy in mind.

Is the opacity of governance natural? cf @FrankPasquale

I’ve begun reading Frank Pasquale’s The Black Box Society on the recommendation that it’s a good place to start if I’m looking to focus a defense of the role of algorithms in governance.

I’ve barely started and already found lots of juicy material. For example:

Gaps in knowledge, putative and real, have powerful implications, as do the uses that are made of them. Alan Greenspan, once the most powerful central banker in the world, claimed that today’s markets are driven by an “unredeemably opaque” version of Adam Smith’s “invisible hand,” and that no one (including regulators) can ever get “more than a glimpse at the internal workings of the simplest of modern financial systems.” If this is true, libertarian policy would seem to be the only reasonable response. Friedrich von Hayek, a preeminent theorist of laissez-faire, called the “knowledge problem” an insuperable barrier to benevolent government intervention in the economy.

But what if the “knowledge problem” is not an intrinsic aspect of the market, but rather is deliberately encouraged by certain businesses? What if financiers keep their doings opaque on purpose, precisely to avoid and confound regulation? That would imply something very different about the merits of deregulation.

The challenge of the “knowledge problem” is just one example of a general truth: What we do and don’t know about the social (as opposed to the natural) world is not inherent in its nature, but is itself a function of social constructs. Much of what we can find out about companies, governments, or even one another, is governed by law. Laws of privacy, trade secrecy, the so-called Freedom of Information Act–all set limits to inquiry. They rule certain investigations out of the question before they can even begin. We need to ask: To whose benefit?

There are a lot of ideas here. Trying to break them down:

  1. Markets are opaque.
  2. If markets are naturally opaque, that is a reason for libertarian policy.
  3. If markets are not naturally opaque but are instead made opaque on purpose, then that’s a reason to regulate in favor of transparency.
  4. As a general social truth, the social world is not naturally opaque but rather opaque or transparent because of social constructs such as law.

We are meant to conclude that markets should be regulated for transparency.

The most interesting claim to me is what I’ve listed as the fourth one, as it conveys a worldview that is both disputable and which carries with it the professional biases we would expect of the author, a Professor of Law. While there are certainly many respects in which this claim is true, I don’t yet believe it has the force necessary to carry the whole logic of this argument. I will be particularly attentive to this point as I read on.

The danger I’m on the lookout for is one where the complexity of the integration of society, which following Beniger I believe to be a natural phenomenon, is treated as a politically motivated social construct and therefore something that should be changed. It is really only the part after the “and therefore” which I’m contesting. It is possible for politically motivated social constructs to be natural phenomena. All institutions have winners and losers relative to their power. Who would a change in policy towards transparency in the market benefit? If opacity is natural, it would shift the opacity to some other part of society, empowering a different group of people. (Possibly lawyers).

If opacity is necessary, then perhaps we could read The Black Box Society as an expression of the general problem of alienation. It is way premature for me to attribute this motivation to Pasquale, but it is a guiding hypothesis that I will bring with me as I read the book.

cross-cultural links between rebellion and alienation

In my last post I noted that the contemporary American problem that the legitimacy of the state is called into question by distributional inequality is a specifically liberal concern based on certain assumptions about society: that it is a free association of producers who are otherwise autonomous.

Looking back to Arendt, we can find the roots of modern liberalism in the polis of antiquity, where democracy was based on the free association of landholding men whose estates gave them autonomy from each other. Since then, economics, the science that once concerned itself with managing the household (oikos, house + nomos, managing), has been elevated to the primary concern of the state and the organizational principle of society. One way to see the conflict between liberalism and social inequality is as the tension between the ideal of freely associating citizens who together accomplish deeds and the reality of societal integration, with its impositions on personal freedom and its unequal functional differentiation.

Historically, material autonomy was a condition for citizenship. The promise of liberalism is universal citizenship, or political agency. At first blush, to accomplish this, either material autonomy must be guaranteed for all, or citizenship must be decoupled from material conditions altogether.

The problem with this model is that societal agency, as opposed to political agency, is always conditioned both materially and by society (Does this distinction need to be made?). The progressive political drive has recognized this with its unmasking and contestation of social privilege. The populist right wing political drive has recognized this with its accusations that the formal political apparatus has been captured by elite politicians. Those aspects of citizenship that are guaranteed as universal–the vote and certain liberties–are insufficient for the effective social agency on which political power truly depends. And everybody knows it.

This narrative is grounded in the experience of the United States and, going back, in the history of “The West”. It appears to be a perennial problem over cultural time. There is some evidence that it is also a problem across cultural space. Hannah Arendt argues in On Violence (1969) that the attraction of using violence against a ruling bureaucracy (which is a political hypostatization of societal alienation more generally) is cross-cultural.

“[T]he greater the bureaucratization of public life, the greater will be the attraction of violence. In a fully developed bureaucracy there is nobody left with whom one can argue, to whom one can present grievances, on whom the pressures of power can be exerted. Bureaucracy is the form of government in which everybody is deprived of political freedom, of the power to act; for the rule by Nobody is not no-rule, and where all are equally powerless we have tyranny without a tyrant. The crucial feature of the student rebellions around the world is that they are directed everywhere against the ruling bureaucracy. This explains what at first glance seems so disturbing–that the rebellions in the East demand precisely those freedoms of speech and thought that the young rebels in the West say they despise as irrelevant. On the level of ideologies, the whole thing is confusing: it is much less so if we start from the obvious fact that the huge party machines have succeeded everywhere in overruling the voice of citizens, even in countries where freedom of speech and association is still intact.”

The argument here is that the moral instability resulting from alienation from politics and society is a universal problem of modernity that transcends ideology.

This is a big problem if we keep turning over decision-making authority over to algorithms.

We need more Sittlichkeit: Vallier on Piketty and Rawls; Cyril on Surveillance and Democracy; Taylor on Hegel

Kevin Vallier’s critique of Piketty in Bleeding Heart Libertarians (funny name) is mainly a criticism of the idea that economic inequality leads to political instability.

In the course of his rebuttal of Piketty, he brings in some interesting Rawlsian theory which is more broadly important. He distinguishes between “power stability”, the stability of a state that maintains itself by forcibly preventing resistance through Hobbesian power, and “inherent stability”, or moral stability (Vallier’s term), which is “stability for the right reasons”: the stability that comes from the state’s comportment with our sense of justice.

There are lots of other ways of saying the same thing in the literature. We can ask whether justice is de facto or de jure. We can distinguish, as Hannah Arendt does in On Violence, between power (which she maintains is only what is rooted in collective action) and violence (which is, I guess, what Vallier would call ‘Hobbesian power’). In a perhaps more subtle move, we can ask with Habermas what legitimizes the power of the state.

The left-wing zeitgeist at the moment is emphasizing inequality as a problem. While Piketty argues that inequality leads to instability, it’s an open question whether this is in fact the case. There’s no particular reason why a Hobbesian sovereign with swarms of killer drones couldn’t maintain its despotic rule through violence. Probably the real cause for complaint is that this is illegitimate power (if you’re Habermas), or violence not power (if you’re Arendt), or moral instability (if you’re Rawls).

That makes sense. Illegitimate power is the kind of power that one would complain about.

Ok, so now cut to Malkia Cyril’s talk at CFP tying technological surveillance to racism. What better illustration of the problems of inequality in the United States than the history of racist policies towards black people? Cyril acknowledges the benefits of Internet technology in providing tools for activists but suspects that now technology will be used by people in power to maintain power for the sake of profit.

The fourth amendment, for us, is not and has never been about privacy, per se. It’s about sovereignty. It’s about power. It’s about democracy. It’s about the historic and present day overreach of governments and corporations into our lives, in order to facilitate discrimination and disadvantage for the purposes of control; for profit. Privacy, per se, is not the fight we are called to. We are called to this question of defending real democracy, not to this distinction between mass surveillance and targeted surveillance.

So there’s a clear problem for Cyril which is that ‘real democracy’ is threatened by technical invasions of privacy. A lot of this is tied to the problem of who owns the technical infrastructure. “I believe in the Internet. But I don’t control it. Someone else does. We need a new civil rights act for the era of big data, and we need it now.” And later:

Last year, New York City Police Commissioner Bill Bratton said 2015 would be the year of technology for law enforcement. And indeed, it has been. Predictive policing has taken hold as the big brother of broken windows policing. Total information awareness has become the goal. Across the country, local police departments are working with federal law enforcement agencies to use advanced technological tools and data analysis to “pre-empt crime”. I have never seen anyone able to pre-empt crime, but I appreciate the arrogance that suggests you can tell the future in that way. I wish, instead, technologists would attempt to pre-empt poverty. Instead, algorithms. Instead, automation. In the name of community safety and national security we are now relying on algorithms to mete out sentences, determine city budgets, and automate public decision-making without any public input. That sounds familiar too. It sounds like Black codes. Like Jim Crow. Like 1963.

My head hurts a little as I read this because while the rhetoric is powerful, the logic is loose. Of course you can do better or worse at preempting crime. You can look at past statistics on crime and extrapolate to the future. Maybe that’s hard but you could do it in worse or better ways. A great way to do that would be, as Cyril suggests, by preempting poverty–which some people try to do, and which can be assisted by algorithmic decision-making. There’s nothing strictly speaking racist about relying on algorithms to make decisions.
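To make the point about “worse or better ways” of extrapolating concrete, here is a minimal sketch with entirely made-up numbers (not any real crime data or policing model): a naive forecaster just repeats last year’s count, while a slightly better one fits a least-squares trend line to past yearly counts and projects one period ahead.

```python
# Toy illustration of extrapolating from past statistics.
# The numbers are hypothetical; this is not a real predictive model.

def linear_trend_forecast(history):
    """Fit a least-squares line to a sequence of counts and project one step ahead."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # projected value for the next period

past_counts = [120, 115, 112, 108, 103]  # hypothetical yearly incident counts

naive = past_counts[-1]                       # "worse" way: repeat last year
trend = linear_trend_forecast(past_counts)    # "better" way: follow the trend
print(naive, round(trend, 1))                 # the trend forecast continues the decline
```

The same trivial machinery could just as easily be pointed at poverty indicators, which is part of what makes Cyril’s “pre-empt poverty” line land.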

So for all that I want to support Cyril’s call for ‘civil rights act for the era of big data’, I can’t figure out from the rhetoric what that would involve or what its intellectual foundations would be.

Maybe there are two kinds of problems here:

  1. A problem of outcome legitimacy. Inequality, for example, might be an outcome that leads to a moral case against the power of the state.
  2. A problem of procedural legitimacy. When people are excluded from the decision-making processes that affect their lives, they may find that to be grounds for a moral objection to state power.

It’s worth making a distinction between these two problems even though they are related. If procedures are opaque and outcomes are unequal, there will naturally be resentment of the procedures and the suspicion that they are discriminatory.

We might ask: what would happen if procedures were transparent and outcomes were still unequal? What would happen if procedures were opaque and outcomes were fair?

One last point…I’ve been dipping into Charles Taylor’s analysis of Hegel because…shouldn’t everybody be studying Hegel? Taylor maintains that Hegel’s political philosophy in The Philosophy of Right (which I’ve never read) is still relevant today despite Hegel’s inability to predict the future of liberal democracy, let alone the future of his native Prussia (which is apparently something of a pain point for Hegel scholars).

Hegel, or maybe Taylor in a creative reinterpretation of Hegel, anticipates the problem of liberal democracy of maintaining the loyalty of its citizens. I can’t really do justice to Taylor’s analysis so I will repeat verbatim with my comments in square brackets.

[Hegel] did not think such a society [of free and interchangeable individuals] was viable, that is, it could not command the loyalty, the minimum degree of discipline and acceptance of its ground rules, it could not generate the agreement on fundamentals necessary to carry on. [N.B.: Hegel conflates power stability and moral stability] In this he was not entirely wrong. For in fact the loyal co-operation which modern societies have been able to command of their members has not been mainly a function of the liberty, equality, and popular rule they have incorporated. [N.B. This is a rejection of the idea that outcome and procedural legitimacy are in fact what leads to moral stability.] It has been an underlying belief of the liberal tradition that it was enough to satisfy these principles in order to gain men’s allegiance. But in fact, where they are not partly ‘coasting’ on traditional allegiance, liberal, as all other, modern societies have relied on other forces to keep them together.

The most important of these is, of course, nationalism. Secondly, the ideologies of mobilization have played an important role in some societies, focussing men’s attention and loyalties through the unprecedented future, the building of which is the justification of all present structures (especially that ubiquitous institution, the party).

But thirdly, liberal societies have had their own ‘mythology’, in the sense of a conception of human life and purposes which is expressed in and legitimizes its structures and practices. Contrary to widespread liberal myth, it has not relied on the ‘goods’ it could deliver, be they liberty, equality, or property, to maintain its members loyalty. The belief that this was coming to be so underlay the notion of the ‘end of ideology’ which was fashionable in the fifties.

But in fact what looked like an end of ideology was only a short period of unchallenged reign of a central ideology of liberalism.

This is a lot, but bear with me. What this is leading up to is an analysis of social cohesion in terms of what Hegel called Sittlichkeit, “ethical life” or “ethical order”. I gather that Sittlichkeit is not unlike what we’d call an ideology or worldview in other contexts. But a Sittlichkeit is better than mere ideology, because Sittlichkeit is a view of ethically ordered society and is therefore somehow incompatible with the liberal atomization of the self, which of course is the root of alienation under liberal capitalism.

A liberal society which is a going concern has a Sittlichkeit of its own, although paradoxically this is grounded on a vision of things which denies the need for Sittlichkeit and portrays the ideal society as created and sustained by the will of its members. Liberal societies, in other words, are lucky when they do not live up, in this respect, to their own specifications.

If these common meanings fail, then the foundations of liberal society are in danger. And this indeed seems a distinct possibility today. The problem of recovering Sittlichkeit, of reforming a set of institutions and practices with which men can identify, is with us in an acute way in the apathy and alienation of modern society. For instance the central institutions of representative government are challenged by a growing sense that the individual’s vote has no significance. [c.f. Cyril’s rhetoric of alienation from algorithmic decision-making.]

But then it should not surprise us to find this phenomenon of electoral indifference referred to in [The Philosophy of Right]. For in fact the problem of alienation and the recovery of Sittlichkeit is a central one in Hegel’s theory and any age in which it is on the agenda is one to which Hegel’s thought is bound to be relevant. Not that Hegel’s particular solutions are of any interest today. But rather that his grasp of the relations of man to society–of identity and alienation, of differentiation and partial communities–and their evolution through history, gives us an important part of the language we sorely need to come to grips with this problem in our time.

Charles Taylor wrote all this in 1975. I’d argue that this problem of establishing ethical order to legitimize state power despite alienation from procedure is a perennial one. That the burden of political judgment has been placed most recently on the technology of decision-making is a function of the automation of bureaucratic control (see Beniger) and, it’s awkward to admit, my own disciplinary bias. In particular it seems like what we need is a Sittlichkeit that deals adequately with the causes of inequality in society, which seem poorly understood.

Bostrom and Habermas: technical and political moralities, and the God’s eye view

An intriguing chapter that follows naturally from Nick Bostrom’s core argument is his discussion of machine ethics writ large. He asks: suppose one could install into an omnipotent machine ethical principles, trusting it with the future of humanity. What principles should we install?

What Bostrom accomplishes by positing his Superintelligence (which begins with something simply smarter than humans, and evolves over the course of the book into something that takes over the galaxy) is a return to what has been called “the God’s eye view”. Philosophers once attempted to define truth and morality according to the perspective of an omnipotent–often both transcendent and immanent–god. Through the scope of his work, Bostrom has recovered some of these old themes. He does this not only through his discussion of Superintelligence (and his positing of its existence in other solar systems already) but also through his simulation arguments.

The way I see it, one thing I am doing by challenging the idea of an intelligence explosion and its resulting in a superintelligent singleton is problematizing this recovery of the God’s Eye view. If your future world is governed by many sovereign intelligent systems instead of just one, then ethics are something that have to emerge from political reality. There is something irreducibly difficult about interacting with other intelligences and it’s from this difficulty that we get values, not the other way around. This sort of thinking is much more like Habermas’s mature ethical philosophy.

I’ve written about how to apply Habermas to the design of networked publics that mediate political interactions between citizens. What I built and offer as toy example in that paper, @TheTweetserve, is simplistic but intended just as a proof of concept.

As I continue to read Bostrom, I expect a convergence on principles. “Coherent extrapolated volition” sounds a lot like a democratic governance structure with elected experts at first pass. The question of how to design a governance structure or institution that leverages artificial intelligence appropriately while legitimately serving its users motivates my dissertation research. My research so far has only scratched the surface of this problem.

the state and the household in Chinese antiquity

It’s worthwhile in comparison with Arendt’s discussion of Athenian democracy to consider the ancient Chinese alternative. In Alfred Huang’s commentary on the I Ching, we find this passage:

The ancient sages always applied the principle of managing a household to governing a country. In their view, a country was simply a big household. With the spirit of sincerity and mutual love, one is able to create a harmonious situation anywhere, in any circumstance. In his Analects, Confucius says,

From the loving example of one household,
A whole state becomes loving.
From the courteous manner of one household,
A whole state becomes courteous.

Comparing the history of Europe and the rise of capitalistic bureaucracy with the history of China, where bureaucracy is much older, is interesting. I have comparatively little knowledge of the latter, but it is often said that China does not have the same emphasis on individualism that you find in the West. Security is considered much more important than Freedom.

The reminder that the democratic values proposed by Arendt and Horkheimer are culturally situated is an important one, especially as Horkheimer claims that free burghers are capable of producing art that expresses universal needs.

developing a nuanced view on transparency

I’m a little late to the party, but I think I may at last be developing a nuanced view on transparency. This is a personal breakthrough about the importance of privacy that I owe largely to the education I’m getting at Berkeley’s School of Information.

When I was an undergrad, I was a student activist around campaign finance reform. Money in politics was the root of all evil. We were told by our older, wiser activist mentors that we were supposed to lay the groundwork for our policy recommendation and then wait for journalists to expose a scandal. That way we could move in with reforms.

Then I worked on projects involving open source, open government, open data, open science, etc. The goal of those activities is to make things more open/transparent.

My ideas about transparency as a political, organizational, and personal issue originated in those experiences with those movements and tactics.

There is a “radically open” wing of these movements which thinks that everything should be open. This has been debunked. The primary way to debunk this is to point out that less privileged groups often need privacy for reasons that more privileged advocates of openness have trouble understanding. Classic cases of this include women who are trying to evade stalkers.

This has been expanded to a general critique of “big data” practices. Data is collected from people who are less powerful than people that process that data and act on it. There has been a call to make the data processing practices more transparent to prevent discrimination.

A conclusion I have found it easy to draw until relatively recently is: ok, this is not so hard. What’s important is that we guarantee privacy for those with less power, and enforce transparency on those with more power so that they can be held accountable. Let’s call this “openness for accountability.” Proponents of this view are in my opinion very well-intended, motivated by values like justice, democracy, and equity. This tends to be the perspective of many journalists and open government types especially.

Openness for accountability is not a nuanced view on transparency.

Here are some examples of cases where an openness for accountability view can go wrong:

  • Arguably, the “Gawker Stalker” platform for reporting the location of celebrities was justified by an ‘openness for accountability’ logic. Jimmy Kimmel’s browbeating of Emily Gould indicates how this can be a problem. Celebrity status is a form of power, but it also raises one’s level of risk, because a small percentage of the population will, for unfathomable reasons, threaten and even attack people. There is a vicious cycle here. If one is perceived to be powerful, then people will feel more comfortable exposing and attacking that person, which increases their celebrity, which in turn increases their perceived power.
  • There are good reasons to be concerned about stereotypes and representation of underprivileged groups. There are also cases where members of those groups do things that conform to those stereotypes. When these are behaviors that are ethically questionable or manipulative, it’s often important organizationally for somebody to know about them and act on them. But transparency about that information would feed the stereotypes that are being socially combated on a larger scale for equity reasons.
  • Members of powerful groups can have aesthetic taste and senses of humor that are offensive or even triggering to less powerful groups. More generally, different social groups will have different and sometimes mutually offensive senses of humor. A certain amount of public effort goes into regulating “good taste” and that is fine. But also, as is well known, art that is in good taste is often bland and fails to probe the depths of the human condition. Understanding the depths of the human condition is important for everybody but especially for powerful people who have to take more responsibility for other humans.
  • This one is based on anecdotal information from a close friend: one reason why Congress is so dysfunctional now is that it is so much more transparent. That transparency means that politicians have to be more wary of how they act so that they don’t alienate their constituencies. But bipartisan negotiation is exactly the sort of thing that alienates partisan constituencies.

If you asked me maybe two years ago, I wouldn’t have been able to come up with these cases. That was partly because of my positionality in society. Though I am a very privileged man, I still perceived myself as an outsider to important systems of power. I wanted to know more about what was going on inside important organizations and was frustrated by my lack of access to it. I was very idealistic about wanting a more fair society.

Now I am getting older, reading more, experiencing more. As I mature, people are trusting me with more sensitive information, and I am beginning to anticipate the kinds of positions I may have later in my career. I have begun to see how my best intentions for making the world a better place are at odds with my earlier belief in openness for accountability.

I’m not sure what to do with this realization. I put a lot of thought into my political beliefs and for a long time they have been oriented around ideas of transparency, openness, and equity. Now I’m starting to see the social necessity of power that maintains its privacy, unaccountable to the public. I’m starting to see how “Public Relations” is important work. A lot of what I had a kneejerk reaction against now makes more sense.

I am in many ways a slow learner. These ideas are not meant to impress anybody. I’m not a privacy scholar or expert. I expect these thoughts are obvious to those with less of an ideological background in this sort of thing. I’m writing this here because I see my current role as a graduate student as participating in the education system. Education requires a certain amount of openness because you can’t learn unless you have access to information and people who are willing to teach you from their experience, especially their mistakes and revisions.

I am also perhaps writing this now because, who knows, maybe one day I will be an unaccountable, secretive, powerful old man. Nobody would believe me if I said all this then.

The Facebook ethics problem is a political problem

So much has been said about the Facebook emotion contagion experiment. Perhaps everything has been said.

The problem with everything having been said is that, by and large, people’s ethical stances seem predetermined by their habitus.

By which I mean: most people don’t really care. People who care about what happens on the Internet care about it in whatever way is determined by their professional orientation on that matter. Obviously, some groups of people benefit from there being fewer socially imposed ethical restrictions on data scientific practice, either in an industrial or academic context. Others benefit from imposing those ethical restrictions, or cultivating public outrage on the matter.

If this is an ethical issue, what system of ethics are we prepared to use to evaluate it?

You could make an argument from, say, a utilitarian perspective, or a deontological perspective, or even a virtue ethics standpoint. Those are classic moves.

But nobody will listen to what a professionalized academic ethicist will say on the matter. If there’s anybody who does rigorous work on this, it’s probably somebody like Luciano Floridi. His work is great, in my opinion. But I haven’t found any other academics who work in, say, policy that embrace his thinking. I’d love to be proven wrong.

But since Floridi does serious work on information ethics, his work is mainly an inconvenience to pundits. Instead we get heat, not light.

If this process resolves into anything like policy change–either governmental or internally at Facebook–it will be because of a process of agonistic politics. “Agonistic” here means fraught with conflicted interests. It may be redundant to modify ‘politics’ with ‘agonistic’, but it makes the point that the moves being made are strategic actions, aimed at gain for one’s person or group, more than they are communicative ones, aimed at consensus.

Because e.g. Facebook keeps public discussion fragmented through its EdgeRank algorithm, which even in its well-documented public version is full of apparent political consequences and flaws, there is no way for conversation within the Facebook platform to result in consensus. It is not, as has been observed by others, a public. In a trivial sense, it’s not a public because the data isn’t public. The data is (sort of) private. That’s not a bad thing. It just means that Facebook shouldn’t be where you go to develop a political consensus that could legitimize power.
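For concreteness, the publicly documented version of EdgeRank scored a story by summing, over its edges (likes, comments, etc.), the product of user affinity, edge-type weight, and time decay. The sketch below is a minimal illustration of that public description; the specific decay function and numbers are my assumptions, not Facebook’s actual implementation.

```python
def edgerank_score(edges, now):
    """Score a story as in the publicly described EdgeRank:
    sum over its edges of affinity * edge-type weight * time decay.
    The reciprocal decay used here is an illustrative assumption;
    the real decay function was never published."""
    return sum(
        e["affinity"] * e["weight"] / (1.0 + (now - e["time"]))
        for e in edges
    )

# Hypothetical edges: a fresh comment from a close friend vs. an old
# like from a weak tie. All numbers are made up for illustration.
fresh_comment = [{"affinity": 0.9, "weight": 4.0, "time": 9.0}]
stale_like = [{"affinity": 0.1, "weight": 1.0, "time": 0.0}]
```

Even this toy version makes the political point visible: what counts as “relevant” is a weighted editorial judgment baked into parameters nobody outside the company can inspect or contest.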

Twitter is a little better for this, because it’s actually public. But Facebook has zero reason to care about the public consensus of people on Twitter, because those people can only reach other Twitter users and so won’t organize a consumer boycott of Facebook.

Facebook is a great–perhaps the greatest–example of what Habermas calls the steering media. “Steering,” because it’s how powerful entities steer public opinion. For Habermas, the steering media control language and therefore culture. When ‘mass’ media control language, citizens no longer use language to form collective will.

For individualized ‘social’ media that is arranged into filter bubbles through relevance algorithms, language is similarly controlled. But rather than having just a single commanding voice, you have the opportunity for every voice to be expressed at once. Through homophily effects in network formation, what you’d expect to see are very intense clusters of extreme cultures that see themselves as ‘normal’ and don’t interact outside of their bubble.
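The homophily effect described above can be illustrated with a toy network-formation model: if each new link is made preferentially to a same-culture node, within-group clustering rises quickly. This is a sketch under assumed parameters, not any platform’s actual model.

```python
import random

def homophilous_graph(n, p_same, seed=0):
    """Toy model: each node has one of two 'cultures'. With probability
    p_same it links to a same-culture node; otherwise to a uniformly
    random node. Purely illustrative assumptions throughout."""
    rng = random.Random(seed)
    culture = [rng.randrange(2) for _ in range(n)]
    edges = []
    for u in range(n):
        same = [v for v in range(n) if v != u and culture[v] == culture[u]]
        if same and rng.random() < p_same:
            edges.append((u, rng.choice(same)))
        else:
            edges.append((u, rng.choice([v for v in range(n) if v != u])))
    return culture, edges

def within_group_fraction(culture, edges):
    """Fraction of edges connecting nodes of the same culture."""
    return sum(culture[u] == culture[v] for u, v in edges) / len(edges)
```

Running this with a high `p_same` produces the dense same-culture clusters the paragraph describes: each bubble sees mostly itself and comes to regard its own norms as ‘normal’.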

The irony is that the critical left, who should be making these sorts of observations, is itself a bubble within this system of bubbles. Since critical leftism is enacted in commercialized social media which evolves around it, it becomes recuperated in the Situationist sense. Critical outrage is tapped for advertising revenue, which spurs more critical outrage.

The dependence of contemporary criticality on commercial social media for its own diffusion means that, ironically, its members are unable to just quit Facebook like everyone else who has figured out how much Facebook sucks.

It’s not a secret that decentralized communication systems are the solution to this sort of thing. Stanford’s Liberation Tech group captures this ideology rather well. There’s a lot of good work on censorship-resistant systems, distributed messaging systems, etc. For people who are citizens in the free world, many of these alternative communication platforms, where we are spared from algorithmic control, are very old. Some people still use IRC for chat. I’m a huge fan of mailing lists, myself. Email is the original online social media, and one’s inbox is one’s domain. Everyone who is posting their stuff to Facebook could be posting to a WordPress blog. WordPress, by the way, has a lovely user interface these days and keeps adding “social” features like “liking” and “following”. This goes largely unnoticed, which is too bad, because Automattic, the company that runs WordPress, is really not evil at all.

So there are plenty of solutions to Facebook being manipulative and bad for democracy. Those solutions involve getting people off of Facebook and onto alternative platforms. That’s what a consumer boycott is. That’s how you get companies to stop doing bad stuff, if you don’t have regulatory power.

Obviously the real problem is that we don’t have a less politically problematic technology that does everything we want Facebook to do, only without the bad stuff. There are a lot of unsolved technical challenges in getting that to work. I think I wrote a social media think piece about this once.

I think a really cool project that everybody who cares about this should be working on is designing and executing on building that alternative to Facebook. That’s a huge project. But just think about how great it would be if we could figure out how to fund, design, build, and market that. These are the big questions for political praxis in the 21st century.
