Digifesto

Category: academia

education and intelligibility

I’ve put my finger on the problem I’ve had with scholarly discourse about intelligibility over the years.

It is so simple, really.

Sometimes, some group of scholars, A, will argue that the work of another group of scholars, B, is unintelligible. Because it is unintelligible, it should not be trusted. Rather, it has to be held accountable to the scholars in A.

Typically, the scholars in B are engaged in some technical science, while the scholars in A are writers.

Scholars in B meanwhile say: well, if you want to understand what we do, then you could always take some courses in it. Here (in the modern day): we’ve made an on-line course which you can take if you want to understand what we do.

The existence of the on-line course, or whatever other resources express the knowledge of B, tends not to impress those in A. If A is persistent, they will come up with reasons why these resources are insufficient, or why there are barriers to people in A making proper use of them.

But ultimately, what A is doing is demanding that B make itself understood. What B is offering is education. And though some people are averse to the idea that some things are just inherently hard to understand, this is a minority opinion that is rarely held by, for example, those who have undergone arduous training in B.

Generally speaking, if everybody were educated in B, then there wouldn’t be so much of a reason for demanding its intelligibility. Education, not intelligibility, seems to be the social outcome we would really like here. Naturally, only people in B will really understand how to educate others in B; this leaves those in A with little to say except to demand, as a stopgap, intelligibility.

But what if the only way for A to truly understand B is for A to be educated by B? Or to educate itself in something essentially equivalent to B?


Reason returns to Berkeley

I’ve been struck recently by a subtle shift in messaging at UC Berkeley since Carol T. Christ became the university’s Chancellor. Incidentally, she is the first woman to serve as the university’s chancellor, and her research background is in Victorian literature. I think both of these things may have something to do with the bold choice she’s made in recent announcements: the inclusion of reason among the University’s core values.

Notably, the word has made its appearance next to three other terms that have had much more prominence in the university in recent years: equity, inclusion, and diversity. For example, in the following statements:

In “Thoughts on Charlottesville”:

We must now come together to oppose what are dangerous threats to the values we hold dear as a democracy and as a nation. Our shared belief in reason, diversity, equity, and inclusion is what animates and supports our campus community and the University’s academic mission. Now, more than ever, those values are under assault; together we must rise to their defense.

And, strikingly, this message on “Free Speech”:

Nonetheless, defending the right of free speech for those whose ideas we find offensive is not easy. It often conflicts with the values we hold as a community—tolerance, inclusion, reason and diversity. Some constitutionally-protected speech attacks the very identity of particular groups of individuals in ways that are deeply hurtful. However, the right response is not the heckler’s veto, or what some call platform denial. Call toxic speech out for what it is, don’t shout it down, for in shouting it down, you collude in the narrative that universities are not open to all speech. Respond to hate speech with more speech.

The above paragraph comes soon after this one, in which Chancellor Christ defends Free Speech on Millian philosophical grounds:

The philosophical justification underlying free speech, most powerfully articulated by John Stuart Mill in his book On Liberty, rests on two basic assumptions. The first is that truth is of such power that it will always ultimately prevail; any abridgement of argument therefore compromises the opportunity of exchanging error for truth. The second is an extreme skepticism about the right of any authority to determine which opinions are noxious or abhorrent. Once you embark on the path to censorship, you make your own speech vulnerable to it.

This slight change in messaging strikes me as fundamentally wise. In the past year, the university has been wracked by extreme passions and conflicting interests, resulting in bad press externally and, I imagine, discomfort internally. But this was not unprecedented; the national political bifurcation could take hold at Berkeley precisely because it had for years been, with every noble intention, emphasizing inclusivity and equity without elevating a binding agent that makes diversity meaningful and productive. This was partly due to the influence of late-20th-century intellectual trends that burdened “reason” with the historical legacy of those regimes that upheld it as a virtue, which tended to be white and male. There was a time when “reason” was so associated with these powers that the term was used for the purposes of exclusion, i.e. with the claim that new entrants to political and intellectual power were being “unreasonable”.

Times have changed precisely because the exclusionary use of “reason” was a corrupt one; reason in its true sense is impersonal and transcends individual situation even as it is immanent in it. This meaning of reason would be familiar to one steeped in an older literature.

Carol Christ’s wording reflects a 21st-century theme that gives me profound confidence in Berkeley’s future: the recognition that reason does not oppose inclusion, but rather demands it, just as scientific logic demands properly sampled data. Perhaps the new zeitgeist at Berkeley has something to do with the new Data Science undergraduate curriculum. Given the state of the world, I’m proud to see reason make a comeback.

Differing ethnographic accounts of the effectiveness of technology

I’m curious as I compare two recent papers, one by Christin (2017) and one by Levy (2015), both about the role of technology in society and backed by ethnographic data.

What interests me is that the two papers both examine the use of algorithms in practice, but they differ in their account of the effectiveness of the algorithms used. Christin emphasizes the way web journalists and legal professionals deliberately undermine the impact of algorithms. Levy discusses how electronic monitoring achieves central organizational control over truckers.

I’m interested in the different framings because, as Christin points out, a central point of contention in the critical scholarship around data and algorithms is the effectiveness of the technology, especially “in practice”. Implicitly if not explicitly, if the technology is not as effective as its advocates say it is, then it is overhyped and this debunking is an accomplishment of the critical and often ethnographic field.

On the other hand, if the technology is effective at control, as Levy’s article argues that it is, then it poses a much more real managerialist threat to workers’ autonomy. Identifying that this is occurring is also a serious accomplishment of the ethnographic field.

What must be recognized, however, is that these two positions contradict each other, at least as general perspectives on data-collection and algorithmic decision-making. The use of a particular technology in a particular place cannot be both so ineffective as to be overhyped and so effective as to constitute a managerialist threat. The substance of the two critiques is at odds with each other, and they call for different pragmatic responses. The former suggests a rhetorical strategy of further debunking, the latter demands a material strategy of changing working conditions.

I have seen both strategies used in critical scholarship, sometimes even in the same article, chapter, or book. I have never seen critical scholars attempt to resolve this difference between themselves using their shared assumptions and methods. I’d like to see more resolution in the ethnographic field on this point.

Correction, 8/10/17:

The apparent tension is resolved on a closer reading of Christin (2017). The argument there is that technology (in the managerialist use common to both papers) is ineffective when its intended use is resisted by those being managed by it.

That shifts the ethnographic challenge to technology away from an attack on the technical quality of the work, i.e. its capacity to accomplish what it is designed to do (which is a non-starter), and toward the uncontroversial proposition that the effectiveness of technology depends in part on assumptions about how it will be used, and that these assumptions can be violated.

The political question of to what extent these new technologies should be adopted can then be addressed straightforwardly in terms of whether they are fully and properly adopted or only partially and improperly adopted. Using language like this would be helpful in bridging the technical and ethnographic fields.

References

Christin, 2017. “Algorithms in practice: Comparing journalism and criminal justice.” (link)

Levy, 2015. “The Contexts of Control: Information, Power, and Truck-Driving Work.” (link)

Habermas seems quaint right now, but shouldn’t

By chance I was looking up Habermas’s later philosophical work today, like Between Facts and Norms (1992), which has been said to be the culmination of the project he began with The Structural Transformation of the Public Sphere in 1962. In it, he argues that the law is what gives pluralistic states their legitimacy, because the law enshrines the consent of the governed. Power cannot legitimize itself; democratic law is the foundation for the legitimate state.

Habermas’s later work is widely respected in the European Union, which by and large has functioning pluralistic democratic states. Habermas emerged from the Frankfurt School to become a theorist of modern liberalism and was good at it. While it is an empirical question how much education in political theory is tied to the legitimacy and stability of the state, anecdotally we can say that Habermas is a successful theorist and the German-led European Union is, presently, a successful government. For the purposes of this post, let’s assume that this is at least in part due to the fact that citizens are convinced, through the education system, of the legitimacy of their form of government.

In the United States, something different happened. Habermas’s earlier work (such as The Structural Transformation of the Public Sphere) was introduced to United States intellectuals through a critical lens. Craig Calhoun, for example, argued in 1992 that the politics of identity was more relevant or significant than the politics of deliberation and democratic consensus.

That was over 25 years ago, and that moment was influential in the way political thought has unfolded in Europe and the United States. In my experience, it is very difficult to find support in academia for the view that rational consensus around democratic institutions is a worthwhile thing to study or advocate for. Identity politics and the endless contest of perspectives is much more popular among students and scholars coming out of places like UC Berkeley. In my own department, students were encouraged to read Habermas’s early work in the context of the identity politics critique, but never exposed to the later work that reacted to these critiques constructively to build a theory that was specifically about pluralism, which is what political identities need in order to unify as a legitimate state. There’s a sense in which the whole idea that one should continue a philosophical argument to the point of constructive agreement, despite the hard work and discipline that this demands, was abandoned in favor of an ideology of intellectual diversity that discouraged scrutiny and rigor across boundaries of identity, even in the narrow sense of professional or disciplinary identity.

The problem with this approach to intellectualism is that it is fractious and undermines itself. When these qualities are taken as intellectual virtues, it is no wonder that boorish overconfidence can take advantage of them in an open contest. And indeed the political class in the United States today has been undermined by its inability to justify its own power and institutions in anything but the fragmented arguments of identity politics.

It is a sad state of affairs. I can’t help but feel my generation is intellectually ill-equipped to respond to the very prominent challenges to the legitimacy of the state that are being leveled at it every day. Not to put too fine a point on it, I blame the intellectual laziness of American critical theory and its inability to absorb the insights of Habermas’s later theoretical work.

Addendum 8/7/17a:

It has come to my attention that this post is receiving a relatively large amount of traffic. This seems to happen when I hit a nerve, specifically when I recommend Habermas over identitarianism in the context of UC Berkeley. Go figure. I respectfully ask for comments from any readers. Some have already helped me further my thinking on this subject. Also, I am aware that a Wikipedia link is not the best way to spread understanding of Habermas’s later political theory. I can recommend this book review (Chriss, 1998) of Between Facts and Norms as well as the Habermas entry in the Stanford Encyclopedia of Philosophy which includes a section specifically on Habermasian cosmopolitanism, which seems relevant to the particular situation today.

Addendum 8/7/17b:

I may have guessed wrong. The recent traffic has come from Reddit. Welcome, Redditors!

 

A big, sincere THANK YOU to the anonymous reviewer who rejected my IC2S2 submission

I submitted an abstract to IC2S2 this year. It was a risky abstract to submit: I was trying to enter a new field; the extended abstract was capped at three pages; and I had some sketches of an argument in mind that were far too large in scope and informed mainly by my dissatisfaction with other fields.

I got the most wonderful negative review from an anonymous reviewer: a careful dissection of my roughshod argument, and firm pointers to literature (some of it quite old) where my naive intuitions had already been addressed. It was a brief and expertly written literature review of precisely the questions I had been grasping at so poorly.

There have been moments in my brief research career where somebody has stepped in out of the blue and put me squarely on the right path. I can count them on one hand. This is one of them. I have enormous gratitude towards these people; my gratitude is not lessened by the anonymity of this reviewer. Likely this was a defining moment in my mental life. Thank you, wherever you are. You’ve set a high bar, and one day I hope to pay that favor forward.

industrial technology development and academic research

I now split my time between industrial technology (software) development and academic research.

There is a sense in which both activities are “scientific”. They both require the consistent use of reason and investigation to arrive at reliable forms of knowledge. My industrial and academic specializations are closely enough aligned that both aim to create some form of computational product. These activities are constantly informing one another.

What is the difference between these two activities?

One difference is that industrial work pays a lot better than academic work. This is probably the most salient difference in my experience.

Another difference is that academic work is more “basic” and less “applied”, allowing it to address more speculative questions.

You might think that the latter kind of work is more “fun”. But really, I find both kinds of work fun. Fun-factor is not an important difference for me.

What are other differences?

Here’s one: I find myself emotionally moved and engaged by my academic work in certain ways. I suppose that since my academic work straddles technology research and ethics research (I’m studying privacy-by-design), one thing I’m doing when I do this work is engaging and refining my moral intuitions. This is rewarding.

I do sometimes also feel that it is self-indulgent, because one thing that thinking about ethics isn’t is taking responsibility for real change in the world. And here I’ll express an opinion that is unpopular in academia, which is that being in industry is about taking responsibility for real change in the world. This change can benefit other people, and it’s good when people in industry get paid well because they are doing hard work that entails real risks. Part of the risk is the responsibility that comes with action in an uncertain world.

Another critically important difference between industrial technology development and academic research is that while the knowledge created by the former is designed foremost to be deployed and used, the knowledge created by the latter is designed to be taught. As I get older and more advanced as a researcher, I see that this difference is actually an essential one. Knowledge that is designed to be taught needs to be teachable to students, and students generally come from a shallower and narrower background than adult professionals. Knowledge that is designed to be deployed and used need only be truly shared by a small number of experienced practitioners. Most of the people affected by the knowledge will be affected by it indirectly, via artifacts. It can be opaque to them.

Industrial technology production changes the way the world works and makes the world more opaque. Academic research changes the way people work, and reveals things about the world that had been hidden or unknown.

When straddling both worlds, it becomes quite clear that while students are taught that academic scientists are at the frontier of knowledge, ahead of everybody else, they are actually far behind what’s being done in industry. The constraint that academic research must be taught actually drags its form of science far behind what’s being done regularly in industry.

This is humbling for academic science. But it doesn’t make it any less important. Rather, it makes it even more important, though not because of the heroic status of academic researchers at the top of the pyramid of human knowledge. It’s because the health of the social system depends on its renewal through the education system. If most knowledge is held in secret and deployed but not passed on, we will find ourselves in a society that is increasingly mysterious and out of our control. Academic research is about advancing the knowledge that is available for education. Its effects can take half a generation or longer to come to fruition. Against this long-term signal, the oscillations that happen within industrial knowledge, which are very real, fade into the background. Though not before having real and often lasting effects.

Responsible participation in complex sociotechnical organizations circa 1977 cc @Aelkus @dj_mosfett

Many extant controversies around technology were documented in 1977 by Langdon Winner in Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. I would go so far as to say most extant controversies, but I don’t think he does anything having to do with gender, for example.

Consider this discussion of moral education of engineers:

“The problems for moral agency created by the complexity of technical systems cast new light on contemporary calls for more ethically aware scientists and engineers. According to a very common and laudable view, part of the education of persons learning advanced scientific skills ought to be a full comprehension of the social implications of their work. Enlightened professionals should have a solid grasp of ethics relevant to their activities. But, one can ask, what good will it do to nourish this moral sensibility and then place the individual in an organizational situation that mocks the very idea of responsible conduct? To pretend that the whole matter can be settled in the quiet reflections of one’s soul while disregarding the context in which the most powerful opportunities for action are made available is a fundamental misunderstanding of the quality genuine responsibility must have.”

A few thoughts.

First, this reminds me of a conversation @Aelkus @dj_mosfett and I had the other day. The question was: who should take moral responsibility for the failures of sociotechnical organizations (conceived of as corporations running a web service technology, for example).

Second, I’ve been convinced again lately (reminded?) of the importance of context. I’ve been looking into Chaiklin and Lave’s Understanding Practice again, which is largely about how it’s important to take context into account when studying any social system that involves learning. More recently than that I’ve been looking into Nissenbaum’s contextual integrity theory. According to her theory, which is now widely used in the design and legal privacy literature, norms of information flow are justified by the purpose of the context in which they are situated. So, for example, in an ethnographic context those norms of information flow most critical for maintaining trusted relationships with one’s subjects are most important.

But in a corporate context, where the purpose is to maximize shareholder value, wouldn’t those who keep the moral failures of their organization shrouded in the complexity of its machinery be perfectly justified in their actions by the norms of information flow?

I’m not seriously advocating for this view, of course. I’m just asking it rhetorically, as it seems like a potential weakness in contextual integrity theory that it does not endorse the actions of, for example, corporate whistleblowers. Or is it? Are corporate whistleblowers the same as national whistleblowers? Or Wikileaks?

One way around this would be to consider contexts to be nested or overlapping, with ethics contextualized to those “spaces.” So, a corporate whistleblower would be doing something bad for the company, but good for society, assuming that there wasn’t some larger social cost to the loss of confidence in that company. (It occurs to me that in this sort of situation, perhaps threatening internally to blow the whistle unless the problem is solved would be the responsible strategy. As they say,

Making progress with the horns is permissible
Only for the purpose of punishing one’s own city.

)

Anyway, it’s a cool topic to think about, what an information theoretic account of responsibility would look like. That’s tied to autonomy. I bet it’s doable.

cultural values in design

As much as I would like to put aside the problem of technology criticism and focus on my empirical work, I find myself unable to avoid the topic. Today I was discussing work with a friend and collaborator who comes from a ‘critical’ perspective. We were talking about ‘values in design’, a subject that we both care about, despite our different backgrounds.

I suggested that one way to think about values in design is to think of a number of agents and their utility functions. Their utility functions capture their values; the design of an artifact can have greater or less utility for the agents in question. They may intentionally or unintentionally design artifacts that serve some but not others. And so on.
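To make that framing concrete, here is a minimal sketch in Python. The design features, agents, and utility functions are entirely made up for illustration; they are not drawn from any real system we discussed.

```python
# A minimal sketch of the 'values in design' framing above. The agents,
# design features, and utility functions are hypothetical.

# A candidate design, described by a few made-up features.
design = {"collects_location_data": True, "ad_supported": True, "price": 0.0}

def user_utility(d):
    # This hypothetical user values privacy and low cost.
    return -5.0 * d["collects_location_data"] - 1.0 * d["price"]

def advertiser_utility(d):
    # This hypothetical advertiser values data collection and ad support.
    return 4.0 * d["collects_location_data"] + 3.0 * d["ad_supported"]

# Each agent's values are captured by its utility function over designs.
agents = {"user": user_utility, "advertiser": advertiser_utility}

for name, utility in agents.items():
    print(name, utility(design))
# The same artifact yields different utility for different agents, which is
# one precise way of saying a design serves some values and not others.
```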

Of course, thinking in terms of ‘utility functions’ is common among engineers, economists, cognitive scientists, rational choice theorists in political science, and elsewhere. It is shunned by the critically trained. My friend and colleague was open minded in his consideration of utility functions, but was more concerned with how cultural values might sneak into or be expressed in design.

I asked him to define a cultural value. We debated the term for some time. We reached a reasonable conclusion.

With such a consensus to work with, we began to talk about how such a concept would be applied. He brought up the example of an algorithm claimed by its creators to be objective. But, he asked, could the algorithm have a bias? Would we not expect that it would express, secretly, cultural values?

I confessed that I aspire to design and implement just such algorithms. I think it would be a fine future if we designed algorithms to fairly and objectively arbitrate our political disputes. We have good reasons to think that an algorithm could be more objective than a system of human bureaucracy. While human decision-makers are limited by the partiality of their perspective, we can build infrastructure that accesses and processes data that are beyond an individual’s comprehension. The challenge is to design the system so that it operates kindly and fairly despite its operations being beyond the scope of a single person’s judgment. This will require an abstracted understanding of fairness that is not grounded in the politics of partiality.

Suppose a team of people were to design and implement such a program. On what basis would the critics–and there would inevitably be critics–accuse it of being a biased design with embedded cultural values? Besides the obvious but empty criticism that valuing unbiased results is a cultural value, why wouldn’t the reasoned process of design reduce bias?

We resumed our work peacefully.

Nissenbaum the functionalist

Today in Classics we discussed Helen Nissenbaum’s Privacy in Context.

Most striking to me is that Nissenbaum’s privacy framework, contextual integrity theory, depends critically on a functionalist sociological view. A context is defined by its information norms and violations of those norms are judged according to their (non)accordance with the purposes and values of the context. So, for example, the purposes of an educational institution determine what are appropriate information norms within it, and what departures from those norms constitute privacy violations.
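To illustrate the structure of that framework, here is a toy sketch in Python of a contextual-integrity-style check. The context, roles, and norms are invented for illustration and are not meant to reproduce Nissenbaum’s own formalism.

```python
# A toy sketch of a contextual-integrity-style check, following the paragraph
# above. The context, roles, and norms here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Flow:
    sender: str       # role sending the information
    recipient: str    # role receiving it
    attribute: str    # type of information flowing
    context: str      # social context in which the flow occurs

# Each context is defined by its purpose and its information norms: which
# attribute may flow from which role to which role within that context.
CONTEXTS = {
    "education": {
        "purpose": "learning and credentialing",
        "norms": {
            ("student", "teacher", "coursework"),
            ("teacher", "registrar", "grades"),
        },
    },
}

def violates_contextual_integrity(flow: Flow) -> bool:
    """A flow is prima facie a violation if no norm of its context permits it."""
    norms = CONTEXTS.get(flow.context, {}).get("norms", set())
    return (flow.sender, flow.recipient, flow.attribute) not in norms

# A teacher sending grades to the registrar accords with the context's norms;
# a teacher sending grades to an advertiser does not, so it is flagged.
print(violates_contextual_integrity(Flow("teacher", "registrar", "grades", "education")))   # False
print(violates_contextual_integrity(Flow("teacher", "advertiser", "grades", "education")))  # True
```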

I used to think teleology was dead in the sciences. But recently I learned that it is commonplace in biology and popular in ecology. Today I learned that what amounts to a State Philosopher in the U.S. (Nissenbaum’s framework has been more or less adopted by the FTC) maintains a teleological view of social institutions. Fascinating! Even more fascinating that this philosophy corresponds well enough to American law as to be informative of it.

From a “pure” philosophy perspective (which, I will admit, is simply a vice of mine), it’s interesting to contrast Nissenbaum with…oh, Horkheimer again. Nissenbaum sees ethical behavior (around privacy at least) as behavior that is in accord with the purpose of one’s context. Morality is given by the system. For Horkheimer, the problem is that the system’s purposes subsume the interests of the individual, who alone is the agent able to determine what is right and wrong. Horkheimer is a founder of the Frankfurt School, arguably the intellectual ancestor of progressivism. Nissenbaum grounds her work in Burke, and her theory is admittedly conservative. Privacy is violated when people’s expectations of privacy are violated (this is coming from U.S. law), and that means people’s contextual expectations carry more weight than an individual’s free-minded beliefs.

The tension could be resolved when free individuals determine the purpose of the systems they participate in. Indeed, Nissenbaum quotes Burke’s approval of established conventions as the result of the accreted wisdom and rationale of past generations. The system is the way it is because it was chosen. (Or, perhaps, because it survived.)

Since Horkheimer’s objection to “the system” is that he believes instrumentality has run amok, thereby causing the system to serve a purpose nobody intended for it, his view is not inconsistent with Nissenbaum’s. Nissenbaum, building on Dworkin, sees contextual legitimacy as depending on some kind of political legitimacy.

The crux of the problem is the question of what information norms comprise the context in which political legitimacy is formed, and what purpose does this context or system serve?

And now for something completely different: Superintelligence and the social sciences

This semester I’ll be co-organizing, with Mahendra Prasad, a seminar on the subject of “Superintelligence and the Social Sciences”.

How I managed to find myself in this role is a bit of a long story. But as I’ve had a longstanding curiosity about this topic, I am glad to be putting energy into the seminar. It’s a great opportunity to get exposure to some of the very interesting work done by MIRI on this subject. It’s also a chance to thoroughly investigate (and critique) Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

I find the subject matter perplexing because in many ways it forces the very cultural and intellectual clash that I’ve been preoccupied with elsewhere on this blog: the failure of social scientists and engineers to communicate. Or, perhaps, the failure of qualitative researchers and quantitative researchers to communicate. Whatever you want to call it.

Broadly, the question at stake is: what impact will artificial intelligence have on society? This question is already misleading, since in the imagination of most people who haven’t been trained in the subject, “artificial intelligence” refers to something of a science fiction scenario, whereas to a practitioner, “artificial intelligence” is, basically, just software. Just as the press went wild last year speculating about “algorithms”, by which it meant software, so too is the press excited about artificial intelligence, which is just software.

But the concern that software is responsible for more and more of the activity in the world and that it is in a sense “smarter than us”, and especially the fear that it might become vastly smarter than us (i.e. turning into what Bostrom calls a “superintelligence”), is pervasive enough to drive research funding into topics like “AI Safety”. It also is apparently inspiring legal study into the regulation of autonomous systems. It may also have implications for what is called, vaguely, “social science”, though increasingly it seems like nobody really knows what that is.

There is a serious epistemological problem here. Some researchers are trying to predict or forewarn the societal impact of agents that are by assumption beyond their comprehension on the premise that they may come into existence at any moment.

This is fascinating but one has to get a grip.