Digifesto

Category: academia

Responsible participation in complex sociotechnical organizations circa 1977 cc @Aelkus @dj_mosfett

Many extant controversies around technology were documented in 1977 by Langdon Winner in Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. I would go so far as to say most extant controversies, though I don’t think he addresses anything having to do with gender, for example.

Consider this discussion of moral education of engineers:

“The problems for moral agency created by the complexity of technical systems cast new light on contemporary calls for more ethically aware scientists and engineers. According to a very common and laudable view, part of the education of persons learning advanced scientific skills ought to be a full comprehension of the social implications of their work. Enlightened professionals should have a solid grasp of ethics relevant to their activities. But, one can ask, what good will it do to nourish this moral sensibility and then place the individual in an organizational situation that mocks the very idea of responsible conduct? To pretend that the whole matter can be settled in the quiet reflections of one’s soul while disregarding the context in which the most powerful opportunities for action are made available is a fundamental misunderstanding of the quality genuine responsibility must have.”

A few thoughts.

First, this reminds me of a conversation @Aelkus @dj_mosfett and I had the other day. The question was: who should take moral responsibility for the failures of sociotechnical organizations (conceived of, for example, as corporations running a web service)?

Second, I’ve been convinced again lately (reminded?) of the importance of context. I’ve been looking into Chaiklin and Lave’s Understanding Practice again, which is largely about how it’s important to take context into account when studying any social system that involves learning. More recently than that I’ve been looking into Nissenbaum’s contextual integrity theory. According to her theory, which is now widely used in the design and legal privacy literature, norms of information flow are justified by the purpose of the context in which they are situated. So, for example, in an ethnographic context those norms of information flow most critical for maintaining trusted relationships with one’s subjects are most important.

But in a corporate context, where the purpose of one’s context is to maximize shareholder value, wouldn’t the norms of information flow justify those who keep the moral failures of their organization shrouded in the complexity of its machinery?

I’m not seriously advocating for this view, of course. I’m just asking it rhetorically, as it seems like it’s a potential weakness in contextual integrity theory that it does not endorse the actions of, for example, corporate whistleblowers. Or is it? Are corporate whistleblowers the same as national whistleblowers? Or Wikileaks?

One way around this would be to consider contexts to be nested or overlapping, with ethics contextualized to those “spaces.” So, a corporate whistleblower would be doing something bad for the company, but good for society, assuming that there wasn’t some larger social cost to the loss of confidence in that company. (It occurs to me that in this sort of situation, perhaps threatening internally to blow the whistle unless the problem is solved would be the responsible strategy. As they say,

Making progress with the horns is permissible
Only for the purpose of punishing one’s own city.

)

Anyway, it’s a cool topic to think about, what an information-theoretic account of responsibility would look like. That’s tied to autonomy. I bet it’s doable.

cultural values in design

As much as I would like to put aside the problem of technology criticism and focus on my empirical work, I find myself unable to avoid the topic. Today I was discussing work with a friend and collaborator who comes from a ‘critical’ perspective. We were talking about ‘values in design’, a subject that we both care about, despite our different backgrounds.

I suggested that one way to think about values in design is to think of a number of agents and their utility functions. Their utility functions capture their values; the design of an artifact can have greater or lesser utility for the agents in question. They may intentionally or unintentionally design artifacts that serve some but not others. And so on.
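As a toy sketch of this framing (the features, weights, and agents here are entirely hypothetical, invented for illustration), one might represent a design as a set of feature levels and each agent's values as a utility function over those features:

```python
# Toy model: a design is a dict of feature levels in [0, 1]; each agent's
# values are captured by a utility function, here a weighted sum of features.

def make_utility(weights):
    """Return a utility function given an agent's feature weights."""
    def utility(design):
        return sum(weights.get(f, 0.0) * level for f, level in design.items())
    return utility

# A hypothetical artifact, scored on two features.
design = {"privacy": 0.2, "convenience": 0.9}

# Two agents whose values (weights) differ; the same design serves them unequally.
agents = {
    "user": make_utility({"privacy": 0.8, "convenience": 0.5}),
    "advertiser": make_utility({"privacy": -0.6, "convenience": 0.3}),
}

for name, u in agents.items():
    print(name, round(u(design), 2))
```

Under this framing, a “value in design” is just a dimension along which the artifact yields different utility to different agents.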

Of course, thinking in terms of ‘utility functions’ is common among engineers, economists, cognitive scientists, rational choice theorists in political science, and elsewhere. It is shunned by the critically trained. My friend and colleague was open minded in his consideration of utility functions, but was more concerned with how cultural values might sneak into or be expressed in design.

I asked him to define a cultural value. We debated the term for some time. We reached a reasonable conclusion.

With such a consensus to work with, we began to talk about how such a concept would be applied. He brought up the example of an algorithm claimed by its creators to be objective. But, he asked, could the algorithm have a bias? Would we not expect that it would express, secretly, cultural values?

I confessed that I aspire to design and implement just such algorithms. I think it would be a fine future if we designed algorithms to fairly and objectively arbitrate our political disputes. We have good reasons to think that an algorithm could be more objective than a system of human bureaucracy. While human decision-makers are limited by the partiality of their perspective, we can build infrastructure that accesses and processes data that are beyond an individual’s comprehension. The challenge is to design the system so that it operates kindly and fairly despite its operations being beyond the scope of a single person’s judgment. This will require an abstracted understanding of fairness that is not grounded in the politics of partiality.

Suppose a team of people were to design and implement such a program. On what basis would the critics–and there would inevitably be critics–accuse it of being a biased design with embedded cultural values? Besides the obvious but empty criticism that valuing unbiased results is a cultural value, why wouldn’t the reasoned process of design reduce bias?

We resumed our work peacefully.

ethical data science is statistical data science #dsesummit

I am at the Moore/Sloan Data Science Environment at the Suncadia Resort in Washington. There are amazing trees here. Wow!

So far the coolest thing I’ve seen is a talk on how Dynamic Mode Decomposition, a technique from fluid dynamics, is being applied to data from brains.
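For the curious, the core of exact DMD is a short computation: from snapshot data, fit a linear operator A with x_{k+1} ≈ A x_k in a low-rank subspace, then examine its eigenvalues and modes. A minimal sketch on synthetic data (a planar rotation, not the neural data pipeline from the talk):

```python
import numpy as np

def dmd(X, Xp, r=2):
    """Exact DMD: eigenvalues and modes of the best-fit linear operator
    mapping snapshot matrix X to its time-shifted counterpart Xp."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]          # rank-r truncation
    A_tilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)            # spectrum of reduced operator
    modes = Xp @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Synthetic dynamics: a pure rotation, whose DMD eigenvalues lie on the unit circle.
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([1.0, 0.0])
snapshots = [x]
for _ in range(20):
    x = A @ x
    snapshots.append(x)
S = np.array(snapshots).T                          # columns are snapshots
eigvals, _ = dmd(S[:, :-1], S[:, 1:])
print(np.abs(eigvals))  # ≈ [1., 1.] for a rotation
```

The appeal for neural data is that the eigenvalues encode oscillation frequencies and growth/decay rates of coherent spatial modes.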

And yet, despite all this sweet science, all is not well in paradise. Provocations, source unknown, sting the sensitive hearts of the data scientists here. Something or someone stirs our emotional fluids.

There are two controversies. There is one solution, which is the synthesis of the two parts into a whole.

Herr Doctor Otherwise Anonymous confronted some compatriots and myself in the resort bar with a distressing thought. His work in computational analysis of physical materials–his data science–might be coopted and used for mass surveillance. Powerful businesses might use the tools he creates. Information discovered through these tools may be used to discriminate unfairly against the underprivileged. As teachers, responsible for the future through our students, are we not also responsible for teaching ethics? Should we not be concerned as practitioners; should we not hesitate?

I don’t mind saying that at the time I was at my Ballmer Peak of lucidity. Yes, I replied, we should teach our students ethics. But we should not base our ethics in fear! And we should have the humility to see that the moral responsibility is not ours to bear alone. Our role is to develop good tools. Others may use them for good or ill, based on their confidence in our guarantees. Indeed, an ethical choice is only possible when one knows enough to make sound judgment. Only when we know all the variables in play and how they relate to each other can we be sure our moral decisions–perhaps to work for social equality–are valid.

Later, I discover that there is more trouble. The trouble is statistics. There is a matter of professional identity: Who are statisticians? Who are data scientists? Are there enough statisticians in data science? Are the statisticians snubbing the data scientists? Do they think they are holier-than-thou? Are the data scientists merely bad scientists, slinging irresponsible model-fitting code, inviting disaster?

Attachment to personal identity is the root of all suffering. Put aside all sociological questions of who gets to be called a statistician for a moment. Don’t even think about what branches of mathematics are considered part of a core statistical curriculum. These are historical contingencies with no place in the Absolute.

At the root of this anxiety about what is holy, and what is good science, is that statistical rigor just is the ethics of data science.

Nissenbaum the functionalist

Today in Classics we discussed Helen Nissenbaum’s Privacy in Context.

Most striking to me is that Nissenbaum’s privacy framework, contextual integrity theory, depends critically on a functionalist sociological view. A context is defined by its information norms and violations of those norms are judged according to their (non)accordance with the purposes and values of the context. So, for example, the purposes of an educational institution determine what are appropriate information norms within it, and what departures from those norms constitute privacy violations.

I used to think teleology was dead in the sciences. But recently I learned that it is commonplace in biology and popular in ecology. Today I learned that what amounts to a State Philosopher in the U.S. (Nissenbaum’s framework has been more or less adopted by the FTC) maintains a teleological view of social institutions. Fascinating! Even more fascinating that this philosophy corresponds well enough to American law as to be informative of it.

From a “pure” philosophy perspective (which is I will admit simply a vice of mine), it’s interesting to contrast Nissenbaum with…oh, Horkheimer again. Nissenbaum sees ethical behavior (around privacy at least) as being behavior that is in accord with the purpose of one’s context. Morality is given by the system. For Horkheimer, the problem is that the system’s purposes subsume the interests of the individual, who is alone the agent who is able to determine what is right and wrong. Horkheimer is a founder of the Frankfurt School, arguably the intellectual ancestor of progressivism. Nissenbaum grounds her work in Burke and her theory is admittedly conservative. Privacy is violated when people’s expectations of privacy are violated–this is coming from U.S. law–and that means people’s contextual expectations carry more weight than an individual’s free-minded beliefs.

The tension could be resolved when free individuals determine the purpose of the systems they participate in. Indeed, Nissenbaum quotes Burke’s approval of established conventions as the accreted wisdom and rationale of past generations. The system is the way it is because it was chosen. (Or, perhaps, because it survived.)

Since Horkheimer’s objection to “the system” is that he believes instrumentality has run amok, thereby causing the system to serve a purpose nobody intended for it, his view is not inconsistent with Nissenbaum’s. Nissenbaum, building on Dworkin, sees contextual legitimacy as depending on some kind of political legitimacy.

The crux of the problem is the question of what information norms comprise the context in which political legitimacy is formed, and what purpose does this context or system serve?

And now for something completely different: Superintelligence and the social sciences

This semester I’ll be co-organizing, with Mahendra Prasad, a seminar on the subject of “Superintelligence and the Social Sciences”.

How I managed to find myself in this role is a bit of a long story. But as I’ve had a longstanding curiosity about this topic, I am glad to be putting energy into the seminar. It’s a great opportunity to get exposure to some of the very interesting work done by MIRI on this subject. It’s also a chance to thoroughly investigate (and critique) Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

I find the subject matter perplexing because in many ways it forces the very cultural and intellectual clash that I’ve been preoccupied with elsewhere on this blog: the failure of social scientists and engineers to communicate. Or, perhaps, the failure of qualitative researchers and quantitative researchers to communicate. Whatever you want to call it.

Broadly, the question at stake is: what impact will artificial intelligence have on society? This question is already misleading since in the imagination of most people who haven’t been trained in the subject, “artificial intelligence” refers to something of a science fiction scenario, whereas to practitioners, “artificial intelligence” is, basically, just software. Just as the press went wild last year speculating about “algorithms”, by which it meant software, so too is the press excited about artificial intelligence, which is just software.

But the concern that software is responsible for more and more of the activity in the world and that it is in a sense “smarter than us”, and especially the fear that it might become vastly smarter than us (i.e. turning into what Bostrom calls a “superintelligence”), is pervasive enough to drive research funding into topics like “AI Safety”. It also is apparently inspiring legal study into the regulation of autonomous systems. It may also have implications for what is called, vaguely, “social science”, though increasingly it seems like nobody really knows what that is.

There is a serious epistemological problem here. Some researchers are trying to predict or forewarn the societal impact of agents that are by assumption beyond their comprehension on the premise that they may come into existence at any moment.

This is fascinating but one has to get a grip.

One Magisterium: a review (part 1)

I have come upon a remarkable book, titled One Magisterium: How Nature Knows Through Us, by Seán Ó Nualláin, President, University of Ireland, California. It is dedicated “To all working at the edges of society in an uncompromising search for truth and justice.” Its acknowledgments section opens:

Kenyan middle-distance runners were famous for running like “scared rabbits”: going straight to the head of the field and staying there, come what may. Even more than was the case for my other books, I wrote this like a scared rabbit.

Ó Nualláin is a recognizable face at UC Berkeley though I think it’s fair to say that most of the faculty and PhD students couldn’t tell you who he is. To a mainstream academic, he is one of the nebulous class of people who show up to events. One glorious loophole of university culture is that the riches of intellectual communion are often made available in open seminars held by people so weary of obscurity that they are happy for any warm body that cares enough to attend. This condition combined with the city of Berkeley’s accommodating attitude towards quacks and vagrants adds flavor to the university’s intellectual character.

There is of course no campus for the University of Ireland, California. Ó Nualláin is a truly independent scholar. Unlike many more unfortunate intellectuals, he has made the brilliant decision to not quit his day job, which is as a musician. A Google inquiry into the man indicates he probably got his PhD from Dublin City University and spent a good deal of time around Stanford’s Symbolic Systems department. (EDIT: Sean has corrected me on the details of his accomplished biography in the comments.)

I got on his mailing lists some time ago because of my interest in the Foundations of Mind conference, which he runs in Berkeley. Later, I was impressed by his aggressive volley of questions when Nick Bostrom spoke at Berkeley (I’ve become familiar with Bostrom’s work through MIRI, formerly SingInst). I’ve spoken to him just a couple of times, once at a poster session at the Berkeley Institute of Data Science and once at Katy Huff’s scientific technology practice group, The Hacker Within.

I’m providing these details out of what you might call anthropological interest. At the School of Information I’ve somehow caught the bug of Science and Technology Studies by osmosis. Now I work for Charlotte Cabasse on her ethnographic team, despite believing myself to be a computational social scientist. This qualitative work is a wonderful excuse to write about one’s experiences.

My perceptions of Ó Nualláin are relevant, then, because they situate the author of One Magisterium as an outsider to the academic mainstream at Berkeley. This outsider status comes through quite heavily in the book, starting from the Acknowledgments section (which recognizes all the service staff at the bars and coffee shops where he wrote the book) and running as a regular theme throughout. Discontent with and rejection from academia-as-usual are articulated in sublimated form as harsh critique of the academic institution. Ó Nualláin is engaged in an “uncompromising search for truth and justice,” and the university as it exists today demands too many compromises.

Magisterium is a Catholic term for a teaching authority. One Magisterium refers to the book’s ambition of pointing to a singular teaching authority, a new one heretofore unrecognized by other teaching authorities such as mainstream universities. Hence the book is an attack on other sources of intellectual authority. An example passage:

The devastating news for any reader venturing a toe into the stormy waters of this book is that its writer’s view is that we may never be able to dignify the moral, epistemological and political miasma of the early twenty-first century with terms like “crisis” for which the appropriate solution is of course a “paradigm shift”. It may simply be a set of hideously interconnected messes; epistemological and administrative in the academy, institutional and moral in the greater society. As a consequence, the landscape of possible “solutions” may seem so unconstrained that the wisdom of Joe the barman may be seen to equal that of any series of tomes, no matter how well-researched.

This book is above all an attempt to unify the plurality of discourses — scientific, religious, moral, aesthetic, and so on — that obtain at the start of the third millennium.

An anthropologist of science might observe that this criticality-of-everything, coupled with the claim to have a unifying theory of everything, is a surefire way to get ignored by the academy. The incentive structure of the academy requires specialization and a political balance of ideas. If somebody were to show up with the right idea, it would discredit a lot of otherwise important people and put others out of a job.

The problem, or one of them (there are many mentioned in the first chapter of One Magisterium, titled “The Trouble with Everything”), is that Ó Nualláin is right. At least as far as I can tell at this point. It is not an easy book to read; it is not structured linearly so much as (I imagine, not knowing what I’m talking about) like complex Irish dancing music, with motifs repeated and encircling themselves like a double helix or perhaps some more complex structure. Threaded together are topics from Quantum Mechanics, an analysis of the anthropic principle, a critique of Dawkins’ atheism and a positioning of the relevance of Vedanta theology to understanding physical reality, and an account of the proper role of the arts in society. I suspect that the book is meant to unfold on one’s psychology slowly, resulting in one’s adoption of what Ó Nualláin calls bionoetics, the new united worldview that is the alleged solution to everything.

A key principle of bionoetics is the recognition of what Ó Nualláin calls the “noetic” level of description, which is distinct from the “cognitive” third-person stance in that it is compressed in a way that makes it relevant to action in any particular domain of inquiry. Most of what he describes as “noetic” I read as “phenomenological”. I wonder if Ó Nualláin has read Merleau-Ponty–he uses the Husserlian critique of “psychologism” extensively.

I think it’s immaterial whether “noetic” is an appropriate neologism for this blending of the first-personal experience into the magisterium. Indeed, there is something comforting to a hard-headed scientist about Ó Nualláin’s views: contrary to the contemporary anthropological view, this first-personal knowledge has no place in academic science; its place is art. Having been in enough seminars at the School of Information where anthropologists lament not being taken seriously as producing knowledge comparable to that of the Scientists, and being one who appreciates the value of Art without needing it to be Science, I find something intuitively appealing about this view. Nevertheless, one wonders if the epistemic foundation of Ó Nualláin’s critique of the academy is grounded in scientific inquiry or his own and others’ first-personal noetic experiences coupled with observations of who is “successful” in scientific fields.

Just one chapter into One Magisterium, I have to say I’m impressed with it in a very specific way. Some of us learn about the world with a synthetic mind, searching for the truth with as few constraints on one’s inquiry as possible. Indeed, that’s how I wound up at as nebulous a place as the School of Information at Berkeley. As one conducts the search, one finds oneself increasingly isolated. Some truths may never be spoken, and it’s never appropriate to say all the truths at once. This is especially true in an academic context, where it is paramount for the reputation of the institution that everyone avoid intellectual embarrassment whenever possible. So we make compromises, contenting ourselves with minute and politically palatable expertise.

I am deeply impressed that Ó Nualláin has decided to fuck all and tell it like it is.

Fascinated by Vijay Narayanan’s talk at #DataEDGE

As I write this I’m watching Vijay Narayanan, Director of Algorithms and Data Science Solutions at Microsoft, give a talk at the DataEDGE conference at UC Berkeley.

The talk is about “The Data Science Economy.” It began with a history of the evolution of the human central nervous system. He then went on to show the centralizing trend of the data economy. Data collection will become more mobile, and data processing will be done in the cloud. This data will be sifted by software and used to power a marketplace of services, which ultimately deliver intelligence to their users.

It was wonderful to see somebody so in the know reaffirming what has been a suspicion I’ve had since starting graduate school but have found little support for in the academic setting. The suspicion is that what’s needed to accurately model the data science economy is a synthesis of cognitive science and economics that can show the comparative market value and competitiveness of different services.

This is not out of the mainline of information technology, management science, computer science, and other associated disciplines that have been at the nexus of business and academia for 70 years. It’s an intellectual tradition that’s rooted in the 1940’s cybernetics vision of Norbert Wiener and was going strong in the social sciences as late as Beniger’s The Control Revolution, which, like Narayanan, draws an explicit connection between information processing in the brain and information processing in the microprocessor–notably while acknowledging the intermediary step of bureaucracy as a large-scale information processing system.

There’s significant cross-pollination between engineering, economics, computer science, and cognitive psychology. I’ve read papers from, say, the Education field in the late 80’s and early 90’s that refer to this collectively as “the dominant paradigm”. At UC Berkeley today, it’s fascinating to see a departmental politics play out over ‘data science’ that echoes some of these concerns that a powerful alliance of ideas are getting mobilized by industry and governments while other disciplines are struggling to find relevance.

It’s possible that these specialized disciplinary discourses are important for cultivating thought that is valuable for its insight despite being fundamentally impractical. I’m coming to a different view: that maybe the ‘dominant paradigm’ is dominant because it is scientifically true, and that other disciplinary orientations are suffering because they are based on unsound theory. If disciplines that are ‘dominated’ by another paradigm are floundering because they are, to put it simply, wrong, then that is a very elegant explanation for what’s going on.

The ramification of this is that what’s needed is not a number of alternatives to ‘the dominant paradigm’. What’s needed is that scholars double down on the dominant paradigm and learn how to express in its logic the complexities and nuances that the other disciplines have been designed to capture. What we can hope for, in terms of intellectual continuity, is the preservation of what’s best of older ideas in a creative synthesis with the foundational principles of computer science and mathematical biology.

going post-ideology

I’ve spent a lot of my intellectual life in the grips of ideology.

I’m glad to be getting past all of that. That’s one reason why I am so happy to be part of Glass Bead Labs.

Glass Bead Labs

There are a lot of people who believe that it’s impossible to get beyond ideology. They believe that all knowledge is political and nothing can be known with true clarity.

I’m excited to have an opportunity to try to prove them wrong.

data science and the university

This is by now a familiar line of thought but it has just now struck me with clarity I wanted to jot down.

  1. Code is law, so the full weight of human inquiry should be brought to bear on software system design.
  2. (1) has been understood by “hackers” for years but has only recently been accepted by academics.
  3. (2) is due to disciplinary restrictions within the academy.
  4. (3) is due to the incentive structure of the academy.
  5. Since there are incentive structures for software development that are not available for subjects whose primary research project is writing, the institutional conditions that are best able to support software work and academic writing work are different.
  6. Software is a more precise and efficacious way of communicating ideas than writing because its interpretation is guaranteed by programming language semantics.
  7. Because of (6), there is selective pressure to making software the lingua franca of scholarly work.
  8. (7) is inducing a cross-disciplinary paradigm shift in methods.
  9. (8) may induce a paradigm shift in theoretical content, or it may result in science whose contents are tailored to the efficient execution of adaptive systems. (This is not to say that such systems are necessarily atheoretic, just that they are subject to different epistemic considerations).
  10. Institutions are slow to change. That’s what makes them institutions.
  11. By (5), (7), and (9), the role of universities as the center of research is being threatened existentially.
  12. But by (1), the myriad intellectual threads currently housed in universities are necessary for software system design, or are at least potentially important.
  13. With (11) and (12), a priority is figuring out how to manage a transition to software-based scholarship without information loss.

a brief comment on feminist epistemology

One funny thing about having a blog is that I can tell when people are interested in particular posts through the site analytics. To my surprise, this post about Donna Haraway has been getting an increasing number of hits each month since I posted it. That is an indication that it has struck a chord, since steady exogenous growth like that is actually quite rare.

It is just possible that this means that people interested in feminist epistemology have been reading my blog lately. They probably have correctly guessed that I have not been the biggest fan of feminist epistemology because of concerns about bias.

But I’d like to take the opportunity to say that my friend Rachel McKinney has been recommending I read Elizabeth Anderson’s stuff if I want to really get to know this body of theory. Since Rachel is an actual philosopher and I am an amateur who blogs about it on weekends, I respect her opinion on this a great deal.

So today I started reading through Anderson’s Stanford Encyclopedia of Philosophy article on Feminist Epistemology and I have to say I think it’s very good. I like her treatment of the situated knower. It’s also nice to learn that there are alternative feminist epistemologies to certain standpoint theories that I think are troublesome. In particular, it turns out that those standpoint theories are now considered by feminist philosophers to be from a brief period in the 80’s that they’ve moved past already! Now subaltern standpoints are considered privileged in terms of discovery more than privileged in terms of justification.

This position is certainly easier to reconcile with computational methods. For example, it’s in a sense just mathematically correct if you think about it in terms of information gain from a sample. This principle appears to have been rediscovered in a way recently by the equity-in-data-science people when people talk about potential classifier error.
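The information-gain intuition can be made concrete with surprisal: the information carried by observing an outcome of probability p is -log2(p) bits, so samples from a rare standpoint carry more information than samples from the common one. A minimal sketch, with made-up probabilities:

```python
import math

def surprisal(p):
    """Information (in bits) gained from observing an outcome with probability p."""
    return -math.log2(p)

# A hypothetical population in which a minority standpoint is rarely sampled.
p_majority, p_minority = 0.95, 0.05

print(surprisal(p_majority))  # ≈ 0.074 bits: observing the common view tells us little
print(surprisal(p_minority))  # ≈ 4.32 bits: observing the rare view is far more informative
```

This is the "privileged in terms of discovery" reading: the rare observation narrows the hypothesis space more, without any claim about justification.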

I’ve got some qualms about the articulation of this learning principle in the absence of a particular inquiry or decision problem because I think there’s still a subtle shift in the argumentation from logos to ethos embedded in there (I’ve been seeing things through the lens of Aristotelian rhetoric lately and it’s been surprisingly illuminating). I’m on the lookout for a concrete application of where this could apply in a technical domain, as opposed to as an articulation of a political affinity or anxiety in the language of algorithms. I’d be grateful for links in the comments.

Edit:

Wait, maybe I already built one. I am not sure if that really counts.
