Digifesto

Category: academia

Virtual innovation clusters and tidal spillovers

I’ve recently begun a new project with Camilla Hrdy about government procurement as a local innovation incentive. Serendipitously, this has exposed me to literature around innovation clusters and spillover effects, as in this Fallah and Ibrahim literature review. This has been an “aha!” moment.

Innovation clusters are, in the literature, geographic places like Silicon Valley, Cambridge, Massachusetts, and other urban areas where there is a lot of R&D investment. Received wisdom is that this investment winds up driving the local economy through spillover effects: a market externality in which people beyond the intended beneficiaries of an innovation benefit from it, normally through informal knowledge transfer.

To economists in the 90’s, this was a significant and exceptional property of certain geographic places. To the digital native acclimated to Free Culture, this is a way of life. Spillovers are defined in terms of the intended boundaries of the recipients of innovative information. However, when there is no intended boundary, you still get the spillover effect on innovation itself (however incomprehensible the incentives are to the 90’s economist). The Internet provides a virtual proximity that turns the whole network into an innovation cluster. Advances in human-computer interaction further enable this virtual proximity. We might say that GitHub, for example, is an innovation cluster with a higher degree of virtual proximity between its innovators within the larger virtual innovation cluster that includes SourceForge, Bitbucket, and everything else. (Considering software engineering as the particular case here.)

By binding together other innovation clusters, this virtual proximity leads to the innovation explosion we’ve seen in the past 10 or so years. “Everything changes so fast.” It’s true!

Outside of the software environment, we can point to other virtual innovation clusters such as Weird Twitter, where virtual proximity and spillover effects are used to innovate rapidly in humor.

The drive to open access academic research is due in part to an understanding of these spillover effects. You increase impact by encouraging spillover. I.e., you try to make waves. Academic research becomes more like specialty journalism in the sense that you try to break a story globally, not just to a particular academic community. The speed of innovation in such a dynamic environment is bewildering, and perhaps the university tenure-based incentive system is not well designed to handle it, but nevertheless these are the times.

Jack Burris at the Berkeley D-Lab likes to say that the D-Lab is designed to support ‘collisions’ between researchers in different fields. “Spillovers” might be a term with more literature behind it. Indeed, interdisciplinarity needs to start with collisions or spillovers, because that is what creates mixing between siloed innovation. I’ve heard that Soo and Carson’s paper about Clark Kerr as an industrial organizer explains some of the idiosyncrasies of Berkeley in particular as an accumulation of silos.

Which explains the D-Lab’s current agenda as a mix of open source evangelism, reproducible research technology adoption, technical skills training for social scientists, and eschewal of disciplinary distinctions. If Berkeley’s success as a research institution depends on its being an effective innovation cluster, even within the larger innovation cluster that is the coast of Northern California, then it will need to increase the virtual proximity of its constituent innovators. Furthermore, this will expose non-local actors to spillovers from Berkeley, and perhaps expose Berkeley to spillovers from other institutions. This is of course a shift in degree, not kind, from the way the academic system already works in the economy. But what’s new is the use of disruptive infrastructure to accelerate the process.

This would all be wonderful if it were not also tilting towards a crisis, since it’s unclear how the human beings in the system are meant to adapt to these rapid changes. What is scholarship when the body of literature available on a particular topic is no longer strictly filtered by a hierarchical community of practice but rather is available to anybody with the (admittedly often specialized, but increasingly available) literacy? Is expertise just a matter of having the leisure and discretion to retweet the latest and greatest? Or do you make it by positioning yourself skillfully at the right point of the long tail? Or is this once-glorified social role of intellectual labor now just a perfunctory routine we can replace with ranks of amateurs?

To be a good expert is to be a good node. Not a central node, not a loud node, just a good node. This is humbling for experts, but these are the times.

How to tell the story about why stories don’t matter

I’m thinking of taking this seminar because I’m running into the problem it addresses: how do you pick a theoretical lens for academic writing?

This is related to a conversation I’ve found myself in repeatedly over the past weeks. A friend who studied Rhetoric insists that the narrative and framing of history is more important than the events and facts. A philosopher friend minimizes the historical impact of increased volumes of “raw footage”, because ultimately it’s the framing that will matter.

Yesterday I had the privilege of attending Techraking III, a conference put on by the Center for Investigative Reporting with the generous support and presence of Google. It was a conference about data journalism. The popular sentiment within the conference was that data doesn’t matter unless it’s told with a story, a framing.

I find this troubling because while I pay attention to this world and the way it frames itself, I also read the tech biz press carefully, and it tells a very different narrative. Data is worth billions of dollars. Even data exhaust, the data fumes that come from your information processing factory, can be recycled into valuable insights. Data is there to be mined for value. And if you are particularly genius at it, you can build an expert system that acts on the data without needing interpretation. You build an information processing machine that acts according to mechanical principles that approximate statistical laws, and these machines are powerful.

As social scientists realize they need to be data scientists, and journalists realize they need to be data journalists, there seems to be in practice a tacit admission of the data-driven counter-narrative. This tacit approval is contradicted by the explicit rhetoric that glorifies interpretation and narrative over data.

This is an interesting kind of contradiction, as it takes place as much in the psyche of the data scientist as anywhere else. It’s like the mouth doesn’t know what the hand is doing. This is entirely possible since our minds aren’t actually that coherent to start with. But it does make the process of collaboratively interacting with others in the data science field super complicated.

All this comes to a head when the data we are talking about isn’t something simple like sensor data about the weather but rather is something like text, which is both data and narrative simultaneously. We intuitively see the potential of treating narrative as something to be handled mechanically, statistically. We certainly see the effects of this in our daily lives. This is what the most powerful organizations in the world do all the time.

The irony is that the interpretivists, who are so quick to deny technological determinism, are the ones who are most vulnerable to being blindsided by “what technology wants.” Humanities departments are being slowly phased out, their funding cut. Why? Do they have an explanation for this? If interpretation/framing were as efficacious as they claim, they would be philosopher kings. So their sociopolitical situation contradicts their own rhetoric and ideology. Meanwhile, journalists who would like to believe that it’s the story that matters are, for the sake of job security, being corralled into classes to learn CSS, the stylesheet language that determines, mechanically, the logic of formatting and presentation.

Sadly, neither mechanists nor interpretivists have much of an interest in engaging this contradiction. This is because interpretivists chase funding by reinforcing the narrative that they are critically important, and the work of mechanists speaks for itself in corporate accounting (an uninterpretive field) without explanation. So this contradiction falls mainly into the laps of those coordinating interaction between tribes. Managers who need to communicate between engineering and marketing. University administrators who have to juggle the interests of humanities and sciences. The leadership of investigative reporting non-profits who need to justify themselves to savvy foundations and who are removed enough from particular skillsets to be flexible.

Mechanized information processing is becoming the new epistemic center. (Forgive me:) the Google supercomputer approximating statistics has replaced Kantian transcendental reason as the grounds for bourgeois understanding of the world. This is threatening, of course, to the plurality of perspectives that do not themselves internalize the logic of machine learning. Where machine intelligence has succeeded, then, it has been by juggling this multitude of perspectives (and frames) through automated, data-driven processes. Machine intelligence is not comprehensible to lay interpretivism. Interestingly, lay interpretivism isn’t comprehensible yet to machine intelligence–natural language processing has not yet advanced so far. It treats our communications like we treat ants in an ant farm: a blooming buzzing confusion of arbitrary quanta, fascinatingly complex for its patterns that we cannot see. And when it makes mistakes–and it does often–we feel its effects as a structural force beyond our control. A change in the user interface of Facebook that suddenly exposes drunken college photos to employers and abusive ex-lovers.

What theoretical frame is adequate to tell this story, the story that’s determining the shape of knowledge today? For Lyotard, the postmodern condition is one in which metanarratives about the organization of knowledge collapse and leave only politics, power, and language games. The postmodern condition has gotten us into our present condition: industrial machine intelligence presiding over interpretivists battling in paralogical language games. When the interpretivists strike back, it looks like hipsters or Weird Twitter–paralogy as a subculture of resistance that can’t even acknowledge its own role as resistance for fear of recuperation.

We need a new metanarrative to get out of this mess. But what kind of theory could possibly satisfy all these constituents?

Dissertron build notes

I’m going to start building the Dissertron now. These are my notes.

  • I’m going with Hyde as a static site generator on Nick Doty’s recommendation. It appears to be tracking Jekyll in terms of features, but squares better with my Python/Django background (it uses Jinja2 templates in its current, possibly-1.0-but-under-development version). Meanwhile, at Berkeley we seem to be investing a lot in Python as the language of scientific computing. If scientists’ skills should be transferable to their publication tools, this seems like the way to go.
  • Documentation for Hyde is a bit scattered. This first steps guide is sort of helpful, and then there are these docs hosted on Github. As mentioned, they’ve moved away from Django templates to Jinja2, which is similar but less idiosyncratic. They refer you to the Jinja2 docs here for templating.
  • Just trying to make a Hello World type site, I ran into an issue with Markdown rendering. I’ve filed an issue with the project, and will use it as a test of the community’s responsiveness. Since Hyde is competing with a lot of other Python static site generators, it’s good to bump into this sort of thing early.
  • Got this response from the creator of Hyde in less than 3 hours. The problem was with my Jinja2 fu (which is weak at the moment)–turns out I have a lot to learn about Whitespace Control (see the sketch after this list). Super positive community experience. I’ll stick with Hyde.
  • “Hello World” intact and framework chosen, my next step is to convert part 2 of my Weird Twitter work to Markdown and use Hyde’s tools to give it some decent layout. If I can make some headway on the citation formatting and management in the process, so much the better.
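
For reference, here is a minimal sketch of the whitespace control behavior that tripped me up, assuming nothing but the jinja2 package (the template strings are invented for illustration):

```python
from jinja2 import Template

# By default, the newlines around block tags survive into the output.
messy = Template("{% if title %}\n<h1>{{ title }}</h1>\n{% endif %}")

# A '-' inside the delimiters ({%- ... -%}) strips the surrounding
# whitespace, which matters when the output feeds a Markdown renderer.
clean = Template("{%- if title -%}\n<h1>{{ title }}</h1>\n{%- endif -%}")

print(repr(messy.render(title="Hello World")))  # newlines kept
print(repr(clean.render(title="Hello World")))  # '<h1>Hello World</h1>'
```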

Complications in Scholarly Hypertext

I’ve got a lot of questions about on-line academic publishing. A lot of this comes from career anxiety: I am not a very good academic because I don’t know how to write for academic conferences and journals. But I’m also coming from an industry that is totally eating the academy’s lunch when it comes to innovating and disseminating information. People within academia are increasingly feeling the disruptive pressure of alternative publication venues and formats, and moreover seeing the need for alternatives for the sake of the intellectual integrity of the whole enterprise. Open science, open data, reproducible research–these are keywords for new practices that are meant to restore confidence in science itself, in part by making it more accessible.

One manifestation of this trend is the transition of academic group blogs into academic quasi-journals or on-line magazines. I don’t know how common this is, but I recently had a fantastic experience of this writing for Ethnography Matters. Instead of going through an opaque and problematic academic review process, I worked with editor Rachelle Annechino to craft a piece about Weird Twitter that was appropriate for the edition and audience.

During the editing process, I tried to unload everything I had to say about Weird Twitter so that I could at last get past it. I don’t consider myself an ethnographer and I don’t want to write my dissertation on Weird Twitter. But Rachelle encouraged me to split off the pseudo-ethnographic section into a separate post, since the first half was more consistent with the Virtual Identity edition. (Interesting how the word “edition”, which has come to mean “all the copies of a specific issue of a newspaper”, in the digital context returns to its etymological roots as simply something published or produced (past participle).)

Which means I’m still left with the (impossible) task of doing an ethnography (something I’m not very well trained for) about Weird Twitter (which might not exist). Since I don’t want to violate the contextual integrity of Weird Twitter more than I already have, I’m reluctant to write about it in a non-Web-based medium.

This carries with it a number of challenges, not least of which is the reception on Twitter itself.

What my thesaurus and I do in the privacy of our home is our business and anyway entirely legal in the state of California. But I’ve come to realize that forced disclosure is an occupational hazard I need to learn to accept. What these remarks point to, though, is the tension between access to documents as data and access to documents as sources of information. The latter, as we know from Claude Shannon, requires an interpreter who can decode the language in which the information is written.

Expert language is a prison for knowledge and understanding. A prison for intellectually significant relationships. It is time to move beyond the institutional practices of triviledge

– Taylor and Saarinen, 1994, quoted in Kolb, 1997

Is it possible to get away from expert language in scholarly writing? Naively, one could ask experts to write everything “in plain English.” But that doesn’t do language justice: often (though certainly not always) new words express new concepts. Using a technical vocabulary fluently requires not just a thesaurus, but an actual understanding of the technical domain. I’ve been through the phase myself in which I thought I knew everything and so blamed anything written opaquely to me on obscurantism. Now I’m humbler and harder to understand.

What is so promising about hypertext as a scholarly medium is that it offers a solution to this problem. Wikipedia is successful because it directly links jargon to further content that explains it. Those with the necessary expertise to read something can get the intended meaning out of an article, and those that are confused by terminology can romp around learning things. Maybe they will come back to the original article later with an expanded understanding.

[Image: xkcd, “The Problem with Wikipedia”]
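
Here is a crude sketch of how a static site generator could imitate that, automatically wrapping glossary terms in links; the glossary and its URLs are invented for illustration:

```python
import re

# Hypothetical glossary mapping jargon to explanatory pages.
GLOSSARY = {
    "spillover": "/glossary/spillover.html",
    "externality": "/glossary/externality.html",
}

def link_jargon(markdown_text):
    """Wrap the first occurrence of each glossary term in a Markdown link."""
    for term, url in GLOSSARY.items():
        pattern = r"\b(%s)\b" % re.escape(term)
        markdown_text = re.sub(pattern, r"[\1](%s)" % url, markdown_text,
                               count=1, flags=re.IGNORECASE)
    return markdown_text

print(link_jargon("A spillover is a kind of externality."))
# prints: A [spillover](/glossary/spillover.html) is a kind of
# [externality](/glossary/externality.html).
```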

Hypertext and hypertext-based reading practices are valuable for making one’s work open and accessible. But it’s not clear how to combine these with scholarly conventions on referencing and citations. Just to take Ethnography Matters as an example: in my article I used in-line linking and, where I got around to it, parenthetical bibliographic information. Contrast with Heather Ford’s article in the same edition, which has no links and a section at the end for academic references. The APA has rules for citing web resources within an academic paper. What’s not clear is how directly linking citations within an academic hypertext document should work.

One reason for lack of consensus around this issue is that citation formatting is a pain in the butt. For off-line documents, word processing software has provided myriad tools for streamlining bibliographic work. But for publishing academic work on the web, we write in markup languages or WYSIWYG editors.
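
To make the design space concrete, here is a sketch of the kind of in-line web citation a tool might emit. The reference format and the cite() helper are invented, not any existing standard:

```python
# Hypothetical reference store; in practice this might be parsed from BibTeX.
REFERENCES = {
    "shannon1948": {
        "text": "Shannon (1948)",
        "title": "A Mathematical Theory of Communication",
        "url": "https://example.org/shannon1948",  # placeholder URL
    },
}

def cite(key):
    """Render a citation key as an in-line HTML link with a hover title."""
    ref = REFERENCES[key]
    return '<a href="%s" title="%s">%s</a>' % (ref["url"], ref["title"], ref["text"])

print("Information requires a decoder (%s)." % cite("shannon1948"))
```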

Since standards on the web tend to evolve through “rough consensus and running code”, I expect we’ll see a standard for this sort of thing emerge when somebody builds a tool that makes it easy for them to follow. This leads me back to fantasizing about the Dissertron. This is a bit disturbing. As much as I’d like to get away from studying Weird Twitter, I see now that a Weird Twitter ethnography is the perfect test-bed for such a tool precisely because of the hostile scrutiny it would attract.

Planning the Dissertron

In my PhD program, I’ve recently finished my coursework and am meant to start focusing on research for my dissertation. Maybe because of the hubbub around open access research, maybe because I still see myself as a ‘hacker’, maybe because it’s somehow recursively tied into my research agenda, or because I’m an open source dogmatist, I’ve been fantasizing about the tools and technology of publication that I want to work on my dissertation with.

For this project, which I call the Dissertron, I’ve got a loose bundle of requirements feature creeping its way into outer space:

  1. Incremental publishing of research and scholarship results openly to the web.
  2. Version control.
  3. Mathematical rendering a la LaTeX.
  4. Code highlighting a la the hacker blogs.
  5. In browser rendering of data visualizations with d3, where appropriate.
  6. Site is statically generated from elements on the file system, wherever possible.
  7. Machine readable metadata on the logical structure of the dissertation argument, which gets translated into static site navigation elements (see the sketch after this list).
  8. Easily generated glossary with links for looking up difficult terms in-line (or maybe in-margin).
  9. A citation system that takes advantage of hyperlinking between resources wherever possible.
  10. Somehow, enable commenting. But more along the lines of marginalia comments (comments on particular lines or fragments of text) rather than blog comments. “Blog” style comments should be facilitated as notes on separately hosted dissertrons, or maybe a dissertron hub that aggregates and coordinates pollination of content between dissertrons.
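
To make item 7 concrete, here is a toy sketch of machine-readable argument structure being flattened into navigation links; the schema is entirely made up:

```python
# Toy schema for the logical structure of a dissertation argument.
# Every field name here is invented; nothing below is a standard.
ARGUMENT = {
    "claim": "Virtual proximity accelerates innovation spillovers",
    "page": "thesis.html",
    "supports": [
        {"claim": "Spillovers drive innovation clusters", "page": "clusters.html"},
        {"claim": "The Internet provides virtual proximity", "page": "proximity.html"},
    ],
}

def nav_links(node, depth=0):
    """Walk the argument tree, yielding indented HTML links for a sidebar."""
    yield '%s<a href="%s">%s</a>' % ("  " * depth, node["page"], node["claim"])
    for child in node.get("supports", []):
        yield from nav_links(child, depth + 1)

print("\n".join(nav_links(ARGUMENT)))
```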

This is a lot, and arguably just a huge distraction from working on my dissertation. However, it seems like this or something like it is a necessary next step in the advance of science and I don’t see how I really have much choice in the matter.

Unfortunately, I’m traveling, so I’m going to miss the PLOS workshop on Markdown for Science tomorrow. That’s really too bad, because Scholarly Markdown would get me maybe 50% of the way to what I want.

Right now the best tool chain I can imagine for this involves Scholarly Markdown, run using Pandoc, which I just now figured out is developed by a philosophy professor at Berkeley. Backing it by a Git repository would allow for incremental changes and version control.
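
The core of such a rig might be as small as this sketch, which shells out to pandoc; the file names are placeholders, and citation-processing flags vary across pandoc versions:

```python
import subprocess

# Render a Markdown chapter to standalone HTML with MathJax for math
# and a BibTeX bibliography for citations. File names are placeholders.
subprocess.run(
    [
        "pandoc", "chapter1.md",
        "--from", "markdown",
        "--to", "html",
        "--standalone",
        "--mathjax",                   # browser-side LaTeX math rendering
        "--bibliography", "refs.bib",  # hand off citations to citeproc
        "-o", "chapter1.html",
    ],
    check=True,  # raise if pandoc exits with an error
)
```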

Static site generation and hosting is a bit trickier. I feel like GitHub’s support of Jekyll makes it a compelling choice, but hacking it to make it fit into the academic frame I’m thinking in might be more trouble than it’s worth. While it’s a bit of an oversimplification to say this, my impression is that at my university, at least, there is a growing movement to adopt Python as the programming language of choice for scientific computing. The exceptions seem to be people in the Computer Science department who are backing Scala.

(I like both languages and so can’t complain, except that it makes it harder to do interdisciplinary research if there is a technical barrier between their toolsets. As more of scientific research becomes automated, it is bound to get more crucial that scientific processes (broadly speaking) interoperate. I’m incidentally excited to be working on these problems this summer for Berkeley’s new Social Science Data Lab. A lot of interesting architectural design is being masterminded by Aaron Culich, who manages the EECS department’s computing infrastructure. I’ve been meaning to blog about our last meeting for a while…but I digress.)

Problem is, neither Python nor Scala is Ruby, and Ruby is currently leading the game (in my estimate; somebody tell me if I’m wrong) in flexible, sexy, smoothly usable web design. And then there’s JavaScript, improbably leaking into the back end of the software stack after overflowing the client side.

So for the aspiring open access indie web hipster hacker science self-publisher, it’s hard to navigate the technical terrain. I’m tempted to string together my own rig depending mostly on Pandoc, but even that’s written in Haskell.

These implementation-level problems suggest that the problem needs to be pushed up a level of abstraction, to the question of API and syntax standards around scientific web publishing. Scholarly Markdown can be a standard, hopefully with multiple implementations. Maybe there needs to be a standard around web citations as well (since in an open access world, we don’t need the same level of indirection between a document and the works it cites. Like blog posts, web publications can link directly to the content they derive from.)

POSSE homework: how to contribute to FOSS without coding

One of the assignments for the POSSE workshop is the question of how to contribute to FOSS when you aren’t a coder.

I find this an especially interesting topic because I think there’s a broader political significance to FOSS, but those who see FOSS as merely the domain of esoteric engineers can sometimes be a little freaked out by this idea. It also involves broader theoretical questions about whether or how open source jibes with participatory design.

In fact, the POSSE organizers have compiled a list of lists of ways to contribute to FOSS without coding: this, this, this, and this are provided in the POSSE syllabus.

Turning our attention from the question in the abstract, we’re meant to think about it in the context of our particular practices.

For our humanitarian FOSS project of choice, how are we interested in contributing? My interests in open source participation are fairly focused these days: I’m very interested in the problem of community metrics, and especially in how innovation happens and diffuses within these communities. I would like to be able to build a system for evaluating that kind of thing that can be applied broadly to many projects. Ideally, it could do things like identify talented participants across multiple projects, or suggest interventions for making projects work better.

It’s an ambitious research project, but one for which there is plenty of data to investigate from the open source communities themselves.
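
The raw material is as close as the version control history. A first, crude metric, assuming local clones of the projects of interest, might look like this:

```python
import subprocess
from collections import Counter

def commits_per_author(repo_path):
    """Count commits per author email in a local git clone."""
    out = subprocess.run(
        ["git", "log", "--pretty=format:%ae"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    return Counter(out.splitlines())

# Hypothetical local clones; a real study would aggregate many more
# projects and far richer signals than commit counts.
totals = Counter()
for repo in ["./project-a", "./project-b"]:
    totals.update(commits_per_author(repo))

for author, count in totals.most_common(10):
    print(count, author)
```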

What about teaching a course on such a thing? I anticipate that my students are likely to be interested in design as well as positioning their own projects within the larger open source ecosystem. Some of the people who I hope will take the class have been working on FuturePress, an open source e-book reading platform. As they grow the project and build the organization around it, they will want to be working with constituent technologies and devising a business model around their work. How can a course on Open Collaboration and Peer Production support that?

These concerns touch on so many issues outside of the consideration of software engineering narrowly (including industrial organization, communication, social network theory…) that it’s daunting to try to fit it all into one syllabus. But we’ve been working on one that has a significant hands-on component as well. Really I think the most valuable skill in the FOSS world is having the chutzpah to approach a digital community, propose what you are thinking, and take the criticism or responsibility that comes with that.

What concrete contribution a student uses to channel that energy should…well, I feel like it should be up to them. But is that enough direction? Maybe I’m not thinking concretely enough for this assignment myself.

Ascendency and overhead in networked ecosystems

Ulanowicz (2000) proposes, in information-theoretic terms, several metrics for ecosystem health, where one models an ecosystem as, for example, a trophic network. Principal among them is ascendency, a measure of the extent to which energy flows in the system are predictably structured, weighted by the total energy of the system. He believes that systems tend towards greater ascendency in expectation, and that this is predictive of ecological ‘succession’ (and to some extent ecological fitness). On the other hand, overhead, which is unpredictability (perhaps, inefficiency) in energy flows (“free energy”?), is important for the system’s resiliency towards external shocks.
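
As I understand the definitions: if $T_{ij}$ is the flow from compartment $i$ to compartment $j$, with a dot marking summation over an index (so $T_{\cdot\cdot}$ is the total system throughput), then

$$A = \sum_{i,j} T_{ij} \log \frac{T_{ij}\, T_{\cdot\cdot}}{T_{i\cdot}\, T_{\cdot j}}, \qquad C = -\sum_{i,j} T_{ij} \log \frac{T_{ij}}{T_{\cdot\cdot}}, \qquad \Phi = C - A \geq 0,$$

where $A$ is the ascendency (the mutual information of the flow structure scaled by throughput), $C$ is the development capacity, and the overhead $\Phi$ is whatever capacity is not bound up in organized structure.
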
At least in the papers I’ve read so far, Ulanowicz is not mathematically specific about the mechanism that leads to greater ascendency, though he sketches some explanations. Autocatalytic cycles within the network reinforce their own positive perturbations and mutations, drawing in resources from external sources and crowding out competing processes. These cycles become agents in themselves, exerting what Ulanowicz suggests is Aristotelian final or formal causal power on the lower-level components. In this way, freely floating energy is drawn into structures of increasing magnificence and complexity.

I’m reminded of Bataille’s The Accursed Share, in which he attempts to account for societal differences and the arc of human history through how societies expend their excess energy. “The sexual act is in time what the tiger is in space,” he says, insightfully. The tiger, as an apex predator, is a flame that clings brilliantly to the less glamorous ecosystem that supports it. That is why we adore them. And yet, their existence is fragile, as it depends on both the efficiency and stability of the rest of its network. When its environment is disturbed, it is the first to suffer.

Ulanowicz cites himself suggesting that a similar framework could be used to analyze computer networks. I have not read his account yet, though I anticipate several difficulties. He suggests that data flows in a computer network are analogous to energy flows within an ecosystem. That has intuitive appeal, but obscures the fact that some data is more valuable than others. A better analogy might be money as a substitute for energy. Or maybe there is a way to reduce both to a common currency, at least for modeling purposes.

Econophysics has been gaining steam, albeit controversially. Without knowing anything about it, but based just on statistical hunches, I suspect that this comes down to using more complex models on the super duper complex phenomenon of the economy, and demonstrating their success there. In other words, I’m just guessing that the success of econophysics modeling is due to the greater degrees of freedom in the physics models compared to non-dynamic, structural equilibrium models. However, since ecology models the evolutionary dynamics of multiple competing agents (and systems of those agents), it’s possible that those models could capture quite a bit of what’s really going on and even be a source of strategic insight.

Indeed, economics already has a sense of stable versus unstable equilibria that resonates with the idea of stability in ecological succession. These ideas translate into game-theoretic analysis as well. As we do more work with Strategic Bayesian Networks or other constructs to model equilibrium strategies in a networked, multi-agent system, I wonder if we can reproduce Ulanowicz’s results and use his ideas about ascendency (which, I’ve got to say, are extraordinary and profound) to provide insight into the information economy.

I think that will require translating the ecosystem modeling into Judea Pearl’s framework for causal reasoning. Having been indoctrinated in Pearl’s framework in much of my training, I believe that it is general enough to subsume Ulanowicz’s results. But I have some doubt. In some of his later writings Ulanowicz refers explicitly to a “Hegelian dialectic” between order and disorder as a consequence of some of his theories, and between that and his insistence on his departure from mechanistic thinking over the course of his long career, I am worried that he may have transcended what it’s possible to do even with the modeling power of Bayesian networks. The question is: what then? It may be that once one’s work sublimates beyond our ability to model explicitly and intervene strategically, it becomes irrelevant. (I get the sense that in academia, Ulanowicz’s scientific philosophizing is a privilege reserved for someone tenured, who late in their career is free to make their peace with the world in their own way.) But reading his papers is so exhilarating to me. I’ve had no prior exposure to ecology before this, so his papers are packed with fresh ideas. So while I don’t know how to justify it to any of my mentors or colleagues, I think I just have to keep diving into it when I can, on the side.

deep thoughts by jack handy

Information transfer just is the coming-into-dependence of two variables, which under the many worlds interpretation of quantum mechanics means the entanglement of the “worlds” of each variable (and, by extension, the networks of causally related variables of which they are a part). Information exchange collapses possibilities.
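
The standard formalization of this coming-into-dependence is mutual information,

$$I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)},$$

which is zero exactly when the two variables are independent.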

This holds up whether you take a subjectivist view of reality (and probability–Bayesian probability, properly speaking) or an objectivist view. At their (dialectical?) limit, the two “irreconcilable” paradigms converge on a monist metaphysics that is absolutely physical and also ideal. (This was recognized by Hegel, who was way ahead of the game in a lot of ways.) It is the ideality of nature that allows it to be mathematized, though it’s important to note that mathematization does not exclude engagement with nature through other modalities, e.g. the emotional, the narrative, etc.

This means that characterizing the evolution of networks of information exchange by their physical properties (limits of information capacity of channels, etc.) is something to be embraced to better understand their impact on e.g. socially constructed reality, emic identity construction, etc. What the mathematics provide is a representation of what remains after so many diverse worlds are collapsed.

A similar result, representing a broad consensus, might be attained dialectically, specifically through actual dialog. Whereas the mathematical accounting is likely to lead to reduction to latent variables that may not coincide with the lived experience of participants, a dialectical approach is more likely to result in a synthesis of perspectives at a higher level of abstraction. (Only a confrontation with nature as the embodiment of unconscious constraints is likely to force us to confront latent mechanisms.)

Whether or not such dialectical synthesis will result in a singular convergent truth is unknown, with various ideologies taking positions on the matter as methodological assumptions. Haraway’s feminist epistemology, eschewing rational consensus in favor of interperspectival translation, rejects a convergent (scientific, and she would say masculine) truth. But does this stand up to the simple objection that Haraway’s own claims about truth and method transcend individual perspective, making her guilty of performative contradiction?

Perhaps a deeper problem with the consensus view of truth, which I heard once from David Weinberger, is that the structure of debate may have fractal complexity. The fractal pluralectic can fray into infinite and infinitesimal disagreement at its borders. I’ve come around to agreeing with this view, uncomfortable as it is. However, within the fractal pluralectic we can still locate a convergent perspective based on the network topology of information flow. Some parts of the network are more central and brighter than others.
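
“Central and brighter” can be made precise. Eigenvector centrality is one standard candidate; the toy graph below is invented for illustration:

```python
import networkx as nx

# A toy undirected graph of information flow between perspectives.
G = nx.Graph()
G.add_edges_from([
    ("center", "hub-1"), ("center", "hub-2"),
    ("hub-1", "leaf-1"), ("hub-1", "leaf-2"),
    ("hub-2", "leaf-3"), ("hub-2", "leaf-4"),
])

# Eigenvector centrality scores a node by the centrality of its neighbors,
# one way to formalize which parts of the network are "brighter".
scores = nx.eigenvector_centrality(G)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))
```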

A critical question is to what extent the darkness and confusion in the dissonant periphery can be included within the perspective of the central, convergent parts of the network. Is there necessarily a Shadow? Without the noise, can there be a signal?

Bay Area Rationalists

There is an interesting thing happening. Let me just try to lay down some facts.

There are a number of organizations in the Bay Area right now up to related things.

  • Machine Intelligence Research Institute (MIRI). Researches the implications of machine intelligence on the world, especially the possibility of super-human general intelligences. Recently changed their name from the Singularity Institute due to the meaninglessness of the term Singularity. I interviewed their Executive Director (CEO?), Luke Muehlhauser, a while back. (I followed up on some of the reasoning there with him here).
  • Center for Applied Rationality (CFAR). Runs workshops training people in rationality, applying cognitive science to life choices. Trying to transition from appearing to pitch a “world-view” to teaching a “martial art” (I’ve sat in on a couple of their meetings). They aim to grow out a large network of people practicing these skills, because they think it will make the world a better place.
  • Leverage Research. A think-tank with an elaborate plan to save the world. Their research puts a lot of emphasis on how to design and market ideologies. I’ve been told that they recently moved to the Bay Area to be closer to CFAR.

Some things seem to connect these groups. First, socially, they all seem to know each other (I just went to a party where a lot of members of each group were represented.) Second, the organizations seem to get the majority of their funding from roughly the same people–Peter Thiel, Luke Nosek, and Jaan Tallinn, all successful tech entrepreneurs turned investors with interest in stuff like transhumanism, the Singularity, and advancing rationality in society. They seem to be employing a considerable number of people to perform research on topics normally ignored in academia and spread an ideology and/or set of epistemic practices. Third, there seems to be a general social affiliation with LessWrong.com; I gather a lot of the members of this community originally networked on that site.

There’s a lot that’s interesting about what’s going on here. A network of startups, research institutions, and training/networking organizations is forming around a cluster of ideas: the psychological and technical advancement of humanity, being smarter, making machines smarter, being rational or making machines to be rational for us. It is as far as I can tell largely off the radar of “mainstream” academic thinking. As a network, it seems concerned with growing to gather into itself effective and connected people. But it’s not drawing from many established bases of effective and connected people (the academic establishment, the government establishment, the finance establishment, “old boys networks” per se, etc.) but rather is growing its own base of enthusiasts.

I’ve had a lot of conversations with people in this community now. Some, but not all, would compare what they are doing to the starting of a religion. I think that’s pretty accurate based on what I’ve seen so far. Where I’m from, we’ve always talked about Singularitarianism as “eschatology for nerds”. But here we have all these ideas–the Singularity, “catastrophic risk”, the intellectual and ethical demands of “science”, the potential of immortality through transhumanist medicine, etc.–really motivating people to get together, form a community, advance certain practices and investigations, and proselytize.

I guess what I’m saying is: I don’t think it’s just a joke any more. There is actually a religion starting up around this. Granted, I’m in California now, and as far as I can tell there are like sixty religions out here I’ve never heard of (I chalk it up to the low population density and suburban sprawl). But this one has some monetary and intellectual oomph behind it.

Personally, I find this whole gestalt both attractive and concerning. As you might imagine, diversity is not this group’s strong suit. And its intellectual milieu reflects its isolation from the academic mainstream in that it lacks the kind of checks and balances afforded by multidisciplinary politics. Rather, it appears to have more or less declared the superiority of its methodological and ideological assumptions to its satisfaction and convinced itself that it’s ahead of the game. Maybe that’s true, but in my own experience, that’s not how it really works. (I used to share most of the tenets of this rationalist ideology, but have deliberately exposed myself to a lot of other perspectives since then [I think that taking the Bayesian perspective seriously necessitates taking the search for new information very seriously]. Turns out I used to be wrong about a lot of things.)

So if I were to make a prediction, it would go like this. One of these things is going to happen:

  • This group is going to grow to become a powerful but insulated elite with an expanded network and increasingly esoteric practices. An orthodox cabal seizes power where it is able, and isolates itself into certain functional roles within society with a very high standard of living.
  • In order to remain consistent with its own extraordinarily high epistemic standards, this network starts to assimilate other perspectives and points of view in an inclusive way. In the process, it discovers humility, starts to adapt proactively and in a decentralized way, losing its coherence but perhaps becomes a general influence on the preexisting societal institutions rather than a new one.
  • Hybrid models. Priesthood/lay practitioners. Or denominational schism.

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.

Spaghetti, meet wall (on public intellectuals, activists in residence, and just another existential crisis of a phd student)

I have a backlog of things I’ve been planning to write about. It’s been a fruitful semester for me, in a number of ways. At the risk of being incoherent, I thought I’d throw some of my spaghetti against the Internet wall. It’s always curious to see what sticks.

One of the most fascinating intellectual exchanges of the past couple months for me was what I’d guess you could call the Morozov/Johnson debate. Except it wasn’t a debate. It was a book, then a book review (probably designed to sell a different book), and a rebuttal. It was fantastic showmanship. I have never felt so much like I was watching a boxing match while reading stuff on the Internet.

But what really made it for me was the side act of Henry Farrell taking Morozov to task. Unlike the others, I’ve met Farrell. He was kind enough to talk to me about his Cognitive Democracy article (which I was excited about) and academia in general (“there are no academic jobs for brilliant generalists”) last summer when I was living in DC. He is very smart, and not showy. What was cool about his exchange with Morozov was that it showed how a debate that wasn’t designed to sell books could still leak out into the public. There’s still a role for the dedicated academic, as a watchdog on public intellectuals who one could argue have to get sloppier to entertain the public.

An intriguing fallout from (or warm-up to?) the whole exchange was Morozov casually snarking Nick Grossman’s title, “Activist in Residence” at a VC fund, in a tweet (“Another sign of the coming Apocalypse? Venture capital firms now have ‘activists in residence’?”), which then triggered some business press congratulating Nick for being “out in the streets”. Small world: I used to work with Nick at OpenPlans, and can vouch for his being a swell guy with an experienced and nuanced view of technology and government. He has done a lot of pioneering, constructive work on open governance applications–just the sort of constructive work a hater like Morozov would hate if he looked into it some. Privately, he’s told me he’s well aware of the potential astroturfing connotations of his title.

I got mixed feelings about all this. I’m suspicious of venture capital for the kind of vague “capital isn’t trustworthy” reasons you pick up in academia. Activism is sexy, lobbyists are not, and so if you can get away with calling your lobbyist an activist in residence then clearly that’s a step up.

But I think there’s something a little more going on here, which has to do with the substance of the debate. As I understand it, the Peer Progressives believe that social and economic progress can happen through bottom-up connectivity supported by platforms that are potentially run for profit. If you’re a VC, you’d want to invest in one of them platforms, because they are The Future. Nevertheless, you believe stuff happens by connecting people “on the ground”, not targeting decision-makers who are high in a hierarchy.

In Connected, Christakis and Fowler (or some book like it; I’ve been reading a lot of them lately and having a hard time keeping track) make the interesting argument that the politics of protesters in the streets and lobbyists aren’t much different. What’s different is the centrality of the actor in the social network of governance. If you know a lot of senators, you’re probably a lobbyist. If you have to hold a sign and shout to have your political opinions heard, then you might be an activist.

I wonder who Nick talks to. Is he schmoozing with the Big Players? Or is he networking the base and trying to spur coordinated action on the periphery? I really have no idea. But if it were the latter, maybe that would give credibility to his title.

Another difference between activists and lobbyists is their authenticity. I have no doubt that Nick believes what he writes and advocates for. I do wonder how much he restrains himself based on his employers’ interests. What would prove he was an activist, not a lobbyist, would be if he were given a longer leash and allowed to speak out on controversial issues in a public way.

I’m mulling over all of this because I’m discovering in grad school that as an academic, you have to pick an audience. Are you targeting your work at other academics? At the public? At the press? At the government? At industry? At the end of the day, you’re writing something and you want somebody else to read it. If I’m lucky, I’ll be able to build something and get some people to use it, but that’s an ambitious thing to attempt when you’re mainly working alone.

So far some of my most rewarding experiences writing in academia have been blogging. It doesn’t impress anybody important but a traffic spike can make you feel like you’re on to something. I’ve been in a world of open work for a long time, and just throwing the spaghetti and trying to see what sticks has worked well for me in the past.

But if you try to steer yourself deeper into the network, the stakes get higher. Things get more competitive. Institutions are more calcified and bureaucratic and harder to navigate. You got to work to get anywhere. As it should be.

Dang, I forgot where I was going with this.

Maybe that’s the problem.