Digifesto

A troubling dilemma

I’m troubling over the following dilemma:

On the one hand, serendipitous exposure to views unlike your own is good, because that increases the breadth of perspective that’s available to you. You become more cosmopolitan and tolerant.

On the other hand, exposure to views that are hateful, stupid, or evil can be bad, because this can be hurtful, misinforming, or disturbing. Broadly, content can harm.

So, suppose you are deciding what to expose yourself to, or others to, either directly or through the design of some information system.

This requires making a judgment about whether exposure to that perspective will be good or bad.

How is it possible to make that judgment without already having been exposed to it?

Put another way, filter bubbles are sometimes good and sometimes bad. How can you tell the difference, from within a bubble, about whether bridging to another bubble is worthwhile? How could you tell from outside of a bubble? Is there a way to derive this from the nature of bubbles in the abstract?

developing a nuanced view on transparency

I’m a little late to the party, but I think I may at last be developing a nuanced view on transparency. This is a personal breakthrough about the importance of privacy that I owe largely to the education I’m getting at Berkeley’s School of Information.

When I was an undergrad, I also was a student activist around campaign finance reform. Money in politics was the root of all evil. We were told by our older, wiser activist mentors that we were supposed to lay the groundwork for our policy recommendation and then wait for journalists to expose a scandal. That way we could move in to reform.

Then I worked on projects involving open source, open government, open data, open science, etc. The goal of those activities is to make things more open/transparent.

My ideas about transparency as a political, organizational, and personal issue originated in those experiences with those movements and tactics.

There is a “radically open” wing of these movements which thinks that everything should be open. This has been debunked. The primary way to debunk this is to point out that less privileged groups often need privacy for reasons that more privileged advocates of openness have trouble understanding. Classic cases of this include women who are trying to evade stalkers.

This has been expanded to a general critique of “big data” practices. Data is collected from people who are less powerful than people that process that data and act on it. There has been a call to make the data processing practices more transparent to prevent discrimination.

A conclusion I have found it easy to draw until relatively recently is: ok, this is not so hard. What’s important is that we guarantee privacy for those with less power, and enforce transparency on those with more power so that they can be held accountable. Let’s call this “openness for accountability.” Proponents of this view are in my opinion very well-intended, motivated by values like justice, democracy, and equity. This tends to be the perspective of many journalists and open government types especially.

Openness for accountability is not a nuanced view on transparency.

Here are some examples of cases where an openness for accountability view can go wrong:

  • Arguably, the “Gawker Stalker” platform for reporting the location of celebrities was justified by an ‘openness for accountability’ logic. Jimmy Kimmel’s browbeating of Emily Gould indicates how this can be a problem. Celebrity status is a form of power, but it also raises one’s level of risk, because a small percentage of the population will, for unfathomable reasons, go crazy and threaten or even attack people. There is a vicious cycle here: if one is perceived to be powerful, then people will feel more comfortable exposing and attacking that person, which increases their celebrity, increasing their perceived power.
  • There are good reasons to be concerned about stereotypes and representation of underprivileged groups. There are also cases where members of those groups do things that conform to those stereotypes. When these are behaviors that are ethically questionable or manipulative, it’s often important organizationally for somebody to know about them and act on them. But transparency about that information would feed the stereotypes that are being socially combated on a larger scale for equity reasons.
  • Members of powerful groups can have aesthetic taste and senses of humor that are offensive or even triggering to less powerful groups. More generally, different social groups will have different and sometimes mutually offensive senses of humor. A certain amount of public effort goes into regulating “good taste” and that is fine. But also, as is well known, art that is in good taste is often bland and fails to probe the depths of the human condition. Understanding the depths of the human condition is important for everybody but especially for powerful people who have to take more responsibility for other humans.
  • This one is based on anecdotal information from a close friend: one reason why Congress is so dysfunctional now is that it is so much more transparent. That transparency means that politicians have to be more wary of how they act so that they don’t alienate their constituencies. But bipartisan negotiation is exactly the sort of thing that alienates partisan constituencies.

If you asked me maybe two years ago, I wouldn’t have been able to come up with these cases. That was partly because of my positionality in society. Though I am a very privileged man, I still perceived myself as an outsider to important systems of power. I wanted to know more about what was going on inside important organizations and was frustrated by my lack of access to it. I was very idealistic about wanting a more fair society.

Now I am getting older, reading more, experiencing more. As I mature, people are trusting me with more sensitive information, and I am beginning to anticipate the kinds of positions I may have later in my career. I have begun to see how my best intentions for making the world a better place are at odds with my earlier belief in openness for accountability.

I’m not sure what to do with this realization. I put a lot of thought into my political beliefs and for a long time they have been oriented around ideas of transparency, openness, and equity. Now I’m starting to see the social necessity of power that maintains its privacy, unaccountable to the public. I’m starting to see how “Public Relations” is important work. A lot of what I had a kneejerk reaction against now makes more sense.

I am in many ways a slow learner. These ideas are not meant to impress anybody. I’m not a privacy scholar or expert. I expect these thoughts are obvious to those with less of an ideological background in this sort of thing. I’m writing this here because I see my current role as a graduate student as participating in the education system. Education requires a certain amount of openness because you can’t learn unless you have access to information and people who are willing to teach you from their experience, especially their mistakes and revisions.

I am also perhaps writing this now because, who knows, maybe one day I will be an unaccountable, secretive, powerful old man. Nobody would believe me if I said all this then.

writing about writing

Years ago on a now defunct Internet forum, somebody recommended that I read a book about the history of writing and its influence on culture.

I just spent ten minutes searching through my email archives trying to find the reference. I didn’t find it.

I’ve been thinking about writing a lot lately. And I’ve been thinking about writing especially tonight, because I was reading this essay that is in a narrow sense about Emily Gould but in a broad sense is about writing.*

I used to find writing about writing insufferable because I thought it was lazy. Only writers with nothing to say about anything else write about writing.

I don’t disagree with that sentiment tonight. Instead I’ve succumbed to the idea that actually writing is a rather specialized activity that is perhaps special because it affords so much of an opportunity to scrutinize and rescrutinize in ways that everyday social interaction does not. By everyday social interaction, I mean specifically the conversations I have with people that are physically present. I am not referring to the social interactions that I conduct through writing with sometimes literally hundreds of people at a time, theoretically, but actually more on the order of I don’t know twenty, every day.

The whole idea that you are supposed to edit what you write before you send it presupposes a reflective editorial process where text, as a condensed signal, is the result of an optimization process over possible interpretations that happens before it is ever emitted. The conscious decision to not edit text as one writes it is difficult if not impossible for some people but for others more…natural. Why?

The fluidity with which writing can morph genres today–it’s gossip, it’s journalism, it’s literature, it’s self expression reflective of genuine character, it’s performance of an assumed character, it’s…–is I think something new.


* Since writing this blog post, I have concluded that this article is quite evil.

It all comes back to Artificial Intelligence

I am blessed with many fascinating conversations every week. Because of the field I am in, these conversations are mainly about technology and people and where they intersect.

Sometimes they are about philosophical themes like how we know anything, or what is ethical. These topics are obviously relevant to an academic researcher, especially when one is interested in computational social science, a kind of science whose ethics have lately been called into question. Other times they are about the theoretical questions that such a science should or could address, like: how do we identify leaders? Or determine what are the ingredients for a thriving community? What is creativity, and how can we mathematically model how it arises from social interaction?

Sometimes the conversations are political. Is it a problem that algorithms are governing more of our political lives and culture? If so, what should we do about it?

The richest and most involved conversations, though, are about artificial intelligence (AI). As a term, it has fallen out of fashion. I was very surprised to see it as a central concept in Bengio et al.’s “Representation Learning: A Review and New Perspectives” [arXiv]. In most discussion of scientific computing or ‘data science’, people have for the most part abandoned the idea of intelligent machines. Perhaps this is because so many of the applications of this technology seem so prosaic now. Curating newsfeeds, for example. That can’t be done intelligently. That’s just an algorithm.

Never mind that the origins of all of what we now call machine learning lie in the AI research program, which is as old as computer science itself and really has grown up with it. Marvin Minsky famously once defined artificial intelligence as ‘whatever humans still do better than computers.’ And this is the curse of the field. Every technological advance that is mind-blowingly powerful at the time, performing a task that used to require hundreds of people, very shortly becomes mere technology.

It’s appropriate then that representation learning, the problem of deriving and selecting features from a complex data set that are valuable for other kinds of statistical analysis in other tasks, is brought up in the context of AI. Because this is precisely the sort of thing that people still think they are comparatively good at. A couple years ago, everyone was talking about the phenomenon of crowdsourced image tagging. People are better at seeing and recognizing objects in images than computers, so in order to, say, provide the data for Google’s Image search, you still need to mobilize lots of people. You just have to organize them as if they were computer functions so that you can properly aggregate their results.
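That idea of organizing people as if they were computer functions can be sketched in a few lines. This is a hypothetical illustration, not any real crowdsourcing API: each worker is treated as a function from image to tag, and their answers are reconciled by majority vote.

```python
from collections import Counter

def aggregate_labels(responses):
    """Majority vote over the tags different workers gave one image.

    Returns the winning tag and the fraction of workers who agreed on it.
    """
    counts = Counter(responses)
    tag, votes = counts.most_common(1)[0]
    return tag, votes / len(responses)

# Three hypothetical workers label the same image; two of them agree.
tag, agreement = aggregate_labels(["cat", "cat", "dog"])
print(tag)  # cat
```

Real systems weight workers by their track record rather than counting votes equally, but the shape is the same: human judgments in, statistical aggregation out.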

One of the earliest tasks posed to AI, the Turing Test, proposed by and named after Alan Turing, the inventor of the fricking computer, is the task of engaging in conversation as if one is a human. This is harder than chess. It is harder than reading handwriting. Something about human communication is so subtle that it has withstood the test of time as an unsolved problem.

Until June of this year, when a program passed the Turing Test in the annual competition. Conversation is no longer something intelligent. It can be performed by a mere algorithm. Indeed, I have heard that a lot of call centers now use scripted dialog. An operator pushes buttons guiding the caller through a conversation that has already been written for them.

So what’s next?

I have a proposal: software engineering. We still don’t have an AI that can write its own source code.

How could we create such an AI? We could use machine learning, training it on data. What’s amazing is that we have vast amounts of data available on what it is like to be a functioning member of a software development team. Open source software communities have provided an enormous corpus of what we can guess is some of the most complex and interesting data ever created. Among other things, this software includes source code for all kinds of other algorithms that were once considered AI.

One reason why I am building BigBang, a toolkit for the scientific analysis of software communities, is that I believe it’s the first step to a better understanding of this very complex and still intelligent process.

While above I have framed AI pessimistically, as what we delegate away from people to machines, that framing is unnecessarily grim. In fact, with every advance in AI we have come to a better understanding of our world and how we see, hear, think, and do things. The task of trying to scientifically understand how we create together and the task of developing an AI to create with us are in many ways the same task. It’s just a matter of how you look at it.

objectivity is powerful

Like “neoliberal”, “objectivity” in contemporary academic discourse is only used as a term of disparagement. It has fallen out of fashion to speak about “objectivity” in scientific language. It remains in fashion to be critical of objectivity in those disciplines that have been criticizing objectivity since at least the 70’s.

This is too bad because objectivity is great.

The criticism goes like this: scientists and journalists both used to say that they were being objective. There was a lot to this term. It could mean ‘disinterested’ or it could mean so rigorous as to be perfectly inter-subjective. It sounded good. But actually, all the scientists and journalists who claimed to be objective were sexist, racist, and lapdogs of the bourgeoisie. They used ‘objectivity’ as a way to exclude those who were interested in promoting social justice. Hence, anyone who claims to be objective is suspicious.

There are some more sophisticated arguments than this but their sophistication only weakens the main emotional thrust of the original criticism. The only reason for this sophistication is to be academically impressive, which is fundamentally useless, or to respond in good faith to criticisms, which is politically unnecessary and probably unwise.

Why is it unwise to respond in good faith to criticisms of a critique of objectivity? Because to concede that good faith response to criticism is epistemically virtuous would be to concede something to the defender of objectivity. Once you start negotiating with the enemy in terms of reasons, you become accountable to some kind of shared logic which transcends your personal subjectivity, or the collective subjectivity of those whose perspectives are channeled in your discourse.

In a world in which power is enacted and exerted through discourse, and in which cultural logics are just rules in a language game provisionally accepted by players, this rejection of objectivity is true resistance. The act of will that resists logical engagement with those in power will stymie that power. It’s what sticks it to the Man.

The problem is that however well-intentioned this strategy may be, it is dumb.

It is dumb because as everybody knows, power isn’t exerted mainly through discourse. Power is exerted through violence. And while it may be fun to talk about “cultural logics” if you are a particular kind of academic, and even fun to talk about how cultural logics can be violent, that is vague metaphorical poetry compared to something else that they could be talking about. Words don’t kill people. Guns kill people.

Put yourself in the position of somebody designing and manufacturing guns. What do you talk about with your friends and collaborators? If you think that power is about discourse, then you might think that these people talk about their racist political agenda, wherein they reproduce the power dynamics that they will wield to continue their military dominance.

They don’t though.

Instead what they talk about is the mechanics of how guns work and the technicalities of supply chain management. Where are they importing their gunpowder from and how much does it cost? How much will it go boom?

These conversations aren’t governed by “cultural logics.” They are governed by logic. Because logic is what preserves the intersubjective validity of their claims. That’s important because to successfully build and market guns, the gun has to go boom the same amount whether or not the person being aimed at shares your cultural logic.

This is all quite grim. “Of course, that’s the point: objectivity is the language of violence and power! Boo objectivity!”

But that misses the point. The point is that it’s not that objectivity is what powerful people dupe people into believing in order to stay powerful. The point is that objectivity is what powerful people strive for in order to stay powerful. Objectivity is powerful in ways that more subjectively justified forms of knowledge are not.

This is not a popular perspective. There are a number of reasons for this. One is that attaining objective understanding is a lot of hard work and most people are just not up for it. Another is that there are a lot of people who have made their careers arguing for a much more popular perspective, which is that “objectivity” is associated with evil people and therefore we should reject it as an epistemic principle. There will always be an audience for this view, who will be rendered powerless by it and become the self-fulfilling prophecy of the demagogues who encourage their ignorance.

technical work

Dipping into Julian Orr’s Talking about Machines, an ethnography of Xerox photocopier technicians, has set off some light bulbs for me.

First, there’s Orr’s story: Orr dropped out of college and got drafted, then worked as a technician in the military before returning to school. He paid the bills doing technical repair work, and found it convenient to do his dissertation on those doing photocopy repair.

Orr’s story reminds me of my grandfather and great-uncle, both of whom were technicians–radio operators–during WWII. Their civilian careers were as carpenters, building houses.

My own dissertation research is motivated by my work background as an open source engineer, and my own desire to maintain and improve my technical chops. I’d like to learn to be a data scientist; I’m also studying data scientists at work.

Further fascinating was Orr’s discussion of the Xerox technician’s identity as technicians as opposed to customers:

The distinction between technician and customer is a critical division of this population, but for technicians at work, all nontechnicians are in some category of other, including the corporation that employs the technicians, which is seen as alien, distant, and only sometimes an ally.

It’s interesting to read about this distinction between technicians and others in the context of Xerox photocopiers when I’ve been so affected lately by the distinction between tech folk and others and data scientists and others. This distinction between those who do technical work and those who they serve is a deep historical one that transcends the contemporary and over-computed world.

I recall my earlier work experience. I was a decent engineer and engineering project manager. I was a horrible account manager. My customer service skills were abysmal, because I did not empathize with the client. The open source context contributes to this attitude, because it makes a different set of demands on its users than consumer technology does. You get assistance with consumer-grade technology by hiring a technician who treats you as a customer. You get assistance with open source technology by joining the community of practice as a technician. Commercial open source software, according to the Pentaho beekeeper model, is about providing, at cost, that customer support.

I’ve been thinking about customer service and reflecting on my failures at it a lot lately. It keeps coming up. Mary Gray’s piece, When Science, Customer Service, and Human Subjects Research Collide explicitly makes the connection between commercial data science at Facebook and customer service. The ugly dispute between Gratipay (formerly Gittip) and Shanley Kane was, I realized after the fact, a similar crisis between the expectations of customers/customer service people and the expectations of open source communities. When “free” (gratis) web services display a similar disregard for their users as open source communities do, it’s harder to justify in the same way that FOSS does. But there are similar tensions, perhaps. It’s hard for technicians to empathize with non-technicians about their technical problems, because their lived experience is so different.

It’s alarming how much is being hinged on the professional distinction between technical worker and non-technical worker. The intra-technology industry debates are thick with confusions along these lines. What about marketing people in the tech context? Sales? Are the “tech folks” responsible for distributive justice today? Are they in the throes of an ideology? I was reading a paper the other day suggesting that software engineers should be held ethically accountable for the implicit moral implications of their algorithms. Specifically the engineers; for some reason not the designers or product managers or corporate shareholders, who were not mentioned. An interesting proposal.

Meanwhile, at the D-Lab, where I work, I’m in the process of navigating my relationship between two teams, the Technical Team and the Services Team. I have been on the Technical Team in the past. Our work has been to stay on top of and assist people with data science software and infrastructure. Early on, we abolished regular meetings as a waste of time. Naturally, there was a suspicion expressed to me at one point that we were unaccountable and didn’t do as much work as others on the Services Team, which dealt directly with the people-facing component of the lab–scheduling workshops, managing the undergraduate work-study staff. Sitting in on Services meetings for the first time this semester, I’ve been struck by how much work the other team does. By and large, it’s information work: calendaring, scheduling, entering into spreadsheets, documenting processes in case of turnover, sending emails out, responding to emails. All important work.

This is exactly the work that information technicians want to automate away. If there is a way to reduce the amount of calendaring and spreadsheet entry, programmers will find it. The whole purpose of computer science is to automate tasks that would otherwise be tedious.
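To make that concrete with a toy example (the file name, dates, and event names are all invented): the kind of spreadsheet entry described above, a semester of weekly workshop rows, can be generated by a short script rather than typed in by hand.

```python
import csv
from datetime import date, timedelta

# Generate a semester's worth of weekly workshop rows instead of
# hand-entering them into a spreadsheet. Names and dates are made up.
start = date(2014, 9, 1)
rows = [("Week %d" % (i + 1),
         (start + timedelta(weeks=i)).isoformat(),
         "Data workshop")
        for i in range(14)]

with open("schedule.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["week", "date", "event"])
    writer.writerows(rows)
```

Of course, the hard part of the Services Team’s work is not typing rows; it’s the judgment about what goes in them, which is exactly the part scripts like this don’t touch.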

Eric S. Raymond’s classic (2001) essay How to Become a Hacker characterizes the Hacker Attitude, in five points:

  1. The world is full of fascinating problems waiting to be solved.
  2. No problem should ever have to be solved twice.
  3. Boredom and drudgery are evil.
  4. Freedom is good.
  5. Attitude is no substitute for competence.

There is no better articulation of the “ideology” of “tech folks” than this, in my opinion, yet Raymond is not used much as a source for understanding the idiosyncrasies of the technical industry today. Of course, not all “hackers” are well characterized by Raymond (I’m reminded of Coleman’s injunction to speak of “cultures of hacking”) and not all software engineers are hackers. (I’m sure my sister, a software engineer, is not a hacker. For example, based on my conversations with her, it’s clear that she does not find all the unsolved problems of the world intrinsically fascinating. Rather, she finds problems that pertain to some human interest, like children’s education, to be most motivating. I have no doubt that she is a much better software engineer than I am–she has worked full time at it for many years and now works for a top tech company. As somebody closer to the Raymond Hacker ethic, I recognize that my own attitude is no substitute for that competence, and hold my sister’s abilities in very high esteem.)

As usual, I appear to have forgotten where I was going with this.

frustrations with machine ethics

It’s perhaps because of the contemporary two cultures problem of tech and the humanities that machine ethics is in such a frustrating state.

Today I read danah boyd’s piece in The Message about technology as an arbiter of fairness. It’s one more baffling conflation of data science with neoliberalism. This time, the assertion was that the ideology of the tech industry is neoliberalism, hence their idea of ‘fairness’ is individualist and against the social fabric. It’s not clear what backs up these kinds of assertions. They are more or less refuted by the fact that industrial data science is obsessed with our networks of ties for marketing reasons. If anybody understands the failure of the myth of the atomistic individual, it’s “tech folks,” a category boyd uses to capture, I guess, everyone from marketing people at Google to venture capitalists to startup engineers to IBM researchers. You know, the homogeneous category that is “tech folks.”

This kind of criticism makes the mistake of thinking that a historic past is the right way to understand a rapidly changing present that is often more technically sophisticated than the critics understand. But critical academics have fallen into the trap of critiquing neoliberalism over and over again. One problem is that tech folks don’t spend a ton of time articulating their ideology in ways that are convenient for pop culture critique. Often their business models require rather sophisticated understandings of the market, etc. that don’t fit readily into that kind of mold.

What’s needed is substantive progress in computational ethics. Ok, so algorithms are ethically and politically important. What politics would you like to see enacted, and how do you go about implementing them? How do you do it in a way that attracts new users and is competitively funded, so that it can keep up with the changing technology we use to access the web? These are the real questions. There is so little effort spent trying to answer them. Instead there’s just an endless series of op-eds bemoaning the way things continue to be bad, because that is easier than having agency about making things better.

notes towards benign superintelligence j/k

Nick Bostrom will give a book talk on campus soon. My departmental seminar on “Algorithms as Computation and Culture” has opened with a paper on the ethics of algorithms and a paper on accumulated practical wisdom regarding machine learning. Of course, these are related subjects.

Jenna Burrell recently trolled me in order to get me to give up my own opinions on the matter, which are rooted in a philosophical functionalism. I’ve learned just now that these opinions may depend on obsolete philosophy of mind. I’m not sure. R. Scott Bakker’s blog post against pragmatic functionalism makes me wonder: what do I believe again? I’ve been resting on a position established when I was deeper into this stuff seven years ago. A lot has happened since then.

I’m turning into a historicist perhaps due to lack of imagination or simply because older works are more accessible. Cybernetic theories of control–or, electrical engineering theories of control–are as relevant, it seems, to contemporary debates as machine learning, which to the extent it depends on stochastic gradient descent is just another version of cybernetic control anyway, right?
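The analogy can be made concrete with a toy illustration (all the numbers here are invented): a stochastic gradient step on squared error is a negative-feedback loop, with the learning rate playing the role of the controller’s gain, steering an estimate toward a set point on the basis of noisy measurements.

```python
import random

random.seed(0)
target = 3.0   # the set point the loop should settle on
theta = 0.0    # current estimate, the controlled variable
gain = 0.1     # learning rate, i.e. the feedback gain

for _ in range(500):
    measurement = target + random.gauss(0, 0.5)  # noisy sensor reading
    error = theta - measurement                  # deviation from set point
    theta -= gain * error  # gradient step on (theta - measurement)**2 / 2

# theta settles near the target, tracking the mean of the measurements
print(theta)
```

Swap “measurement” for a training example and “theta” for a weight vector and this is the inner loop of most machine learning; swap them for a thermostat reading and a valve position and it is Wiener-era control.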

Ashwin Parameswaran’s blog post about Beniger’s Control Revolution illustrates this point well. To a first approximation, we are simply undergoing the continuation of prophecies of the 20th century, only more thoroughly. Over and over, and over, and over, and over, like a monkey with a miniature cymbal.


One property of a persistent super-intelligent infrastructure of control would be our inability to comprehend it. Our cognitive models, constructed over the course of a single lifetime with constraints on memory both in time and space, limited to a particular hypothesis space, could simply be outgunned by the complexity of the sociotechnical system in which it is embedded. I tried to get at this problem with work on computational asymmetry but didn’t find the right audience. I just learned there’s been work on this in finance which makes sense, as it’s where it’s most directly relevant today.

more on algorithms, judgment, polarization

I’m still pondering the most recent Tufekci piece about algorithms and human judgment on Twitter. It prompted some grumbling among data scientists. Sweeping statements about ‘algorithms’ do that, since to a computer scientist ‘algorithm’ is about as general a term as ‘math’.

In later conversation, Tufekci clarified that when she was calling out the potential problems of algorithmic filtering of the Twitter newsfeed, she was speaking to the problems of a newsfeed curated algorithmically for the sake of maximizing ‘engagement’. Or ads. Or, it is apparent on a re-reading of the piece, new members. She thinks an anti-homophily algorithm would maybe be a good idea, but that this is so unlikely, given the commercial logic of Twitter, as to be a marginal point. And, meanwhile, she defends ‘human prioritization’ over algorithmic curation, despite the fact that homophily (not to mention preferential attachment) is arguably a negative consequence of a social system driven by human judgment.

I think inquiry into this question is important, but bound to be confusing to those who aren’t familiar in a deep way with network science, machine learning, and related fields. It’s also, I believe, helpful to have a background in cognitive science, because that’s a field which maintains that human judgment and computational systems are doing fundamentally commensurable kinds of work. When we think in a sophisticated way about crowdsourced labor, we use this sort of thinking. We acknowledge, for example, that human brains are better at the computational task of image recognition, so we employ Turkers to look at and label images. But those human judgments are then inputs to statistical processes that verify and check those judgments against each other. Later, the determinations that result from a combination of human judgment and algorithmic processing could be used in a search engine–which returns answers to questions based on human input. Search engines, then, are also a way of combining human and purely algorithmic judgment.
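A stylized sketch of that last point, with an entirely invented index of crowd-labeled images: the tags come from human judgment, while the ranking of results for a query is purely algorithmic, here just a count of how many labelers applied the matching tag.

```python
# Hypothetical index: image -> tags assigned to it by human workers.
index = {
    "img1.jpg": ["cat", "cat", "sofa"],
    "img2.jpg": ["dog", "park"],
    "img3.jpg": ["cat", "garden", "cat", "cat"],
}

def search(query):
    """Rank images by how many human labelers applied the query tag."""
    scored = [(img, tags.count(query)) for img, tags in index.items()]
    return [img for img, score in sorted(scored, key=lambda s: -s[1])
            if score > 0]

print(search("cat"))  # ['img3.jpg', 'img1.jpg']
```

The point survives any amount of added sophistication: the retrieval step is mechanical, but the signal it ranks on is sedimented human judgment.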

What it comes down to is that virtually all of our interactions with the internet are built around algorithmic affordances. And these systems can be understood systematically if we reject the quantitative/qualitative divide at the ontological level. Reductive physicalism entails this rejection, but–and this is not to be understated–it pisses off or alienates people who do qualitative or humanities research.

This is old news. C.P. Snow’s The Two Cultures. The Science Wars. We’ve been through this before. Ironically, the polarization is algorithmically visible in the contemporary discussion about algorithms.*

The Two Cultures on Twitter?

It’s I guess not surprising that STS and cultural studies academics are still around and in opposition to the hard scientists. What’s maybe new is how much computer science now affects the public, and how the popular press appears to have allied itself with the STS and cultural studies view. I guess this must be because cultural anthropologists and media studies people are more likely to become journalists and writers, whereas harder science is pretty abstruse.

There’s an interesting conflation now, from the soft side of the culture wars, of science with power/privilege/capitalism, and it plays out again and again. I bump into it in the university context. I read about it all the time. Tufekci’s pessimism that the only algorithmic filtering Twitter would adopt would be one that essentially obeys the logic “of Wall Street” is, well, sad. It’s sad that a pairing that is analytically contingent is historically determined to be so.

But there is also something deeply wrong about this view. Of course there are humanitarian scientists. Of course there is a nuanced center to the science wars “debate”. It’s just that the tedious framing of the science wars has been so pervasive and compelling, like a commercial jingle, that it’s hard to feel like there’s an audience for anything more subtle. How would you even talk about it?

* I need to confess: I think there was some sloppiness in that Medium piece. If I had had more time, I would have done something to check which conversations were actually about the Tufekci article, and which were just about whatever. I feel I may have misrepresented this in the post, for the sake of accessibility or to make the point, I guess. Also, I’m retrospectively skittish about exactly how distinct a cluster the data scientists were, and whether their insularity might have been an artifact of the data collection method. I’ve been building out poll.emic in fits and starts, mainly as a hobby. I built it originally because I wanted to at last understand Weird Twitter’s internal structure. The results were interesting, but I never got to writing them up. Now I’m afraid that the culture has changed so much that I wouldn’t recognize it any more. But I digress. Is it even notable that social scientists from different disciplines would have very different social circles around them? Is the generalization too much? And are there enough nodes in this graph to make it a significant thing to say about anything, really? There could be thousands of academic tiffs I haven’t heard about that are just as important but which defy my expectations and assumptions. Or is the fact that Medium appears to have endorsed a particular small set of public intellectuals significant? How many Medium readers are there? Not as many as there are Twitter users, by several orders of magnitude, I expect. Who matters? Do academics matter? Why am I even studying these people as opposed to people who do more real things? What about all the presumably sane and happy people who are not pathologically on the Internet? Etc.
