Digifesto

“Weird Twitter” art experiment method notes and observations

First, I’ve got to say: Weird twitter definitely exists, and it is bigger and weirder than I imagined.

I want to write up some notes on my methodology for determining this, but I feel like some self-disclosure is in order.

I’m a PhD student with research interests that include community formation on the internet and collective intelligence. I’ve been studying theories about how communities establish their boundaries using symbols, and am also interested in “collective sensemaking.”

I am 27 years old and have been on the internet for long enough to know what I’m doing. I am really into conceptual art.

I’ve been aware of what I’ve referred to as “weird twitter” for some time, and have been curious what’s going on. I love it and love that it exists. But I didn’t know if it was real, or just something I was peripherally aware of because I followed a few people. It was much, much deeper than I had the patience to venture into at the time, but I had no sense of its scale.

Unfortunately, doing analysis on a gigantic unstructured digital social network turns out to be one of the big challenges of contemporary social science research. You either need to slurp a lot of data into something that crunches numbers, or you have to painstakingly research individuals in a tedious way that is not going to give you any general results on the scale necessary for this problem. So I tried a new method.

This method, which I don’t have a good name for, is basically: call it names, and see if it answers back. In other words, trolling.

I did this in August on a lark.

That post was an experiment.

  • Suppose “weird twitter” did not exist. Then there would be no reason for anybody to identify with its content. It would be a blog post lost in obscurity, like most of my blog posts.
  • But if “weird twitter” did exist, then there was a chance that it would react to its label in a statistically significant way.

I’ll adopt some speculative language for a moment: what if “weird twitter” were a kind of collective intelligence? Is it self-aware?

There are many theories of how self-awareness arises. Some believe that a person’s self-awareness depends on their interacting socially with others. It will not be a self until it is treated like a self. Until then, it will exist in some pre-conscious, animal state.

Others have argued that the Internet is creating a “global brain” of collective intelligence. This raises questions implicated by but far more interesting than the question of whether “corporations are people”. In what ways can a collection of people be a person? Do they need to self-identify as a community before that happens?

So many interesting questions.

Of course, if it were true that “weird twitter” were just a bunch of people telling jokes, and not a community, identity, culture, or collective intelligence, then a blog post about them would be meaningless and ephemeral.

For fun, I made the post extra obtuse.

I should say: “Weird twitter” seemed like a fun bunch, mostly just a bunch of jokers who don’t take things too seriously. So there was no way such a post would be taken seriously unless, well, I was wrong, and some people took it very, very seriously.

I have never gotten more hate spam in my life. Holy crap.

It is a really good thing I have a thick skin, because the amount of abuse I’ve put up with in the past 48 hours has been intense. There has also been a pretty epic amount of disdain and even a little attempted character assassination….

A note about this:

Ok, I need to address this directly, partly because it is the sort of thing that can really ruin one’s reputation, and partly because I think it raises some pretty interesting questions about feminism on the internet.

Kimmy (@aRealLiveGhost) is a talented poet whose work I generally like and have recommended to others who appreciate poetry. (I think her reconfigurations of @horse_ebooks tweets are her best work.) As far as I know, she got her start just tweeting authentically. At some point, she started posting pictures of herself along with her poetry. She also seemed alarmed by the number of followers she was getting.


That was in January, which was before she was a minor internet celebrity with thousands of followers. However, one source (see comments to this post) has noted considerable overlap between “weird twitter” and the feminist twitter landscape. In light of this whole art project/experiment thing, Kimmy referred to that tweet, which generated some discussion.

As I’ve learned, this comment bothered Kimmy, and I’ve apologized. As I’ve explained, my intention was to point out that there might be some connection between (especially a woman) posting cute pictures of herself on the internet and her suddenly getting a lot of attention on the internet. The recent Violentacrez scandal highlights the extremes of this, and why I might be concerned on her behalf.

This comment, which some have called “anti-woman”, has been variously interpreted as:

  • “mansplaining”, presumably because I should know already that all women on the internet know that putting cute pictures of themselves will get them a lot of attention/followers/whatever.
  • insinuating that Kimmy’s success (in terms of Twitter followers I guess?) is undeserved or only because she put pictures of herself on the internet.

I take feminism rather seriously and so I found these accusations pretty hurtful, actually. But then I thought about it and realized that taken together, they make no sense. So, I’m over it.

EDIT: In the ensuing discussion over this, I’ve learned a lot about how feminists think about this comment. It’s reasonable for women to suspect that somebody making such a comment has hostile or demeaning intentions, and that problem is especially exacerbated in low-bandwidth computer mediated communication such as Twitter, where so much is left to interpretation. I regret saying it.

In other words, the experiment was a wild success in terms of generating a significant reaction. However, the results coming in were literally all over the map: random hate, denial that the phenomenon existed, direct confirmation that the phenomenon existed, questioning of the meaning of it. A surprising number of people telling me I had “ruined” something, or “didn’t get” something.

Basically, there was every possible angle of existential crisis represented in the response from the collective consciousness of weird twitter.

Or maybe subconscious. Some people on Twitter seem to see it as primarily an expression of the subconscious. Which would explain why it hates getting called out so much.

These results were nonetheless inconclusive. Weird twitter was being awakened from its subconscious, unreflective slumber. So I gave it a kick.

This post was of course the kind of postmodern ironic half-joke that seems to be so characteristic of “weird twitter” but I guess it went over the heads of a lot of people.

There’s a legitimate concern here that this post involved what the academic and mainstream press has termed “cyberbullying”. But I made a calculated decision that people who were actively being dickish about the whole thing to me directly were asking for it and could handle being made fun of. In case anyone else was concerned (one person who contacted me was), I spoke with @bugbucket and @hellhomer and we’re cool.

The point of the second post was:

  • As a measurement instrument. I had a good indication that Weird Twitter really did exist. But how big is it? I’ve got analytics set up, and figured there was no reason for somebody who wasn’t part of weird twitter to want to read a post about weird twitter. This would give a rough order of magnitude estimate at least.
  • To test the theory that an on-line community exists partly by negotiating its own symbolic boundaries, and to see if it would achieve self-consciousness if pressed on the issue.
  • To generate more data about digital communities reacting to external reification. The nice thing about all this is that Twitter stores probably 99% of all the relevant communication for this kind of identity formation process (or the failure of it), so at some point somebody might dig it up and check it out in more detail.

In case you are wondering, if you were to ask me “How many people do you think are part of Weird Twitter?”, I’d now say “about 3,000”, if you operationalize “weird twitter” as “the number of people who care enough about being called out as Weird Twitter to read an article about it”. There may, of course, be multiple or overlapping weird twitters. Maybe other parts of the “weird twitter” landscape could be identified by referring to other patterns of behavior. (Maybe there’s a weird twitter that tells completely different jokes than the ones identified in the original post) Perhaps this only got to the most sensitive or curious bunch, those that actively click links. There’s also no accounting for factors like time zone.

Really the next thing to do would be to try to map out the actual social network structure.

Qualitatively, there were a lot of interesting reactions and questions raised in this process. I want to note them here before I forget:

  • Because of the tone of the initial post, I was estimated to be older than I am, and I got some criticism that I was some weird old guy invading somebody else’s space. One person, presumably a teenager, tweeted angrily that I was exploiting teenagers.

  • Lots of people reacted to the feeling of being watched or categorized. That’s ironic, because what people post on Twitter is openly available, and many of the members of this community have literally thousands of “followers”. And, Twitter data as a whole is being slurped and analyzed and categorized all the time algorithmically for research and marketing purposes. The amount of outrage created by a blog post that WASN’T based on observation of most of the system suggests that people in Weird Twitter really don’t get this.
  • One of the smartest responses I saw was somebody who suggested making their posts more private to avoid having them looked at by people like me. Yes, that is correct. I was pulling a prank on you. I am the least of your problems.
  • Those who I guess you could call the “thought leaders” in the Weird Twitter community are experts at managing information flow. While several members of the community passed around links to my post directly, others were quite deliberate in posting links to images that would not be traced back here. My favorite posts were those that obliquely acknowledged there was a controversy going on with no navigable links at all.
  • I was definitely “othered” throughout the whole process, despite the fact that I’ve been using Twitter and interacting with a few of the members of this community in a peripheral way for a while, and the claims by some of its members that it’s just a community of people making jokes that anyone can join. (If it is the latter, then I declare myself a member.) Since its central members appear to have more followers than they can keep track of, it’s not surprising that they would see me as an outsider, especially given the estranged language and alternative platform of the blog post. @hellhomer‘s observation that I was unqualified to comment on the community because I only shared a small number of connections was evidence that online community membership can be operationalized as membership in a quasi-clique structure.
  • A lot of people assumed I’m planning on writing an academic article about this, and thought that would be exploitative. In reality, I think there’s no way in hell I would get this past the IRB. This was performance art. Y’all are suckers. Among the funniest were the people that got on my case about the flimsiness of my analysis or research methods. Funniest of all was the person that told me I really ought to be referencing Bruno Latour.
  • But, one day yeah maybe I’ll write an article about Weird Twitter. Obviously I’d go about it totally differently, though I might start with leads I’ve gotten through this project. I do believe that the best way to study radically transparent on-line communities is through radically transparent research (thanks Mel for introducing me to this term), which this experiment was an exercise in.
  • Who the hell posted this quora post on weird twitter? What’s their angle? Their insight that Weird Twitter is like the /b/ of Twitter is a bold claim, because few communities have had as great an impact on internet culture as /b/. Have any significant memes originated in Weird Twitter and escaped into the wild? Unclear. Are there other, similarly creative and unregulated pseudonymous communities in other social media?
  • I’ve been asked by one tweeter to ‘please explore the carefucker vs jokeman split amongst “weird twitter”‘. That is a useful research lead if I’ve ever seen one. “Carefucker” has not yet hit Urban Dictionary, but I guess the term is self-explanatory. Ironically, in my observations the most polished “jokemen” were also the most strategic and guarded about their references to being labeled, while the most authentically absurd appeared to be “carefucking”. I suspect that some folks were trying hard to be cool.
  • A significant portion of the reactions were people upset that I had “ruined” their “thing”, that thing which may or may not be weird twitter. If I had to guess, this is due to the perception that blog posts are less ephemeral than tweets, which is true, but also the illusion that what is phenomenologically ephemeral for them isn’t in fact permanent. As I said in my second post, there’s a weird power dynamic at work between blogs and tweets. But this is absurd. Because, if your attention span has been trained on blogs and not tweets, you realize that blog posts, too, are historically ephemeral. Most of the traffic to this post has been from Twitter itself. It is an artifact produced by Weird Twitter, not (as it has been accused of being) a voyeuristic or surveilling observation made on it from without. If this post has any significance within the history of that community, it will only be because the community’s consciousness of itself led to a kind of dissolution (or suicide), or because its significance has outshone its containment within Twitter itself. Only time will tell on that one.
  • I have heard a lot of complaints about the prominence of internet trolls sending death threats to especially feminist bloggers. I find that really interesting, because I generally appreciate feminists and do some research on internet security. It was pretty shocking how much vitriol I got exposed to for writing a blog post describing an internet community that maybe didn’t exist. I’ve assumed for the purpose of writing this that those people who attacked me were somehow motivated by anger at the blog post. But wouldn’t that be completely batshit? I mean, look at that first blog post. It’s dumb. I have an alternative theory, which is that there is a population on the internet that opportunistically hates on anybody who they think they can get away with hating on. This is a testable hypothesis, which if true would simplify the problem of cleaning up the mess. If hate speech on the internet were considered less a political issue and more an issue like spam detection and removal, I think the Internet would be a better place.
  • If you’ve read this far, then thank you for your interest. I’ve found this a very rewarding and insightful experience, and I hope you have gotten something out of it as well.

Weird Twitter: The Symbolic Construction of Community through Iterative Reification

Is there such a thing as Weird Twitter?

Earlier I wrote a blog post based on my (non-participant) observations of a Twitter subculture. Recently, there’s been some activity around it in the tweetosphere itself, which sheds light on how reification affects community development in social media.

This guy has almost 10,000 followers. Conversation analysis techniques indicate that he is upset at the reification of ‘weird twitter’ by an Other, ‘nerds’.

I have not yet been able to trace the reaction to these observations fully within Twitter itself. But preliminary results indicate discomfort within the alleged ‘weird twitter’ community as it negotiates its own boundaries in digital space.

https://twitter.com/Mobute/status/258370765492207618

Most of the reaction to the original post was dismissive (though I notice due to a sharp uptick in blog traffic that several people were intrigued enough to google for the post). But it just takes one stoned kid freaking out to escalate an irresponsible exaggeration of the truth into a reality.

https://twitter.com/BairaagiVN/status/258372824429912064

This guy then began tweeting indignantly, apparently offended that I would refer to ‘weird twitter’ without being part of his social circle.

https://twitter.com/hell_homer/status/258379366130663424

Twitter user @hell_homer (whose avatar depicts the popular Simpsons character, Homer, in hell) appears to not appreciate the irony that by reaching out to the people who might be concerned about the ‘weird twitter’ label, he is symbolically constructing the weird twitter community within the digital social space. @hell_homer autoreifies ‘weird twitter’ through his very acts of resistance.

He’s been going on like this now for like four hours.

Psychoanalytically, we might infer that @hell_homer suffers severe cognitive dissonance over his identification with “weird twitter”. Perhaps he identifies so strongly with weird twitter that he is offended by having the term appropriated by an outsider. Or perhaps he is concerned about his centrality within said ‘weird twitter’ community, and so seeks to embed himself further in it by taking responsibility for negotiating its boundaries.

https://twitter.com/hell_homer/status/258401669979721729

He persists in denial, weaving himself a cocoon of spite.

Other members of this community are willing to volunteer contact information about its central figures.

https://twitter.com/YoungCaesar2000/status/258392244493635584

This suggests that ‘weird twitter’, rather than being a distributed social network, is rather more like a cult of personality, or personalities. Given the heavy-tailed distribution of followers within Twitter and the immediacy of communication (no distance perceived by audience from “speaker”, even when the speaker has thousands of followers), this seems likely prima facie.

Others within ‘weird twitter’ react more violently to the application of the label:

Others became depressed:

https://twitter.com/bugbucket/status/258369378549108736

…but also recognized, at least subliminally, the “threat” of having one’s publicly facing community, whose members have tens of thousands of followers, “discovered” by internet media:

https://twitter.com/bugbucket/status/258369674419507200

Is it the threat of exposure that is threatening? Or is it the reifying gaze that comes with it? And how is that gaze constructed within the community as it is observed?

https://twitter.com/bugbucket/status/258370123168096256

Here we see a single act of observation abstracted into “people” who want to categorize “every” community on the internet. The initially dismissed ‘hogwash’ has become, through the symbolic construction of ‘weird twitter’ itself, a surveilling conspiracy, placed firmly in opposition.

This user then proceeded to tweet a piece of microfiction prophesying the future of his community.

Speculation: as an earlier and more persistent mode of internet discourse, blogs are viewed by digital natives who primarily use Twitter as a social networking platform as a matured, out-of-touch, and marginally more socially powerful force. Moreover, academic language’s distinction from the vernacular echoes the power dynamics of meatspace into the digitally dual virtual world. These power dynamics problematize organic community growth.

telecom security and technonationalism

EDIT: This excellent article by Farhad Manjoo has changed my mind or at least my attitude about this issue. Except for the last paragraph, which I believe is a convergent truth.

Reading this report about the U.S. blocking Huawei telecom components in government networks is a bit chilling.

The U.S. invests a lot of money into research of anti-censorship technology that would, among other things, disrupt the autocratic control China maintains over its own network infrastructure.

So from the perspective of the military, telecommunications technology is a battlefield.

I think rightly so. The opacity and centrality of telecommunications, and the difficulty of tracing cyber-security breaches, make these risky decisions.

The Economist’s line is:

So what is needed most is an international effort to develop standards governing the integrity and security of telecoms networks. Sadly, the House Intelligence Committee isn’t smart enough to see this.

That’s smug and doesn’t address the real security concerns, or the immense difficulty of establishing international standards on telecom security, let alone guaranteeing the implementation of those standards.

However, an easier solution than waiting for agreement among a standards body would be to develop an open hardware specification for the components that met the security standards and a system for verifying them. That would encourage a free market on secure telecom hardware, which Huawei and others could participate in if they liked.

a thoroughly academic dispute about the nature of “information”

I had an interesting conversation with Ashwin Matthew, a colleague in my department. Among other things, we were talking about the construction of the modern concept of information. You might expect that at a School of Information we’d have this issue thoroughly ironed out by now, but in fact, the opposite.

Doing my best to characterize the debate, it’s this: information seems to be very important. Why is that? Is it because a bunch of institutions and/or social powers have an interest in promoting a concept of information that reinforces their power? Or is it because there is a real thing, which we call information, which is out there in the world and influential whether we understand it or not? To put it another way: is “information” an ideological construct or does it pick out something real in the world?

Ashwin calls the social order that depends on the information ideology the Information Society, which passes the Wikipedia test for ‘is it real?’. So does information per se.

I want to call the view that “information” is an ideological construct information antirealism. An argument for information antirealism comes from Geoffrey Nunberg’s “Farewell to the Information Age”, which notes that the modern usage of “information” is historically recent and asserts that this usage is the result of a confusion (either intentional or otherwise) of other meanings. I want to call the view that information is a real thing in the world that we should be paying attention to like scientists information realism.

I would say that I am an information realist. However, if there was ever somebody thoroughly embedded in the information society, it would be me. So, naturally, I would believe the ideology of that society, which is information realism. Ashwin, I believe, would rather bracket the term “information” out so as to be able to critique the Information Society. That seems like a worthy endeavor to me.

But I’ve got to say that I think “information” as a concept has more going for it than its ideological role in supporting the information society. Rather, I think it supports the information society because it reflects a truth about the universe. Just because there is a lot of hype around something doesn’t make it unreal. The Bronze Age was pretty psyched about bronze (I’m guessing). There was certainly a class of people who benefited from bronzeworking. But what made them so powerful wasn’t “bronze” as an ideological construct, but rather their mastery of a part of reality, bronze.

By way of analogy, I’ve been reading Piaget’s “The Child’s Conception of Time” for a class. In it, Piaget describes the cognitive evolution of the concept of time. Piaget relates that as toddlers, children confuse time and space, thinking that something that has traveled greater distance must have taken more time (not understanding how velocity plays a mediating role). Later, children develop “articulate intuitions” of things like succession and duration, but these do not fall into any kind of ‘operational order’. By this Piaget means that when we discuss timing and aging with children at this stage, they will articulate statements about succession and duration but what they say will seem to be full of contradictions or naivete. For example, they will believe that over time, their age will “catch up” with adults. At the last stage, the child achieves an “operational synthesis” and discovers or constructs a sense of time that is homogeneous and uniform and the sort of thing that allows you to effectively use clocks.

So, here’s a proposal: that the modern concept of information is an “operational synthesis” of other, more naive “articulate intuitions” about information that have occurred historically.

What is an “operational synthesis”? Well, it’s constructed. But it’s a construct that gets a lot done. You might say that a good operational synthesis is true, at the very least in a pragmatist sense.

So, maybe our modern understanding of “information” is as legitimate as the modern understanding of “time”. Which is to say: resistance is useless, prepare to be assimilated into the Information Society.

Follow the money back to the media

I once cared passionately about the impact of money in politics. I’ve blogged about it here a lot. Long ago I campaigned for fair elections. I went to work at a place where I thought I could work on tools to promote government transparency and electoral reform. This presidential election, I got excited about Rootstrikers. I vocally supported Buddy Roemer. Of course, the impact of any of these groups is totally marginal, and my impact within them even more so. Over the summer, I volunteered at a Super PAC, partly to see if there was any way the system could be improved from the inside. I found nothing.

I give up. I don’t believe there’s a way to change the system. I’m going to stop complaining about it and just accept the fact that democracy is a means of balancing different streams of money and power, full stop.

There is a silver lining to the cloud. The tools for tracking where campaign donations are coming from are getting better and better. MapLight, for example, seems to do great work. So now we can know which interests are represented in politics. We can sympathize with some and condemn others. We can cheer for our team. Great.

But something that’s often omitted in analysis of money in politics is: where does it go?

So far the most thorough report I’ve been able to find on this (read: first viable google hit) was this PBS News Hour. It breaks it down pretty much as you would expect. The money goes to:

  • Television ads. Since airtime is limited, political ads start airing very early on.
  • Political consultants who specialize in election tactics.
  • Paid canvassers, knocking door-to-door or making phone calls to engage voters.

Interesting that so much of the money flows to media outlets, who presumably raise prices for advertising when candidates are competing for it with deep pockets. So… the mainstream media benefits hugely from boundless campaign spending.

Come to think of it, it must be that the media benefits much more than politicians or donors from the current financing system. Why is that? A campaign is a zero-sum game. Financially backing a candidate is taking a risk on their loss, and in a tight race one is likely to face fierce competition from other donors. But the outlets that candidates compete over for airtime and the consultants who have “mastered” the political system get to absorb all that funding without needing any particular stake in the outcome of the election. (Once in office, can a politician afford to upset the media?)

Who else benefits from campaign spending? Maybe the telecom industry, since all the political messaging has to run over it.

Maybe this analysis has something to do with why generating political momentum around campaign finance reform is a grueling uphill battle. Because the more centralized and powerful a media outlet, the more it has to gain from expensive campaign battling. It can play gatekeeper and sell passage to the highest bidder.

Taking it one step farther: since the media, through its selection of news items, can heavily influence voters’ perception of candidates, it is in their power to calibrate their news in a way that necessitates further spending by candidates.

Suppose a candidate is popular enough to win an election by a landslide. It would be in the interests of media outlets to start portraying that candidate badly, highlighting their gaffes or declaring them to be weak or whatever else, to force the candidate to spend money on advertising to reshape the public perception of them.

What a racket.

dreams of reason

Begin insomniac academic blogging:

Dave Lester has explained his strategy in graduate school as “living agily”, a reference to agile software development.

In trying to navigate the academic world, I find myself sniffing the air through conversations, email exchanges, tweets. Since this feels like part of my full time job, I have been approaching the task with gusto and believe I am learning rapidly.

Intellectual fashions shift quickly. A year ago I first heard the term “digital humanities”. At the time, it appeared to be controversial but on the rise. Now, it seems like something people are either disillusioned with or pissed about. (What’s this based on? A couple conversations this week, a few tweets. Is that sufficient grounds to reify a ‘trend’?)

I have no dog in that race yet. I can’t claim to understand what “digital humanities” means. But from what I gather, it represents a serious attempt to approach text in its quantitative/qualitative duality.

It seems that such a research program would: (a) fall short of traditional humanities methods at first, due to the primitive nature of the tools available, (b) become more insightful as the tools develop, and so (c) be both disgusting and threatening to humanities scholars who would prefer that their industry not be disrupted.

I was reminded through an exchange with some Facebook Marxists that Hegel wrote about the relationship between the quantitative and the qualitative. I forget if quantity was a moment in transition to quality, or the other way around, or if they bear some mutual relationship, for Hegel.

I’m both exhausted about and excited that in order to understand the evolution of the environment I’m in, and make strategic choices about how to apply myself, I have to (re?)read some Hegel. I believe the relevant sections are this and this from his Science of Logic.

This just in! Information about why people are outraged by digital humanities!

There we have it. Confirmation that outrage at digital humanities is against the funding of research based on the assumption that “formal characteristics of a text may also be of importance in calling a fictional text literary or non-literary, and good or bad”–i.e., that some aspects of literary quality may be reducible to quantitative properties of the text.

A lot of progress has been made in psychology by assuming that psychological properties–manifestly qualitative–supervene on quantitatively articulated properties of physical reality. The study of neurocomputation, for example, depends on this. This leads to all sorts of cool new technology, like prosthetic limbs and hearing aids and combat drones controlled by dreaming children (potentially).

So, is it safe to say that if you’re against digital humanities, you are against the unremitting march of technical progress? I suppose I could see why one would be, but I think that’s something we have to take a gamble on, steering it as we go.

In related news, I am getting a lot out of my course on statistical learning theory. Looking up something about what I’ve been learning that I wanted to include in this post, I found this funny picture:

One thing that’s great about this picture is how it makes explicit how, in a model of the mind adopted by statistical cognitive science theorists, The World is understood by us through a mentally internal Estimator whose parameters are strictly speaking quantitative. They are quantitative because they are posited to instantiate certain algorithms, such as those derived by statistical learning theorists. These algorithmic functions presumably supervene on a neurocomputational substrate.

But that’s a digression. What I wanted to say is how exciting belief propagation algorithms for computing marginal probabilities on probabilistic graphical models are!

What’s exciting about them is the promise they hold for the convergence of opinion onto correct belief based on a simple algorithm. Each node in a network of variables listens to all of its neighbors. Occasionally (on a schedule whose parameters are free for optimization to context) the node will synthesize the state of all of its neighbors except one, then push that “message” to its neighbor, who is listening…

…and so on, recursively. This algorithm has nice mathematically guaranteed convergence properties when the underlying graph has no cycles. Meaning, on a tree the algorithm computes the exact marginal probabilities of the nodes in a bounded number of message-passing steps.

It also has some nice empirically determined properties when the underlying graph has cycles.
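The message-passing schedule described above can be sketched in a few lines. This is a minimal sum-product sketch on a hypothetical three-node chain of binary variables (all potentials are made-up example numbers, not anything from a real model); since a chain has no cycles, the resulting beliefs agree with brute-force enumeration:

```python
import itertools

# Hypothetical pairwise model on a chain: 0 - 1 - 2 (cycle-free, so BP is exact).
edges = [(0, 1), (1, 2)]
psi = {e: [[2.0, 1.0], [1.0, 2.0]] for e in edges}   # pairwise "agreement" potentials
phi = {0: [1.0, 1.0], 1: [1.0, 3.0], 2: [2.0, 1.0]}  # unary potentials
neighbors = {0: [1], 1: [0, 2], 2: [1]}

def pair_pot(i, j, xi, xj):
    return psi[(i, j)][xi][xj] if (i, j) in psi else psi[(j, i)][xj][xi]

# m[(i, j)] is the message node i pushes to neighbor j, initialized uniform
m = {(i, j): [1.0, 1.0] for i in neighbors for j in neighbors[i]}

for _ in range(10):  # a few sweeps suffice on a three-node chain
    for i in neighbors:
        for j in neighbors[i]:
            new = []
            for xj in (0, 1):
                # synthesize the state of all neighbors except j, then push to j
                total = 0.0
                for xi in (0, 1):
                    prod = phi[i][xi] * pair_pot(i, j, xi, xj)
                    for k in neighbors[i]:
                        if k != j:
                            prod *= m[(k, i)][xi]
                    total += prod
                new.append(total)
            z = sum(new)
            m[(i, j)] = [v / z for v in new]

def bp_marginal(i):
    # belief at node i: unary potential times all incoming messages
    b = [phi[i][x] for x in (0, 1)]
    for k in neighbors[i]:
        b = [b[x] * m[(k, i)][x] for x in (0, 1)]
    z = sum(b)
    return [v / z for v in b]

def brute_marginal(i):
    # exact marginal by summing over all joint assignments
    marg = [0.0, 0.0]
    for assign in itertools.product((0, 1), repeat=3):
        p = 1.0
        for v in range(3):
            p *= phi[v][assign[v]]
        for (a, b) in edges:
            p *= pair_pot(a, b, assign[a], assign[b])
        marg[assign[i]] += p
    z = sum(marg)
    return [v / z for v in marg]
```

On a graph with cycles the same update rule can be run as "loopy" BP, dropping the exactness guarantee but often still converging in practice.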

The metaphor is loose, at this point. If I could dream my thesis into being at this moment, it would be a theoretical reduction of discourse on the internet (as a special case of discourse in general) to belief propagation on probabilistic graphical models. Ideally, it would have to account for adversarial agents within the system (i.e. it would have to be analyzed for its security properties), and support design recommendations for technology that catalyzed the process.

I think it’s possible. Not done alone, of course, but what projects are ever really undertaken alone?

Would it be good for the world? I’m not sure. Maybe if done right.

several words, all in a row, some about numbers

I am getting increasingly bewildered by the number of different paradigms available in academic research. Naively, I had thought I had a pretty good handle on this sort of thing coming into it. After trying to tackle the subject head on this semester, I feel like my head will explode.

I’m going to try to break down the options.

  • Nobody likes positivism, which went out of style when Wittgenstein refuted his own Tractatus.
  • Postpositivists say, “Sure, there isn’t really observer-independent inquiry, but we can still approximate that through rigorous methods.” The goal is an accurate description of the subject matter. I suppose this fits into a vision of science being about prediction and control of the environment, so generalizability of results would be considered important. I’d argue that this is also consistent with American pragmatism. I think “postpositivist” is a terrible name and would rather talk/think about pragmatism.
  • Interpretivism, which seems to be a more fashionable term than antipositivism, is associated with Weber and Frankfurt school thinkers, as well as a feminist critique. The goal is for one reader (or scholarly community?) to understand another. “Understanding” here is understood intersubjectively–“I get you”. Interpretivists are skeptical of prediction and control as provided by a causal understanding. At times, this skepticism is expressed as a belief that causal understanding (of people) is impossible; other times it is expressed as a belief that causal understanding is nefarious.

Both teams share a common intellectual ancestor in Immanuel Kant, whom few people bother to read.

Habermas has room in his overarching theory for multiple kinds of inquiry–technical, intersubjective, and emancipatory/dramaturgical–but winds up getting mobilized by the interpretivists. I suspect this is the case because research aimed at prediction and control is better funded, because it is more instrumental to power. And if you’ve got funding there’s little incentive to look to Habermas for validation.

It’s worth noting that mathematicians still basically run their own game. You can’t beat pure reason at research. Much computer science research falls into this category. Pragmatists will take advantage of mathematical reasoning. I think interpretivists find mathematics a bit threatening because it seems like the only way to “interpret” mathematicians is by learning the math that they are talking about. When intersubjective understanding requires understanding verbatim, that suggests the subject matter is more objectively true than not.

The gradual expansion of computer science towards the social sciences through “big data” analysis can be seen as a gradual expansion of what can be considered under mathematical closure.

Physicists still want to mathematize their descriptions of the universe. Some psychologists want to mathematize their descriptions. Some political scientists, sociologists, etc. want to mathematize their descriptions. Anthropologists don’t want to mathematize their descriptions. Mathematization is at the heart of the quantitative/qualitative dispute.

It’s worth noting that there are non-mathematized predictive theories, as well as mathematized theories that pretty much fail to predict anything.

digital qualities: some meditations on methodology

Text is a kind of data that is both qualitative (interpretable for the qualities it conveys) and quantitative (characterized by certain amounts of certain abstract tokens arranged in a specific order).

Statistical learning techniques are able to extract qualitative distinctions from quantitative data, through clustering processes for example. Non-parametric statistical methods allow qualitative distinctions to be extracted from quantitative data without specifying particular structure or features up front.

Many cognitive scientists and computational neuroscientists believe that this is more or less how perception works. The neurons in our eyes (for example) provide a certain kind of data to downstream neurons, which activate according to quantifiable regularities in neuron activation. A qualitative difference that we perceive is due to a statistical aggregation of these inputs in the context of a prior, physically definite, field of neural connectivity.

A source of debate in the social sciences is the relationship between qualitative and quantitative research methods. As heirs to the methods of harder sciences whose success is indubitable, quantitative research is often assumed to be credible up to the profound limits of its method. A significant amount of ink has been spilled distinguishing qualitative research from quantitative research and justifying it in the face of skeptical quantitative types.

Qualitative researchers, as a rule, work with text. This is trivially true, because a limiting condition of qualitative research appears to be the creation of a document explicating the research conclusions. But if we are to believe several instructional manuals on qualitative research, then the work of, say, an ethnographer involves jottings, field notes, interview transcripts, media transcripts, coding of notes, axial coding of notes, theoretical coding of notes, or, more broadly, the noting of narratives (often written down), the interpreting of text, a hermeneutic exposition of hermeneutic expositions ad infinitum down an endless semiotic staircase.

Computer assisted qualitative data analysis software passes the Wikipedia test for “does it exist”.

Data processed by computers is necessarily quantitative. Hence, qualitative data, once digitized, is necessarily quantitative. This is unsurprising, since so much qualitative data is text. (See above).

We might ask: what makes the work qualitative researchers do qualitative as opposed to quantitative, if the data they work with is quantitative? We could answer: it’s their conclusions that are qualitative.

But so are the conclusions of a quantitative researcher. A hypothesis is, generally speaking, a qualitative assessment that is then operationalized into a prediction whose correspondence with data can be captured quantitatively through a statistical model. The statistical apparatus is meant to guide our expectations of the generalizability of results.
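To make that operationalization concrete, here is a minimal sketch: a qualitative hypothesis (“group A scores higher than group B”) operationalized as a difference of means and assessed with a permutation test. The scores are made-up example numbers, and the permutation test is just one of many statistical apparatuses one could choose:

```python
import random

random.seed(0)

# Hypothetical scores for two groups; the qualitative claim is "A > B".
a = [5.1, 4.8, 5.6, 5.3, 4.9]
b = [4.2, 4.5, 3.9, 4.4, 4.1]

def mean(xs):
    return sum(xs) / len(xs)

# Operationalization: the qualitative claim becomes a number.
observed = mean(a) - mean(b)

# Permutation test: how often would random group labels produce a
# difference at least this large?
pooled = a + b
trials = 10000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
    if diff >= observed:
        count += 1
p_value = count / trials
```

A small `p_value` quantifies how surprising the qualitative claim would be under the null hypothesis of no group difference, which is exactly the kind of guidance about generalizability the statistical apparatus is meant to provide.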

Maybe the qualitative researcher isn’t trying to get generalized results. Maybe they are just reporting a specific instance. Maybe generalizations are up to the individual interpreter. Maybe social scientific research can only apply and elaborate on an ideal type, tell a good story. All further insight is beyond the purview of the social sciences.

Hey, I don’t mean to be insensitive about this, but I’ve got two practical considerations: first, do you expect anyone to pay you for research that is literally ungeneralizable? That has no predictive or informative impact on the future? Second, if you believe that, aren’t you basically giving up all ground on social prediction to economists? Do you really want that?

Then there’s the mixed methods researcher. Or, the researcher who in principle admits that mixed methods are possible. Sure, the quantitative folks are cool. We’d just rather be interviewing people because we don’t like math.

That’s alright. Math isn’t for everybody. It would be nice if computers did it for us. (See above)

What some people say is: qualitative research generates hypotheses, quantitative research tests hypotheses.

Listen: that is totally buying into the hegemony of quantitative methods by relegating qualitative methods to an auxiliary role with no authority.

Let’s accept that hegemony as an assumption for a second, just to see where it goes. All authority comes from a quantitatively supported judgment. This includes the assessment of the qualitative researcher.

We might ask, “Where are the missing scientists?” about qualitative research, if it is to have any authority at all, even in its auxiliary role.

What would Bruno Latour do?

We could locate the missing scientists in the technological artifacts that qualitative researchers engage with. The missing scientists may lie within the computer assisted qualitative data analysis software, which dutifully treats qualitative data as numbers, and tests the data experimentally and in a controlled way. The user interface is the software’s experimental instrument, through which it elicits “qualitative” judgments from its users. Of course, to the software, the qualitative judgments are quantitative data about the cognitive systems of the software’s users, black boxes that nevertheless have a mysterious regularity to them. The better the coding of the qualitative data, the better the mysteries of the black box users are consolidated into regularities. From the perspective of the computer assisted qualitative data analysis software, the whole world, including its users, is quantitative. By delegating quantitative effort to this software, we conserve the total mass of science in the universe. The missing mass is in the software. Or, maybe, in the visual system of the qualitative researchers, which performs non-parametric statistical inference on the available sensory data as delivered by photoreceptors in the eye.

I’m sorry. I have to stop. Did you enjoy that? Did my Latourian analysis convince you of the primacy or at least irreducibility of the quantitative element within the social sciences?

I have a confession. Everything I’ve ever read by Latour smells like bullshit to me. If writing that here and now means I will never be employed in a university, then may God have mercy on the soul of academe, because its mind is rotten and its body dissolute. He is obviously a brilliant man but as far as I can tell nothing he writes is true. That said, if you are inclined to disagree, I challenge you to refute my Latourian analysis above, else weep before the might of quantification, which will forever dominate the process of inquiry, if not in man, then in our robot overlords and the unconscious neurological processes that prefigure them.

This is all absurd, of course. Simultaneously accepting the hegemony of quantitative methods and Latourian analysis has provided us with a reductio ad absurdum that compels us to negate some assumptions. If we discard Latourian analysis, then our quantitative “hegemony” dissolves as more and more quantitative work is performed by unthinking technology. All research becomes qualitative, a scholarly consideration of the poetry outputted by our software and instruments.

Nope, that’s not it either. Because somebody is building that software and those instruments, and that requires generalizability of knowledge, which so far qualitative methods have given up a precise claim to.

I’m going to skip some steps and cut to the chase:

I think the quantitative/qualitative distinction in social scientific research, and in research in general, is dumb.

I think researchers should recognize the fungibility of quantity and quality in text and other kinds of data. I think ethnographers and statistical learning theorists should warmly embrace each other and experience the bliss that is finding one’s complement.

Goodnight.

Web “Atrocity” Tourism

Just learned from Joe Paul about a web site that used to exist called Portal of Evil. This site specialized in “web atrocity tourism”–linking to and commenting on the weirdest and scariest corners of the internet.

The most comprehensive account I can find of the site is on a Furry community wiki. It was apparently not primarily a troll community, but a hobby sport for tourists aware of their impact on the ecosystem:

Contrary to popular belief, the great majority of PoE members did not organize attacks on the websites listed on PoE. Rule #1 of the PoE forum rules, also known as the Prime Directive (or PD for short), stated: “Do not contact the site listed. Do not email the site owner. Do not post in their guestbooks. Do not post in their forums. If the site owners want to interact with visitors from PoE they can come to these forums.” PoE members were to keep their criticisms of the site to the PoE forums, even if the site owners posted to PoE of their own accord.

The site was taken down last year after five years of dormancy. One user laments that archives of the site have been actively discouraged, pointing out that that’s a shame, since it was a valuable trove of research and curation of the early web.

Which is totally right and totally frustrating. But apparently digital records of it still exist and there are spin-off communities. People can be tracked down and interviewed. So, all is not lost.

Fundamentally, it’s a really beautiful thing that the Internet helps people build communities around the ways that they are strange or perhaps crazy. Sure, a tiny percentage of it is morally objectionable, but most of it is just really funny.

And, think about it: today’s weirdos on the Internet are tomorrow’s identity groups asking for civil social treatment. While these groups will no doubt have plenty of their own records of their early history, contemporaneous ‘objective’ or critical perspectives will be important to them and maybe others as well. They are probably demanding civil treatment as I write this, only their struggle for social equality is so obscure that I have never had a reason to care.

Like a lot of white guys who were picked on sometimes as kids, I’ve got an abstract kind of solidarity with those folks that isn’t rooted in much personal experience of oppression. Call it compassion, if you want. I’m going to go ahead and let that be my dominant attitude on the matter until I hear about the specifics of some of those communities and maybe become revolted out of some taboo mentality or, more likely, bored. You should get on board with this if you aren’t already. It costs you nothing and can give you something to talk about, like Great Ape Personhood.

Field notes and PSA: Weird Twitter

For some time I’ve been following an emerging subculture on Twitter. I have referred to it occasionally as “stoner Twitter poets”, but as it attains consciousness of itself as a phenomenon, it has given itself a name: weird twitter.

Weird twitter posts tend to be of the following forms:

  • A brutally sincere statement of personal perspective, often with philosophical or spiritual sentiment, but just as often profane
  • Noun phrases referring to surreal compositions of objects
  • “sext:” followed by a declaration of attraction that is often only peripherally erotic (to humorous effect)
  • Norm-building posts on appropriate twitter behavior, the state of weird twitter, and discussion of ‘favstars’

Interestingly, while Facebook “Likes” are generally derided, the “weird twitter” community holds the “favstar” in high esteem, as it does pyramids and bots (especially ‘spam’ bots, like Horse Ebooks, which are used as source material for aleatoric poetry). And, though romance and attraction are frequently discussed, “weird twitter” discourse is almost completely devoid of explicit sexual content, despite its otherwise transgressive qualities.