Digifesto

Theorizing the Web and SciPy conferences compared

I’ve just been through two days of tutorials at SciPy 2014–that stands for Scientific Python (the programming language). The last conference I went to was Theorizing the Web 2014. I wonder if I’m the first person to ever go to both conferences. Since I see my purpose in grad school as being a bridge node, I think it’s worthwhile to write something comparing the two.

Theorizing the Web was held in a “gorgeous warehouse space” in Williamsburg, the neighborhood of Brooklyn, New York that was full of hipsters ten years ago and is now full of baby carriages but still has gorgeous warehouse spaces and loft apartments. The warehouse spaces are actually gallery spaces that only look like warehouses from the outside. On the inside of the one where TtW was held, whole rooms with rounded interior corners were painted white, perhaps for a photo shoot. To call it a “warehouse” is to invoke the blue-collar, industrial origins that Brooklyn gentrifiers use to distinguish themselves from the elites in Manhattan. During my visit to New York for the conference, I crashed on a friend’s air mattress in the Brooklyn neighborhood I had been gentrifying just a few years earlier. The speakers included empirical scientific researchers, but these were not the focus of the event. Rather, the emphasis was on theorizing in a way that is accessible to the public. The most anticipated speaker was a porn actress. Others were artists or writers of one sort or another. One was a sex worker who then wrote a book. Others were professors of sociology and communications. Another was a Buzzfeed editor.

SciPy is taking place in the AT&T Education and Conference Center in Austin, Texas, near the UT Austin campus. I’m writing from the adjoining hotel. The conference rooms we are using are in the basement; they seat many in comfortable mesh rolling chairs on tiers so everybody can see the dual projector screens. The attendees are primarily scientists who do computationally intensive work. One is a former marine biologist who now works mainly in bioinformatics. Another team does robotics. Another does image processing on electron microscope images of chromosomes. They are not trying to be accessible to the public. What they are trying to teach is hard enough to get across to others with similar expertise. It is a small community trying to enlarge itself by teaching others its skills.

At Theorizing the Web, the rare technologist spoke up to talk about the dangers of drones. In the same panel, it was pointed out how the people designing medical supply drones for use in foreign conflict zones were considering coloring them white, not black, to make them less intimidating. The implication was that drone designers are racist.

It’s true that the vast majority of attendees of the conference are white and male. To some extent, this is generational. Both tutorials I attended today–including the one on software for modeling multi-body dynamics, useful for designing things like walking robots–were interracial and taught by guys around my age. The audience has some older folks. These are not necessarily academics, but may be industry types or engineers whose firms are paying them to attend to train on cutting-edge technology.

The afterparty on the first night of Theorizing the Web was in a dive bar in Williamsburg. Brooklyn’s Williamsburg has dive bars the same way Virginia’s Williamsburg has a colonial village–they are a cherished part of its cultural heritage. But the venue was alienating for some. One woman from abroad confided to me that she was intimidated by how cool the bar felt. It was my duty as an American and a former New Yorker to explain that Williamsburg stopped being cool a long time ago.

I’m an introvert and am initially uneasy in basically any social setting. Tonight’s SciPy afterparty was in the downtown office of Enthought, in the Bank of America building. Enthought’s digs are on the 21st floor, with spacious personal offices and lots of whiteboards that show serious use. As an open source product/consulting/training company, it appears to be doing quite well. I imagine really cool people would find it rather banal.

I don’t think it’s overstating things to say that Theorizing the Web serves mainly those skeptical of the scientific project. Knowledge is conceived of as a threat to the known. One panelist at TtW described the problem of “explainer” sites–websites whose purpose is to explain things that are going on to people who don’t understand them–when they try to translate cultural phenomena that they don’t understand. It was argued that even in cases where these cultural events are public, to capture that content and provide an interpretation or narration around it can be exploitative. Later, Kate Crawford, a very distinguished scholar on civic media, spoke to a rapt audience about the “conjoint anxieties” of Big Data. The anxieties of the watched are matched by the anxieties of the watchmen–like the NSA and, more implicitly, Facebook–who must always seek out more data in order to know things. The implication is that their political or economic agenda is due to a psychological complex–damning if true. In a brilliant rhetorical move that I didn’t quite follow, she tied this in to normcore, which I’m pretty sure is an Internet meme about a fake “fashion” trend in New York. Young people in New York go gaga for irony like this. For some reason earlier this year hipsters ironically wearing unstylish clothing became notable again.

I once met somebody from L.A. who told me their opinion of Brooklyn was that it was where all the nerds gathered in one place and thought they could decide what was cool just by saying so. At the time I had only recently moved to Berkeley and was still adjusting. Now I realize how parochial that zeitgeist is, however much I may still identify with it some.

Back in Austin, I have interesting conversations with folks at the SciPy party. One conversation is with two social scientists (demographic observation: one man, one woman) from New York who work on statistical analysis of violent crime in service to the city. They talk about the difficulty of remaining detached from their research subjects, who are eager to assist with the research somehow, though this would violate the statistical rigor of their study. Since they are doing policy research, objectivity is important. They are painfully aware of the limitations of their methods and the implications this has for those their work serves.

Later, I’m sitting alone when I’m joined by an electrical engineer turned programmer. He’s from Tennessee. We talk shop for a bit but the conversation quickly turns philosophical–about the experience of doing certain kinds of science, the role of rationality in human ethics, whether religion is an evolved human impulse and whether that matters. We are joined by a bioinformatics researcher from Paris. She tells us later that she has an applied math/machine learning background.

The problem in her field, she explains, is that for rare diseases it is very hard to find genetic causes because there isn’t enough data to do significant inference. Genomic data is very high-dimensional–thousands of genes–and for some diseases there may be fewer than fifty cases to study. Machine learning researchers are doing their best to figure out ways for researchers to incorporate “prior knowledge”–theoretical understanding from beyond the data available–to improve their conclusions.
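That conversation maps onto a familiar “many more features than cases” setup. Here is a minimal, hypothetical sketch (not her actual pipeline, just an illustration) in which a sparsity assumption, an L1 penalty, stands in for the prior belief that only a handful of genes matter:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: thousands of gene features, fewer than fifty cases.
rng = np.random.default_rng(0)
n_cases, n_genes = 50, 5000
X = rng.normal(size=(n_cases, n_genes))
true_genes = [3, 17, 42]  # pretend only three genes are causal
signal = X[:, true_genes].sum(axis=1) + rng.normal(scale=0.5, size=n_cases)
y = (signal > 0).astype(int)

# The L1 penalty encodes the prior belief that most coefficients are zero.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)
selected = np.flatnonzero(model.coef_[0])
print(len(selected), "genes selected out of", n_genes, ":", selected[:10])

With so few cases, a run like this can easily miss or misidentify the causal genes, which is exactly the problem she described.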

Over meals the past couple days I’ve been checking Twitter, where a lot of the intellectuals who organize Theorizing the Web or are otherwise prominent in that community are active. One extended conversation is about the relative failure of the open source movement to produce compelling consumer products. My theory is that this has to do with business models and the difficulty of coming up with upfront capital investment. But emotionally my response to that question is that it is misplaced: consumer products are trivial. Who cares?

Today, folks on Twitter are getting excited about using Adorno’s concept of the culture industry to critique Facebook’s emotional contagion experiment and other media manipulation. I find this both encouraging–it’s about time the Theorizing the Web community learned to embrace Frankfurt School thought–and baffling, because I believe they are misreading Adorno. The culture industry is that sector of the economy that produces cultural products, like Hollywood and television production companies. On the Internet, the culture industry is Buzzfeed, the Atlantic, and to a lesser extent (though this is surely masked by its own ideology) The New Inquiry. My honest opinion for a long time has been that the brand of “anticapitalist” criticality indulged in online is a politically impotent form of entertainment equivalent to the soap opera. A concept more appropriate for understanding Facebook’s role in controlling access to news and the formation of culture is Habermas’ idea of steering media.

Habermas gets into this in Theory of Communicative Action, vol. 2, which is underrated in America, probably due to its heaviness.

economic theory and intellectual property

I’ve started reading Piketty’s Capital. His introduction begins with an overview of the history of economic theory, starting with Ricardo and Marx.

Both these early theorists predicted the concentration of wealth into the hands of the owners of factors of production that are not labor. For Ricardo, land owners extract rents and dominate the economy. For Marx, capitalists–owners of private capital–accumulate capital and dominate the economy.

Since those of us with an eye on the tech sector are aware of a concentration of wealth in the hands of the owners of intellectual property, it’s a good question what kind of economic theory ought to apply to those cases.

In one sense, intellectual property is a kind of capital. It is a factor of production that is made through human labor.

On the other hand, we talk about ideas being ‘discovered’ like land is discovered, and we imagine that intellectual property can in principle be ‘shared’ like a ‘commons’. If we see intellectual property as a position in a space of ideas, it is not hard to think of it like land.

Like land, a piece of intellectual property is unique and gains in value from further improvements–applications or innovations–built upon it. In a world where intellectual property ownership never expires and isn’t shared, you can imagine that whoever holds some critical early work in a field could extract rents in perpetuity. Owning a patent would be like owning a landed estate.

Like capital, intellectual property is produced by workers and often owned by those investing in the workers with pre-existing capital. The produced capital is then owned by the initiating capitalist, and accumulates.

Open source software is an important exception to this pattern. This kind of intellectual property is unalienated from those that produce it.

Preparing for SciPy 2014

I’ve been instructed to focus my attention on mid-level concepts rather than grand theory as I begin my empirical work.

This is difficult for me, as I tend to oscillate between thinking very big and thinking very narrowly. This is an occupational hazard of being a developer. Technical minutiae accumulate into something durable and powerful. To sustain one’s motivation, one has to be able to envision one’s tiny tasks (correcting the spelling of some word in a program) as steps toward a larger project.

I’m working in my comfort zone. I’ve got my software project open on GitHub and I’m preparing to present my preliminary results at SciPy 2014 next week. A colleague and mentor I met with today told me it’s not a conference for people racking up career points. It’s a conference for people to meet each other, get an update on how their community is doing as a whole, and to learn new skills from each other.

It’s been a few years since I’ve been to a developer conference. In my past career I went to FOSS4G, the open source geospatial conference, a number of times. In 2008, the conference was in South Africa. I didn’t know anybody, so I blogged about it, and got chastised for being too divisive. I wasn’t being sensitive to the delicate balance between the open source geospatial developer community and their greatest proprietary coopetitor, ESRI. I was being an ideologue at a time when the open source model in that industry was just at its inflection point, becoming mainstream. Obviously I didn’t understand the subtlety of the relationships, business and personal, threaded through the conference.

Later I attended FOSS4G in 2010 to pitch the project my team had recently launched, GeoNode. It was a very exciting time for me. I was deeply personally invested in the project, and I was so proud of my team and myself for pulling through on the beta release. In retrospect, building a system for serving spatial data modeled on a content management system seems like a no-brainer. Today there are plenty of data management startups and services out there, some industrial, some academic. But at the time we were ahead of the curve, thanks largely to the vision of Chris Holmes, who was at the time the wunderkind visionary president of OpenGeo.

Cholmes always envisioned OpenGeo turning into an anti-capitalist organization, a hacker co-op with as much transparency as it could handle. If only it could get its business model right. It was incubating in a pre-crash bubble that thinned out over time. I was very into the politics of the organization when I joined it, but over time I became more cynical and embraced the economic logic I was being taught by the mature entrepreneurs who had been attracted to OpenGeo’s promise and standing in the geospatial world. While trying to wrap my head around managing developers, clients, and the budget around GeoNode, I began to see why businesses are the way they are, and how open source plays out in the industrial organization of the tech industry as a whole.

GeoNode, the project, remains a success. There is glory to that, though in retrospect I can claim little of it. I made many big mistakes and the success of the project has always been due to the very intelligent team working on it, as well as its institutional positioning.

I left OpenGeo because I wanted to be a scientist. I had spent four years there, and had found my way onto a project where we were building data plumbing for disaster reduction scientists and the military. OpenGeo had become a victim of its own success and outgrown its non-profit incubator, buckling under the weight of the demand for its services. I had deferred enrollment at Berkeley for a year to see GeoNode through to a place where it couldn’t get canned. My last major act was to raise funding for a v1.1 release that fixed the show-stopping bugs in the v1.0 version.

OpenGeo is now Boundless, a for-profit company. It’s better that way. It’s still doing revolutionary work.

I’ve been under the radar in the open source world for the three years I’ve been in grad school. But as I begin this dissertation work, I feel myself coming back to it. My research questions, in one framing, are about software ecosystem sustainability and management. I’m drawing from my experience participating in and growing open source communities and am trying to operationalize my intuitions from that work. At Berkeley I’ve discovered the scientific Python community, which I feel at home with since I learned about how to do open source from the inimitable Whit Morris, a Pythonista of the Plone cohort, among others.

After immersing myself in academia, I’m excited to get back into the open source development world. Some of the most intelligent and genuine people I’ve ever met work in that space. Like the sciences, it is a community of very smart and creative people with the privilege to pursue opportunity but with goals that go beyond narrow commercial interests. But it’s also in many ways a more richly collaborative and constructive community than the academic world. It’s not a prestige economy, where people are rewarded with scarce attention and even scarcer titles. It’s a constructive economy, where there is always room to contribute usefully, and to be recognized even in a small way for that contribution.

I’m going to introduce my research on the SciPy communities themselves. In the wake of the backlash against Facebook’s “manipulative” data science research, I’m relieved to be studying a community that has from the beginning wanted to be open about its processes. My hope is that my data scientific work will be a contribution to, not an exploitation of, the community I’m studying. It’s an exciting opportunity that I’ve been preparing for for a long time.

metaphorical problems with logical solutions

There are polarizing discourses on the Internet about the following three dichotomies:

  • Public vs. Private (information)
  • (Social) Inclusivity vs. Exclusivity.
  • Open vs. Closed (systems, properties, communities).

Each of these pairings enlists certain metaphors and intuitions. Rarely are they precisely defined.

Due to their intuitive pull, it’s easy to draw certain naive associations. I certainly do. But how do they work together logically?

To what extent can we fill in other octants of this cube? Or is that way of modeling it too simplistic as well?

If privacy is about having contextual control over information flowing out of oneself, then that means that somebody must have the option of closing off some access to their information. To close off access is necessarily to exclude.

PRIVATE => ¬OPEN => ¬INCLUSIVE

But it has been argued that open sociotechnical systems exclude as well by being inhospitable to those with greater need for privacy.

OPEN => ¬PRIVATE => ¬INCLUSIVE

These conditionals limit the kinds of communities that can exist.

PRIVATE  OPEN  INCLUSIVE  POSSIBLE?
T        T     T          F
T        T     F          F
T        F     T          F
T        F     F          T
F        T     T          F
F        T     F          T
F        F     T          F
F        F     F          T
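One way to check this table is to enumerate the octants directly. Here is a minimal Python sketch that reads the two chains above as four separate conditionals:

from itertools import product

def possible(private, open_, inclusive):
    # PRIVATE => not OPEN => not INCLUSIVE, and OPEN => not PRIVATE => not INCLUSIVE
    constraints = [
        not private or not open_,   # PRIVATE => not OPEN
        open_ or not inclusive,     # not OPEN => not INCLUSIVE
        not open_ or not private,   # OPEN => not PRIVATE
        private or not inclusive,   # not PRIVATE => not INCLUSIVE
    ]
    return all(constraints)

for private, open_, inclusive in product([True, False], repeat=3):
    row = ["T" if v else "F" for v in (private, open_, inclusive)]
    print(*row, "T" if possible(private, open_, inclusive) else "F")

Running this prints the same eight rows as the table above.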

Social inclusivity in sociotechnical systems is impossible. There is no such thing as a sociotechnical system that works for everybody.

There are only three kinds of systems: open systems, private systems, or systems that are neither open nor private. We can call the latter leaky systems.

These binary logical relations capture only the limiting properties of these systems. If there has ever been an open system, it is the Internet; but everyone knows that even the Internet isn’t truly open because of access issues.

The difference between a private system and a leaky system is participants’ ability to control how their data escapes the system.

But by that criterion, systems that we call ‘open’ are often private systems, since participants choose whether or not to put information into the open.

So is the only question whether and when information is disclosed vs. leaked?


turns out network backbone markets in the US are competitive after all

I’ve been depressed about the oligopolistic control of telecommunications for a while now. There’s the Web We’ve Lost; there are the Snowden leaks; there’s the end of net neutrality. I’ll admit a lot of my moodiness about this has been just that–moodiness. But it was moodiness tied to a particular narrative.

In this narrative, power is transmitted via flows of information. Media is, if not determinative of public opinion, determinative of how that opinion is acted upon. Surveillance is also an information flow. Broadly, mid-20th century telecommunications enabled mass culture due to the uniformity of media. The Internet’s protocols allowed it to support a different kind of culture–a more participatory one. But monetization and consolidation of the infrastructure has resulted in a society that’s fragmented but more tightly controlled.

There is still hope of counteracting that trend at the software/application layer, which is part of the reason why I’m doing research on open source software production. One of my colleagues, Nick Doty, studies the governance of Internet Standards, which is another piece of the puzzle.

But if the networking infrastructure itself is centrally controlled, then all bets are off. Democracy, in the sense of decentralized power with checks and balances, would be undermined.

Yesterday I learned something new from Ashwin Mathew, another colleague who studies Internet governance at the level of network administration. The man is deep in the process of finishing up his dissertation, but he looked up from his laptop long enough to tell me that the network backbone market is in fact highly competitive at the moment. Apparently, a lot of dark fiber-optic cable (“dark fiber”–meaning no light is going through it) was laid during the first dot-com boom, and it has been lying fallow and getting bought up by many different companies. Since there are many routes from A to B and excess capacity, this market is highly competitive.

Phew! So why the perception of oligopolistic control of networks? Because the consumer-facing telecom end-points ARE an oligopoly. Here there’s the last-mile problem. When wire has to be laid to every house, the economies of scale are such that it’s hard to have competitive markets. Enter Comcast etc.

I can rest easier now, because I think this means there are various engineering solutions to this (like AirJaldi networks? though I think those still aren’t last mile…; mesh networks?) as well as political solutions (like a local government running its last-mile network as a public utility).


i’ve started working on my dissertation // diversity in open source // reflexive data science

I’m studying software development and not social media for my dissertation.

That’s a bit of a false dichotomy. Much software development happens through social media.

Which is really the point–that software development is a computer-mediated social process.

What’s neat is that it’s a computer-mediated social process that, at its best, creates the conditions for it to continue as a social process (cf. Kelty’s “recursive public”).

What’s also neat is that this is a significant kind of labor that is not easy to think about given the tools of neoclassical economics or anything else really.

In particular I’m focusing on the development of scientific software, i.e. software that’s made and used to improve our scientific understanding of the natural world and each other.

The data I’m looking at is communications data between developers and their users. I’m including the code itself, under version control, as part of this communication data. In addition to being communication between developers, you might think of source code as communication between developers and machines, and the process of writing code as a collaboration or conversation between people and machines.

There is a lot of this data so I get to use computational techniques to examine it. “Data science,” if you like.

But it’s also legible, readable data with readily accessible human narrative behind it. As I debug my code, I am reading the messages sent ten years ago on a mailing list. Characters begin to emerge serendipitously because their email signatures break my archive parser. I find myself Googling them. “Who is that person?”

One email I found while debugging stood out because it was written, evidently, by a woman. Given the current press on diversity in tech, I thought it was an interesting example from 2001:

From sag at hydrosphere.com Thu Nov 29 15:21:04 2001
From: sag at hydrosphere.com (Sue Giller)
Date: Thu Nov 29 15:21:04 2001
Subject: [Numpy-discussion] Re: Using Reduce with Multi-dimensional Masked array
In-Reply-To: <000201c17917$ac5efec0$3d01a8c0@plstn1.sfba.home.com>
References: <20011129174809062.AAA210@mail.climatedata.com@SUEW2000>
Message-ID: <20011129232011546.AAA269@mail.climatedata.com@SUEW2000>

Paul,

Well, you’re right. I did misunderstand your reply, as well as what
the various functions were supposed to do. I was mis-using the
sum, minimum, maximum as tho they were MA..reduce, and
my test case didn’t point out the difference. I should always have
been doing the .reduce version.

I apologize for this!

I found a section on page 45 of the Numerical Python text (PDF
form, July 13, 2001) that defines sum as
‘The sum function is a synonym for the reduce method of the add
ufunc. It returns the sum of all the elements in the sequence given
along the specified axis (first axis by default).’

This is where I would expect to see a caveat about it not retaining
any mask-edness.

I was misussing the MA.minimum and MA.maximum as tho they
were .reduce version. My bad.

The MA.average does produce a masked array, but it has changed
the ‘missing value’ to fill_value=[ 1.00000002e+020,]). I do find this
a bit odd, since the other reductions didn’t change the fill value.

Anyway, I can now get the stats I want in a format I want, and I
understand better the various functions for array/masked array.

Thanks for the comments/input.

sue

I am trying to approach this project as a quantitative scientist. But the process of developing the software for analysis is putting me in conversation not just with the laptop I run the software on, but also with the data. The data is a quantified representation–I count the number of lines, even the number of characters in a line as I construct the regular expression needed to parse the headers properly–but it represents a conversation in the past. As I write the software, I consult documentation written through a process not unlike the one I am examining, as well as Stack Overflow posts written by others who have tried to perform similar tasks. And now I am writing a blog post about this work. I will tweet a link to this out to my followers; I know some people from the Scientific Python community that I am studying follow me on Twitter. Will one of them catch wind of this post? What will they think of it?
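For the curious, the header parsing looks roughly like the following simplified, hypothetical sketch of splitting a pipermail-style text archive into messages; it is not my actual parser.

import re

# Messages in the archive begin with lines like:
# "From sag at hydrosphere.com Thu Nov 29 15:21:04 2001"
FROM_LINE = re.compile(
    r"^From (?P<addr>\S+ at \S+) +"
    r"(?P<date>\w{3} \w{3} +\d{1,2} \d{2}:\d{2}:\d{2} \d{4})\s*$",
    re.MULTILINE,
)

def split_messages(archive_text):
    """Yield (sender, date, body) for each message in a pipermail text archive."""
    starts = list(FROM_LINE.finditer(archive_text))
    for i, match in enumerate(starts):
        end = starts[i + 1].start() if i + 1 < len(starts) else len(archive_text)
        yield match["addr"], match["date"], archive_text[match.end():end]

It is exactly this kind of regular expression that breaks on unusual signatures and address formats, which is how the characters behind the messages keep surfacing.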

autocatalysis sustains autopoiesis

Why we need good computational models of peace and love

“Data science” doesn’t refer to any particular technique.

It refers to the cusp of the diffusion of computational methods from computer science, statistics, and applied math (the “methodologists”) to other domains.

The background theory of these disciplines–whose origin we can trace at least as far back as cybernetics research in the 1940s–is required to understand the validity of these “data science” technologies as scientific instruments, just as a theory of optics is necessary to know the validity of what is seen through a microscope. Kuhn calls these kinds of theoretical commitments “instrumental commitments.”

For most domain sciences, instrumental commitment to information theory, computer science, etc. is not problematic. It is more problematic for some social sciences, which oppose the validity of totalizing physics or formalism.

There aren’t a lot of them left because our mobile phones more or less instrumentally commit us to the cybernetic worldview. Where there is room for alternative metaphysics, it is because of the complexity of emergent/functional properties of the cybernetic substrate. Brier’s Cybersemiotics is one formulation of how richer communicative meaning can be seen as an evolved structure on top of cybernetic information processing.

If “software is eating the world” and we don’t want it to eat us (metaphorically! I don’t think the robots are going to kill us–I think that corporations are going to build robots that make our lives miserable by accident), then we are going to need to have software that understands us. That requires building out cybernetic models of human communication to be more understanding of our social reality and what’s desirable in it.

That’s going to require cooperation between techies and humanists in a way that will be trying for both sides but worth the effort I think.
