Tag: search engines

Data isn’t labor because using search engines is really easy

A theme I’ve heard raised in a couple of places recently, including Arrieta-Ibarra et al.’s “Should We Treat Data as Labor?” and the AI Now 2018 Report, is that there is something wrong with how “data”, particularly data “produced” by people on the web, is conceptualized as part of the economy. Creating data, the argument goes, requires labor. And as the product of labor, it should be protected according to the values and practices of past labor movements. In particular, the current uses of data in, say, targeted advertising, social media, and search are exploitative; the idea that consumers ‘pay’ for these services with their data is misleading and ultimately unfair to the consumer. Somehow the value created by the data should be reapportioned back to the user.

This is a sexy and popular argument among a certain subset of intellectuals who care about these things. I believe the core emotional appeal of the proposal is this: It is well known that a few search engine and social media companies, namely Google and Facebook, are rich. If the value added by user data were in part returned to the users, the users, who compared to Google and Facebook are not rich, would get something they otherwise would not get. I.e., the benefit of recognizing the labor involved in creating data is the redistribution of surplus to The Rest of Us.

I don’t have a problem personally with that redistributive impulse. However, I don’t think the “data is labor” argument actually makes much sense.

Why not? Well, let’s take the example of a search engine. Here is the transaction between a user and a search engine:

  • Alice types a query, “avocado toast recipes”, into the search engine. This submits data to the company computers.
  • The company computers use that data to generate a list of results that they deem relevant to that query.
  • Alice sees the results, and maybe clicks on one or two of them, if they are good, in the process of navigating to the thing she was looking for in the first place.
  • The search engine records that click as well, in order to better calibrate how to respond to others making that query.
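
That last feedback step can be sketched as a toy model. To be clear, this is purely illustrative and not how any real search engine ranks results; the click counter and the rank-by-historical-clicks rule are assumptions made up for the example:

```python
from collections import defaultdict

# Toy click-feedback loop: count clicks per (query, url) pair, then
# rank candidate results for a query by their historical click counts.
clicks = defaultdict(int)

def record_click(query, url):
    clicks[(query, url)] += 1

def rank(query, candidate_urls):
    # Most-clicked results for this query come first.
    return sorted(candidate_urls, key=lambda u: clicks[(query, u)], reverse=True)

record_click("avocado toast recipes", "example.com/toast")
record_click("avocado toast recipes", "example.com/toast")
record_click("avocado toast recipes", "example.org/avocado")

print(rank("avocado toast recipes",
           ["example.org/avocado", "example.com/toast"]))
# example.com/toast ranks first, having received more clicks
```

The point of the sketch is only that Alice’s clicks feed back into the ranking that others see, which is why the search engine records them.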

We might forget that the search engine is providing Alice a service and isn’t just a ubiquitous part of the infrastructure to be taken for granted. The search engine has provided Alice with relevant search results. What this does is (dramatically) reduce Alice’s search costs; had she tried to find the relevant URL by asking her friends, organically surfing the web, or using the library, who knows what she would have found or how long it would have taken her. We can assume that Alice uses the search engine because it gets her more relevant results, faster.

It is not clear how Alice could get the thing she wants without going through the motions of typing and clicking and submitting data. These actions all seem like the bare minimum necessary to conduct this kind of transaction. Similarly, when I go to a grocery store and buy vegetables, I have to get out my credit card and swipe it at the machine. This creates data: the data about my credit card transaction. But I would never argue that recognizing my hidden labor at the credit card machine is necessary to avoid exploitation by the credit card companies, who then use that information to go about their business. That would be insane.

Indeed, it is a principle of user interface design that the most compelling user interfaces are those that require the least effort from their users. Using search engines is really, really easy because they are designed that way. The fact that oodles of data are collected from a person without that person exerting much effort may be problematic in a lot of ways. But it’s not problematic because it’s laborious for the user; the interface is designed and compelling precisely because it is labor-saving. The smart home device industry has taken this even further, building voice-activated products for people who would rather not use their hands to input data. That is, if anything, less labor for the user, but more data and more processing on the automated side of the transaction. That the data means more work for the company and less work for the user indicates that data is not the same thing as user labor.

There is a version of this argument that brings up feminism. Women’s labor, feminists point out, has long been insufficiently recognized and not properly remunerated. For example, domestic labor traditionally performed by women has been taken for granted, and emotional labor (the work of controlling one’s emotions on the job), which has often been feminized, has not been taken seriously enough. This is a problem, and the social cause of recognizing women’s labor and rewarding it is, ceteris paribus, a great thing. But, and I know I’m on dicey ground here, so bear with me, this does not mean that everything women do without pay is unrecognized labor in the sense that is relevant for feminist critiques. Case in point: both men and women use credit cards to buy things, and make telephone calls, and drive vehicles through toll booths, and use search engines, and do any number of things that generate “data”, and in most of these cases it is not remunerated directly; but this lack of remuneration isn’t gendered. I would say, perhaps controversially, that the feminist critique does not actually apply to the general case of user-generated data much at all! (Though it may apply in specific cases that I haven’t thought of.)

So in conclusion, data isn’t labor, and labor isn’t data. They are different things. We may want a better, more just, political outcome with respect to the distribution of surplus from the technology economy. But trying to get there through an analogy between data and labor is a kind of incoherent way to go about it. We should come up with a better, different way.

So what’s a better alternative? If the revenue streams of search engines are any indication, then it would seem that users “pay” for search engines by being exposed to advertising. So the “resource” that users give up in order to use the search engine is attention, or mental time; hence the term “attention economy”.

Framing the user cost of search engines in terms of attention does not easily lend itself to an argument for economic reform. Why? Because search engines are already saving people a lot of that attention by making it so easy to look stuff up. Really the transaction looks like:

  • Alice pays some attention to Gob (the search engine).
  • Gob gives Alice some good search results back in return, and then…
  • Gob passes on some of Alice’s attention through to Bob, the advertiser, in return for money.

So Alice gives up attention but gets back search results and the advertisement. Gob gets money. Bob gets attention. The “data” that matters is not the data transmitted from Alice’s computer up to Gob. Rather, the valuable data is the data that Alice receives through her eyes: of this data, the search results are positively valued, the advertisement is negatively valued, but the value of the bundled good is net positive.
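
To make the bundling concrete, here is a back-of-the-envelope version with made-up numbers. The utilities are pure assumptions, chosen only to illustrate the signs of the terms, not measurements of anything:

```python
# Hypothetical per-search utilities for Alice, in arbitrary "attention units".
value_of_results = 5.0   # benefit of getting relevant results
cost_of_attention = 1.0  # attention spent typing and reading
cost_of_ads = 0.5        # annoyance of the bundled advertisement

# The bundle is a net gain for Alice so long as this comes out positive.
alice_surplus = value_of_results - cost_of_attention - cost_of_ads
print(alice_surplus)  # 3.5
```

On these (assumed) numbers the bundle is worth taking; the argument in the text is just that for most searches the first term dwarfs the other two.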

If there is something unjust about this economic situation, it has to be due to the way consumers’ attention is being managed by Gob. Interestingly, those who have studied the value of ‘free’ services in attentional terms have chalked up a substantial consumer surplus due to saved attention (Brynjolfsson and Oh, 2012). This appears to be the perspective of management scientists, who tend to be pro-business, and is not a point repeated often by legal scholars, who tend to be more litigious in outlook. For example, legal scholarship has detailed how attention could be abused through digital market manipulation (Calo, 2013).

Ironically for data-as-labor theorists, the search-engine-as-liberator-of-attention argument could be read as the view that what people get from using search engines is more time, or more ability to do other things with their time. In other words, we would use a search engine instead of some other, more laborious discovery mechanism precisely because it would cost us net negative labor. That absolutely throws a wrench in any argument that the users of search engines should be rewarded on dignity of labor grounds. Instead, what’s happened is that search engines are ubiquitous because consumers have undergone a phase transition in their willingness to work to discover things, and now very happily use search engines which, on the whole, seem like a pretty good deal! (The cost of being-advertised-to is small compared to the benefits of the search results.)

If we start seeing search engines as compelling labor-saving devices rather than exploiters of laborious clickwork, then some of the disregard consumers have for privacy on search engines becomes more understandable. People are willing to give up their data, even if they would rather not, because search engines are saving them so much time. The privacy harms that come as a consequence, then, can be seen as externalities of what is essentially a healthy transaction, rather than the perverse product of a business model that is evil to the bone.

This is, I wager, on the whole a common sense view, one that I’d momentarily forgotten because of my intellectual milieu but now am ashamed to have overlooked. It is, on the whole, far more optimistic than other attempts to characterize the zeitgeist of the new technology economy.

Somehow, this rubric for understanding the digital economy appears to have fallen out of fashion. Davenport and Beck (2001) wrote a business book declaring attention to be “the new currency of business”, which, if the prior analysis is correct, makes more sense than data being the new currency (or oil) of business. The term appears to have originated in an article by Goldhaber (1997). Ironically, the term appears to have had no uptake in the economics literature, despite it being the key to everything! The concept was understood, however, by Herbert Simon in 1971 (see also Terranova, 2012):

In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

(A digitized version of this essay, which amazingly appears to be set by a typewriter and then hand-edited (by Simon himself?) can be found here.)

This is where I bottom out: the discovery that the line of thought I’ve been on all day starts with Herbert Simon; that the sciences of the artificial are not new, they are just forgotten (because of the glut of other information), and exhaustingly hyped. The attention economy discovered by Simon explains why each year we are surrounded with new theories about how to organize ourselves with technology, when perhaps the wisest perspectives on these topics are ones that will not hype themselves because their authors cannot tweet from the grave.


Arrieta-Ibarra, Imanol, et al. “Should We Treat Data as Labor? Moving Beyond ‘Free’.” AEA Papers and Proceedings. Vol. 108. 2018.

Brynjolfsson, Erik, and JooHee Oh. “The attention economy: measuring the value of free digital services on the Internet.” (2012).

Calo, Ryan. “Digital market manipulation.” Geo. Wash. L. Rev. 82 (2013): 995.

Davenport, Thomas H., and John C. Beck. The attention economy: Understanding the new currency of business. Harvard Business Press, 2001.

Goldhaber, Michael H. “The attention economy and the net.” First Monday 2.4 (1997).

Simon, Herbert A. “Designing organizations for an information-rich world.” (1971): 37-72.

Terranova, Tiziana. “Attention, economy and the brain.” Culture Machine 13 (2012).


search engines and authoritarian threats

I’ve been intrigued by Daniel Griffin’s tweets lately, which have been about situating some upcoming work of his and Deirdre Mulligan’s regarding the experience of using search engines. There is a lively discussion lately about the experience of those searching for information and the way they respond to misinformation or extremism that they discover through organic use of search engines and media recommendation systems. This is apparently how the concern around “fake news” has developed in the HCI and STS world since it became an issue shortly after the 2016 election.

I do not have much to add to this discussion directly. Consumer misuse of search engines is, to me, analogous to consumer misuse of other forms of print media. I would assume the best solution to it is education in the complete sense, and the problems with the U.S. education system are, despite all good intentions, not HCI problems.

Wearing my privacy researcher hat, however, I have become interested in a different aspect of search engines and the politics around them that is less obvious to the consumer and therefore less popularly discussed, but that I fear is more pernicious precisely because it is not part of the general imaginary around search. This is the aspect around the tracking of search engine activity, and what it means for this activity to be in the hands of not just benevolent organizations such as Google, but also malevolent organizations such as Bizarro World Google*.

Here is the scenario, so to speak: for whatever reason, we begin to see ourselves in a more adversarial relationship with search engines. I mean “search engine” here in the broad sense, including Siri, Alexa, Google News, YouTube, Bing, Baidu, Yandex, and all the more minor search engines embedded in web services and appliances that do something more focused than crawl the whole web. By ‘search engine’ I mean the entire UX paradigm of the query into the vast unknown of semantic and semiotic space that contemporary information access depends on. In all these cases, the user is at a systematic disadvantage in the sense that their query is one data point among many others. The task of the search engine is to predict the desired response to the query and provide it. In return, the search engine gets the query, tied to the identity of the user. That is one piece of a larger mosaic; to be a search engine is to have a picture of a population and their interests, and the mandate to categorize and understand those people.

In Western neoliberal political systems the central function of the search engine is realized as a commercial transaction facilitating other commercial transactions. My “search” is a consumer service; I “pay” for this search by giving my query to the adjoined advertising function, which allows other commercial providers to “search” for me, indirectly, through the ad auction platform. It is a market with more than just two sides. There’s the consumer who wants information and may be tempted by other information. There are the primary content providers, who satisfy consumer content demand directly. And there are secondary content providers who want to intrude on consumer attention in a systematic and successful way. The commercial, ad-enabled search engine reduces transaction costs for the consumer’s search and sells a fraction of that attentional surplus to the advertisers. When the balance is struck well, the consumer is happy enough with the trade.
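
The step where attention is sold to advertisers is typically run as an auction. Here is a minimal sketch of a second-price (Vickrey-style) auction over a single attention slot, with hypothetical bidders and bid amounts; real ad auctions additionally weight bids by predicted click-through rate and are far more elaborate:

```python
def second_price_auction(bids):
    """bids: dict mapping advertiser -> bid for one attention slot.
    The highest bidder wins but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = second_price_auction({"Bob": 0.40, "Carol": 0.25, "Dave": 0.10})
print(winner, price)  # Bob 0.25
```

The second-price rule is interesting here because it gives advertisers an incentive to bid what the consumer’s attention is actually worth to them, which is one way the platform prices the attentional surplus it intermediates.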

Part of the success of commercial search engines is the promise of privacy, in the sense that the consumer’s queries are entrusted secretly to the engine, and this data is not leaked or sold. Wise people know not to write into email things that they would not want, in the worst case, exposed to the public. Unwise people are more common than wise people, and ill-considered emails are written all the time. Most unwise people nevertheless come to no harm, because privacy in email is a de facto standard; it is the very security of email that makes the possibility of its being leaked alarming.

So too with search engine queries. “Ask me anything,” suggests the search engine, “I won’t tell”. “Well, I will reveal your data in an aggregate way; I’ll expose you to selective advertising. But I’m a trusted intermediary. You won’t come to any harms besides exposure to a few ads.”

That is all a safe assumption until it isn’t, at which point we must reconsider the role of the search engine. Suppose that, instead of living in a neoliberal democracy where the free search for information was sanctioned as necessary for the operation of a free market, we lived in an authoritarian country organized around the principle that disloyalty to the state should be crushed.

Under these conditions, the transition of a society into one that depends for its access to information on search engines is quite troubling. The act of looking for information is a political signal. Suppose you are looking for information about an extremist, subversive ideology. To do so is to flag yourself as a potential threat to the state. Suppose you are looking for information about a morally dubious activity. To do so is to make yourself vulnerable to kompromat.

Under an authoritarian regime, curiosity and free thought are a problem, and a problem readily identified from one’s search queries. Further, an authoritarian regime benefits if the risks of searching for the ‘wrong’ thing are widely known, since that suppresses inquiry. Hence, the very vaguely announced and, in fact, implausible-to-implement Social Credit System in China does not need to exist to be effective; people need only believe it exists for it to have a chilling and organizing effect on behavior. That is the lesson of the Foucauldian panopticon: it doesn’t need a guard sitting in it to function.

Do we have a word for this function of search engines in an authoritarian system? We haven’t needed one in our liberal democracy, which perhaps we take for granted. “Censorship” does not apply, because what’s at stake is not speech but the ability to listen and learn. “Surveillance” is too general. It doesn’t capture the specific constraints on acquiring information, on being curious. What is the right term for this threat? What is the term for the corresponding liberty?

I’ll conclude with a chilling thought: when at war, all states are authoritarian, to somebody. Every state has an extremist, subversive ideology that it watches out for and tries in one way or another to suppress. Our search queries are always of strategic or tactical interest to somebody. Search engine policies are always an issue of national security, in one way or another.

Why managerialism: it acknowledges the political role of internal corporate policies

One difficulty with political theory in contemporary times is the confusion between government policy and corporate policy. This is due in no small part to the extent to which large corporations now mediate social life. Telecommunications, the Internet, mobile phones, and social media all depend on layers and layers of operating organizations. The search engine, which didn’t exist thirty years ago, is now arguably an essential cultural and political facility (Pasquale, 2011), which sharpens the concerns that have been raised about the politics of search (Introna and Nissenbaum, 2000; Bracha and Pasquale, 2007).

Corporate policies influence customers when those policies drive product design or are put into contractual agreements. They can also govern employees and shape corporate culture. Sometimes these two kinds of policies are not easily demarcated. For example, Uber has an internal privacy policy about who can access which users’ information, like most companies with a lot of user data. The privacy features that Uber implicitly guarantees to their customers are part of their service. But their ability to provide this service is only as good as their company culture is reliable.

Classically, there are states, which may or may not be corrupt, and there are markets, which may or may not be competitive. With competitive markets, corporate policies are part of what make firms succeed or fail. One point of success is a company’s ability to attract and maintain customers. This should in principle drive companies to improve their policies.

An interesting point made recently by Robert Post is that in some cases, corporate policies can adopt positions that would be endorsed by some legal scholars even if the actual laws state otherwise. His particular example was a case enforcing the right to be forgotten in Spain against Google.

Since European law is statute driven, the judgments of its courts are not as amenable to creative legal reasoning as they are in the United States. Post’s criticism of the EU’s judgment in this case is directed at its rigid interpretation of data protection directives. Post argues that a different legal perspective on privacy is better at balancing other social interests. But putting aside the particulars of the law, Post makes the point that Google’s internal policy matches his own legal and philosophical framework (which prefers dignitary privacy over data privacy) more than EU statutes do.

One could argue that we should not trust the market to make Google’s policies just. But we could also argue that Google’s market share, which is significant, depends so much on its reputation and its users’ trust that it is in fact under great pressure to adjudicate disputes with its users wisely. It is a company that must set its own policies, which do have political significance. Relative to the state, it has more direct control over the way these policies get interpreted and enforced, faster feedback on whether the policies are successful, and a less chaotic legislative process for establishing policy in the first place.

Political liberals would dismiss this kind of corporate control as just one commercial service among many, or else wring their hands with concern over a company coming to have such power over the public sphere. But managerialists would see search engines as organizations among others, comparable to other private entities that have been part of the public sphere, such as newspapers.

But a sound analysis of the politics of search engines need not depend on analogies with past technologies. That is a function of legal reasoning. Managerialism, which is perhaps more a descendant of business reasoning, would ask how, in fact, search engines make policy decisions and how this affects political outcomes. It does not prima facie assume that a powerful or important corporate policy is wrong. It does ask what the best corporate policy is, given a particular sector.


Bracha, Oren, and Frank Pasquale. “Federal Search Commission-Access, Fairness, and Accountability in the Law of Search.” Cornell L. Rev. 93 (2007): 1149.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The information society 16.3 (2000): 169-185.

Pasquale, Frank A. “Dominant search engines: an essential cultural & political facility.” (2011).