Big tech surveillance and human rights
by Sebastian Benthall
I’ve been alarmed by two articles that crossed my radar today.
- Bloomberg Law has published a roundup of the contributions Google and Facebook have made to tech policy advocacy groups. Long story short: they give a lot of money, and while these groups say they are not influenced by the donations, they tend to favor privacy policies that do not interfere with the business models of these Big Tech companies.
- Amnesty International has put out a report arguing that the business models of Google and Facebook are “an unprecedented danger to human rights”.
Surveillance Giants lays out how the surveillance-based business model of Facebook and Google is inherently incompatible with the right to privacy and poses a systemic threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination.
Amnesty International
Until today, I never had a reason to question the judgment of Amnesty International. I have taken seriously their perspective as an independent watchdog group looking out for human rights. Could it be that Google and Facebook have, all this time, been violating human rights left and right? Have I been a victim of human rights abuses from the social media sites I’ve used since college?
This is a troubling thought, especially for an academic researcher who has invested a great deal of time studying technology policy. When I was in graduate school, the most lauded technology policy think tanks, the ones considered most prestigious and genuine, such as the Center for Democracy and Technology (CDT), were precisely those the Bloomberg Law article lists as having, in essence, supported the business models of Google and Facebook all along. Now I’m in moral doubt. Amnesty International has accused Google of human rights violations committed for the sake of profit, with CDT (for example) as an ideological mouthpiece.
Elsewhere in my academic work, it has come to light that an increasingly popular, arguably increasingly consensus view of technology policy directly contradicts the business model and incentives of companies like Google and Facebook. The other day colleagues and I did a close read of the New York Privacy Act (NYPA), which is now under consideration. New York State’s answer to the CCPA is notable in that it foregrounds Jack Balkin’s notion of an information fiduciary. Under the current draft, data controllers (it uses this EU-inspired language) would have a fiduciary duty to consumers, defined as natural persons (but not independent contractors, such as Uber drivers) whose data is being collected. The bill, in its current form, requires that a data controller put its duty of care to the consumer over and above its fiduciary duty to its shareholders. Since Google and Facebook are (at least) two-sided markets, with consumers making up only one side, this, if taken seriously, has major implications for how these Big Tech companies operate with respect to New York residents. Arguably, it would require these companies to put the interests of the consumers who are their users ahead of the interests of their real customers, the advertisers, who pay the revenue that goes to shareholders.
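To make the tension concrete, here is a toy model of my own (my illustration only; neither the bill nor the Amnesty report uses this notation). Suppose a platform chooses an intensity of data collection and ad targeting; advertiser revenue and consumer welfare pull that choice in different directions.

```latex
% Toy model (my own sketch; not from the NYPA or any cited source).
% x: intensity of data collection and ad targeting chosen by the platform.
% R(x): advertiser revenue, increasing in x (more data, better targeting).
% W(x): consumer welfare, rising at first (better service) then falling
%       in x as privacy and autonomy costs mount.
\[
  x^{*} = \arg\max_{x} R(x)
  \qquad \text{vs.} \qquad
  x^{\dagger} = \arg\max_{x} W(x)
\]
% A duty to shareholders selects x*; a fiduciary duty to consumers selects
% x-dagger. Whenever R'(x) > 0 at the point where W'(x) = 0, we have
% x* > x-dagger: the shareholder-optimal design collects strictly more data
% than the consumer-fiduciary-optimal one. That gap is what the fiduciary
% language would, if taken seriously, force platforms to close.
```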
If all data controllers were information fiduciaries, that would almost certainly settle the human rights issues raised by Amnesty International. But how likely is this strong language to survive the legislative process in New York?
There are two questions on my mind after considering all this. The first is what the limits of Silicon Valley self-regulation are. I’m reminded of an article by Mulligan and Griffin about Google’s search engine results. For a time, when a user queried “Did the holocaust happen?”, the first search results would deny the Holocaust. This prompted Mulligan and Griffin to ask what principles, beyond those used to design the search engine in the first place, could guide search engine behavior. Their conclusion is that human rights, as recognized by international experts, could provide those principles.
The essay concludes by offering a way forward grounded in developments in business and human rights. The emerging soft law requirement that businesses respect and remedy human rights violations entangled in their business operations provides a normative basis for rescripting search. The final section of this essay argues that the “right to truth,” increasingly recognized in human rights law as both an individual and collective right in the wake of human rights atrocities, is directly affected by Google and other search engine providers’ search script. Returning accurate information about human rights atrocities—specifically, historical facts established by a court of law or a truth commission established to document and remedy serious and systematic human rights violations—in response to queries about those human rights atrocities would make good on search engine providers’ obligations to respect human rights but keep adjudications of truth with politically legitimate expert decision makers. At the same time, the right to freedom of expression and access to information provides a basis for rejecting many other demands to deviate from the script of search. Thus, the business and human rights framework provides a moral and legal basis for rescripting search and for cabining that rescription.
Mulligan and Griffin, 2018
Google now returns different results when asked “Did the holocaust happen?”. The first hit is the Wikipedia page for “Holocaust denial”, which states clearly that the views of Holocaust deniers are false. The moral case on this issue has been won.
Is it realistic to think that the moral case will be won when the moral case directly contradicts the core business model of these companies? That is perhaps akin to believing that medical insurance companies in the U.S. will cave to moral pressure and change the health care system in recognition of the human right to health.
These are the extreme views available at the moment:
- Privacy is a human right, and our rights are being trodden on by Google and Facebook. The ideology that has enabled this has been propagated by non-profit advocacy groups and educational institutions funded by those companies. The human rights of consumers suffer under unchecked corporate control.
- Privacy, as imagined by Amnesty International, is not a human right. They have overstated their moral case. Google and Facebook are intelligent consumer services that operate unproblematically in a broad commercial marketplace for web services. There’s nothing to see here, or worry about.
I’m inclined towards the latter view, if only because the “business model as a human rights violation” angle seems to ignore how services like Google and Facebook add value for users. They do this by lowering search costs, which requires personalized search and hence data collection. There seem to be some necessary trade-offs between lowering search costs broadly (especially when what’s being searched for is people) and autonomy. But unless these complex trade-offs are untangled, the normative case will remain unclear, and business will simply proceed as usual.
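To illustrate what “lowering search costs” means here, consider a toy ranking sketch (entirely my own; every name and number below is hypothetical). A non-personalized engine ranks by global popularity alone; a personalized one also weights by what it has learned about the user, and the user finds what they want after scanning fewer results.

```python
# Toy sketch (my own illustration, not from the post or any cited source).
# "Search cost" here is the expected number of results a user scans before
# finding the item they actually want.

# Global popularity of five hypothetical items (all a generic engine knows).
popularity = {"a": 0.40, "b": 0.30, "c": 0.15, "d": 0.10, "e": 0.05}

# One user's interest profile (what data collection about that user reveals).
user_affinity = {"a": 0.05, "b": 0.05, "c": 0.10, "d": 0.20, "e": 0.60}

def rank(scores):
    """Items ordered from highest to lowest score."""
    return sorted(scores, key=scores.get, reverse=True)

def expected_search_cost(ranking, wants):
    """Expected rank position of the wanted item, averaging over which
    item the user wants (weighted by the user's affinities)."""
    return sum(wants[item] * (ranking.index(item) + 1) for item in ranking)

generic = rank(popularity)
personalized = rank({i: popularity[i] * user_affinity[i] for i in popularity})

print("generic:     ", generic, expected_search_cost(generic, user_affinity))
print("personalized:", personalized, expected_search_cost(personalized, user_affinity))
# The personalized ranking puts the user's likely targets first, so the
# expected search cost drops, but only because the engine holds data
# about the user.
```

The trade-off the paragraph above gestures at shows up even in this tiny example: the ranking gets better for the user exactly in proportion to how much the engine knows about them.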
References
Mulligan, D. K., & Griffin, D. (2018). Rescripting Search to Respect the Right to Truth. Georgetown Law Technology Review, 2(2), 557.