Digifesto


Big tech surveillance and human rights

I’ve been alarmed by two articles that crossed my radar today.

  • Bloomberg Law has published a roundup of the contributions Google and Facebook have made to tech policy advocacy groups. Long story short: they give a lot of money, and while these groups say they are not influenced by the donations, they tend to favor privacy policies that do not interfere with the business models of these Big Tech companies.
  • Amnesty International has put out a report arguing that the business models of Google and Facebook are “an unprecedented danger to human rights”.

Surveillance Giants lays out how the surveillance-based business model of Facebook and Google is inherently incompatible with the right to privacy and poses a systemic threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination.

Amnesty International

Until today, I never had a reason to question the judgment of Amnesty International. I have taken seriously their perspective as an independent watchdog group looking out for human rights. Could it be that Google and Facebook have, all this time, been violating human rights left and right? Have I been a victim of human rights abuses from the social media sites I’ve used since college?

This is a troubling thought, especially for an academic researcher who has invested a great deal of time studying technology policy. When I was in graduate school, the most lauded technology policy think tanks, those considered most prestigious and genuine, such as the Center for Democracy and Technology (CDT), were precisely the ones the Bloomberg Law article lists as having, in essence, supported the business models of Google and Facebook all along. Now I’m in moral doubt. Amnesty International has condemned Google for human rights violations committed for the sake of profit, with CDT (for example) as an ideological mouthpiece.

Elsewhere in my academic work, it has come to light that an increasingly popular, arguably increasingly consensus view of technology policy directly contradicts the business models and incentives of companies like Google and Facebook. The other day colleagues and I did a close read of the New York Privacy Act (NYPA), which is now under consideration. New York State’s answer to the CCPA is notable in that it foregrounds Jack Balkin’s notion of an information fiduciary. According to the current draft, data controllers (it uses this EU-inspired language) would have a fiduciary duty to consumers, who are natural persons (but not independent contractors, such as Uber drivers) whose data is being collected. The bill, in its current form, requires that the data controller put its duty of care to the consumer over and above its fiduciary duty to its shareholders. Since Google and Facebook are (at least) two-sided markets, with consumers making up only one side, this (if taken seriously) has major implications for how these Big Tech companies operate with respect to New York residents. Arguably, it would require these companies to put the interests of the consumers who are their users ahead of the interests of their real customers, the advertisers, which pay the revenue that goes to shareholders.

If all data controllers were information fiduciaries, that would almost certainly settle the human rights issues raised by Amnesty International. But how likely is this strong language to survive the legislative process in New York?

There are two questions on my mind after considering all this. The first is what the limits of Silicon Valley self-regulation are. I’m reminded of an article by Mulligan and Griffin about Google’s search engine results. For a time, when a user queried “Did the holocaust happen?”, the top search results denied the Holocaust. This prompted Mulligan and Griffin’s article about what principles could guide search engine behavior besides the ones used to design the search engine in the first place. Their conclusion is that human rights, as recognized by international experts, could provide those principles.

The essay concludes by offering a way forward grounded in developments in business and human rights. The emerging soft law requirement that businesses respect and remedy human rights violations entangled in their business operations provides a normative basis for rescripting search. The final section of this essay argues that the “right to truth,” increasingly recognized in human rights law as both an individual and collective right in the wake of human rights atrocities, is directly affected by Google and other search engine providers’ search script. Returning accurate information about human rights atrocities— specifically, historical facts established by a court of law or a truth commission established to document and remedy serious and systematic human rights violations—in response to queries about those human rights atrocities would make good on search engine providers’ obligations to respect human rights but keep adjudications of truth with politically legitimate expert decision makers. At the same time, the right to freedom of expression and access to information provides a basis for rejecting many other demands to deviate from the script of search. Thus, the business and human rights framework provides a moral and legal basis for rescripting search and for cabining that rescription.

Mulligan and Griffin, 2018

Google now returns different results when asked “Did the holocaust happen?”. The first hit is the Wikipedia page for “Holocaust denial”, which states clearly that the views of Holocaust deniers are false. The moral case on this issue has been won.

Is it realistic to think that the moral case will be won when the moral case directly contradicts the core business model of these companies? That is perhaps akin to believing that medical insurance companies in the U.S. will cave to moral pressure and change the health care system in recognition of the human right to health.

These are the extreme views available at the moment:

  • Privacy is a human right, and our rights are being trod on by Google and Facebook. The ideology that has enabled this has been propagated by non-profit advocacy groups and educational institutions funded by those companies. The human rights of consumers suffer under unchecked corporate control.
  • Privacy, as imagined by Amnesty International, is not a human right. They have overstated their moral case. Google and Facebook are intelligent consumer services that operate unproblematically in a broad commercial marketplace for web services. There’s nothing to see here, or worry about.

I’m inclined towards the latter view, if only because the “business model as a human rights violation” angle seems to ignore how services like Google and Facebook add value for users. They do this by lowering search costs, which requires personalized search and data collection. There seem to be some necessary trade-offs between broadly lowering search costs, especially when what is being searched for is people, and autonomy. But unless these complex trade-offs are untangled, the normative case will remain unclear and business will simply proceed as usual.

References

Mulligan, D. K., & Griffin, D. (2018). Rescripting Search to Respect the Right to Truth.

Sneaky Google Search UI Design

I am very impressed by Google’s web design team right now.

Maybe you’ve noted this; maybe you haven’t. Go to Google’s home page right now. Enter the URL into your browser, then hit “Enter.” Then do nothing.

What do you see? Nothing but Google Search, pure and simple.

It’s not until you trigger some browser event by moving your mouse over the web page document or causing the search field to lose focus that the rest of the user interface fades in.
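The mechanics can be modeled with a tiny sketch. This is my reconstruction of the behavior described above, not Google’s actual code; the class and event names here are hypothetical, and in a real page the state flip would be one-shot event listeners adding a CSS class whose opacity is animated with a transition.

```typescript
// Sketch of the fade-in trick (assumed behavior, not Google's source).
// On load, only the search box is shown; the first mousemove or blur
// reveals the rest of the chrome, while typing alone never does.
type EventName = "mousemove" | "blur" | "keydown";

class MinimalHomepage {
  chromeVisible = false; // page loads showing nothing but the search field

  handle(event: EventName): void {
    // Pointer movement or the search field losing focus are the browser
    // events that trigger the fade-in; keystrokes are not.
    if (!this.chromeVisible && (event === "mousemove" || event === "blur")) {
      // In the browser: add a class that transitions opacity from 0 to 1.
      this.chromeVisible = true;
    }
  }
}
```

The payoff of this structure is exactly the workflow described below: a user who loads the page and immediately types a query never triggers the reveal at all.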

Smooth.

It’s hard to do a before/after in a screenshot because (I didn’t know this…) Print Screen triggers the necessary browser event. But in fact that makes it easier to catch it mid-fade. Note the weak blues:

[Screenshot: Google’s links fading in]

This is brilliant. It shows, first off, that the designers were very familiar with what is possible in the browser. Rather than seeing the page as a static document, they treated it as a temporal continuum. Though this isn’t news to a lot of web designers, this sort of ingenuity would get lost in many organizational web design workflows, because this kind of detail can’t be communicated to developers easily by wireframes or mockups.

The design shows a sensitivity to user psychology that is almost touching. The user who goes to the Google home page to search for something will never see the links. Load page. Text field in focus. Type. Enter. Done. Most web sites would consider this kind of streamlined workflow the epitome of design. But this design doesn’t just provide the attentive user an unobstructed path, it shields their unconscious from distraction.

Bravo.

And yet I find something insidious about it. It makes me feel like Google is hiding something. People used to say things like, “Google’s search interface is so clean and simple! Google is just about providing really smart web search. Google is just smart guys hiding unobtrusively behind a text field that can’t possibly take over your world.” It’s a precious public image.

Their home page’s increasing clutter belies that image with extra tabs, advanced search options, links to Business Solutions, and more. And now that search results have a left sidebar with such simply utilitarian features as the Wonder Wheel, even our lizard-evolved brain stems may start to suspect that Google has outgrown its original simplicity. It has become impossible to ignore the existence of the corporate behemoth behind the services we’ve entangled ourselves in.

The fluidly fading design tilts toward innocent bliss. Awareness of Google-the-corporation is now strictly opt-in. I can’t tell if that is considerate or creepy.