Digifesto

TikTok, Essential Infrastructure, and Imperial Regulation by Golden Share

As Microsoft considers acquiring TikTok’s American operations, President Trump has asked that the Federal Treasury own a significant share. This move is entirely consistent with this administration’s approach to technology regulation, which sees profitable telecommunications and digital services companies as both a cybersecurity attack surface and a prized form of capital that must be “American owned”. Surprising, perhaps, is the idea of partial government ownership. However, this idea has recently been floated by a number of scholars and think tanks. Something like Treasury ownership of company shares could set up institutions that serve not just American economic interests, but civic interests as well.

Jake Goldenfein and I have recently published a piece in Phenomenal World, “Essential Infrastructures”. It takes as its cue the recent shift of many “in person” activities onto Zoom during the COVID-19 lockdowns, and reviews the prevailing regulatory regimes governing telecommunications infrastructure and digital services. We trace the history of Obama-era net neutrality, grounded in an idea of the Internet as a public utility or essential facility. We then show how, in the Trump administration, a new regime based on national and economic security came to direct Federal policy. We conclude with some policy recommendations moving forward.

A significant turning point during the Trump administration has been the shift away from an emphasis on the domestic and foreign provision of an open Internet as the basis for a competitive market in digital services. In its place is the idea that telecom infrastructure and digital services are powerful behemoths that, as critical infrastructure, are vulnerable attack surfaces of the nation, but also perhaps its primary form of wealth and source of rents. As any analysis of the stock market, especially since the COVID-19 lockdowns, would tell you, Big Tech has been carrying the U.S. stock market while other businesses crumble. These developments continue the trend of corporate concentration over the past several years, and show some of the prescience of the lately hyperactive CFIUS, which has been preventing foreign investment in this “critical infrastructure”. This is a defense of American information from foreign investors; it is also a defense of American wealth from competition over otherwise publicly traded assets.

Under the current conditions of markets and corporate structure, which Jake and I analyze in our other recent academic paper, we have to stop looking at “AI” and “data science” as technologies and start looking at them as forms of capital. That is how CFIUS looks at them. That is how their investors and owners look at them. Many of the well-intentioned debates about “AI ethics” and “technology politics” are eager to keep the conversation in more academically accessible and perhaps less cynical terms. By doing so, they miss the point.

In the “Essential Infrastructures” article, we are struggling with this confluence of the moral/political and the economic. Jake and I are both very influenced by Helen Nissenbaum, who would be quick to point out that when social activities that normally depend on the information affordances of in-person communication move online, there is ample reason to suspect that norms will be violated and that the social fabric will undergo an uncomfortable transformation. We draw attention to some of the most personal aspects of life–dating and intimacy, family, religion, and more broadly civil society–which have never depended on private capital as infrastructure as much as they do now. Of course, this is all relative, and society has been trending this way for a long time. But the COVID lockdowns have brought this condition to a new extreme.

There will be those who argue that there is nothing alarming about every aspect of human life depending on private capital infrastructure designed to extract value from its users for its corporate owners. Some may find this inevitable, tolerable, even desirable. We wonder who would make a sincere, full-throated defense of this future world. We take a different view: we assume that the market is one sphere among many, and that maintaining the autonomy of some of the more personal spheres is of moral importance.

Given (because we see no other way about it) that these infrastructures are a form of capital, how can the autonomy of the spheres that depend on them be preserved? In our article, our proposal is that the democratic state provide additional oversight and control over this capital. The state is always an imperfect representative of individuals in their personal domains, but it is better than nothing.

We propose that states can engage with infrastructure-as-capital directly, as owners and investors, just as other actors interact with it. This proposal accords with other recent proposals for how states might innovate in their creation and maintenance of sovereign wealth since COVID. The editorial process for our piece was thorough. Since we drafted it, we have found others who have articulated the general logic and value of this approach better than we have.

The Berggruen Institute’s Gilman and Feygin (2020) have been active this year in publishing new policy research that is consistent with what we’re proposing. Their proposal for a “mutualist economy”, wherein a “national endowment” is built from public investment in technology and intellectual property and then either distributed to citizens as Universal Basic Capital or used as a source of wealth by the state, is cool. The Berggruen Institute’s Noema magazine has published the thoughts of Ray Dalio and Joseph Stiglitz about using this approach for corporate bailouts in response to COVID.

These are all good ideas. Our proposal differs only slightly. If a national endowment is built from shares in companies that are bailed out during COVID, then it is unlikely to include the successful FANG companies that are so effectively disrupting and eating the lunch of the companies being bailed out. It would be too bad if the national endowment included only the companies that are failing, while the tech giants on which civil society and the state are increasingly dependent attract all the real value to be had.

In our article, we are really proposing that governments–whether federal, state, or even municipal–get themselves a piece of Amazon, Google, and Verizon. The point here is not simply to get more of the profit generated by these firms into democratic coffers. Rather, the point is to shift the balance of power. Our proposal is perhaps most aligned with Hockett and Omarova’s (2017) proposal for a National Investment Authority, and more specifically with Omarova’s “golden share” approach (2016). Recall that much of the recent activity of CFIUS has been motivated by the understanding that significant shareholders in a private corporation have rights to access information within it. This is why blocking foreign investment in companies has been justified under a “cybersecurity” rationale. If a foreign owner of, say, Grindr could extract compromising information from the company in order to blackmail U.S. military personnel, then it would be more difficult to enforce the illegality of that move.

In the United States, there is a gap in the domestic regulation of technology companies, given their power over personal and civic life. In a different article (2020, June), we argued that technology law and ethics needs to deal with technology as a corporation, rather than as a network or assemblage of artifacts and individuals. This is difficult, as these corporations are powerful, directed by shareholders to whom they have a fiduciary duty to maximize profits, and very secretive about their operations. “Sovereign investment”–or, barring that, something similar at the state or local level–would give governments a legal way to review the goings-on in companies they have a share in. This information access alone could enable further civic oversight and regulatory moves by the government.

When we wrote our article, we did not imagine that soon after it was published the Trump administration would recommend a similar policy for the acquisition of foreign-owned companies that it is threatening to boot off the continent. However, this is one way to get leverage on the problem of how the government can acquire, at low cost, something that is already profitable.

This will likely scare foreign-owned technology companies away from doing business in the U.S., and a partly U.S. government-owned company is likely to run afoul of other national markets. However, since the Snowden revelations, U.S. companies have been seen overseas as extensions of the U.S. state, and Schrems II solidifies that view in Europe. Technology markets are already divided into spheres of influence led by global powers.

References

Benthall, S., & Goldenfein, J. (2020, June). Data Science and the Decline of Liberal Law and Ethics. In Ethics of Data Science Conference, Sydney.

Gilman, N., & Feygin, Y. (2020, April). The Mutualist Economy: A New Deal for Ownership. Whitepaper. Berggruen Institute.

Gilman, N., & Feygin, Y. (2020, June). Building Blocks of a National Endowment. Whitepaper. Berggruen Institute.

Hockett, R. C., & Omarova, S. T. (2017). Private Wealth and Public Goods: A Case for a National Investment Authority. J. Corp. L., 43, 437.

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Omarova, S. T. (2016). Bank Governance and Systemic Stability: The Golden Share Approach. Ala. L. Rev., 68, 1029.

Why managerialism: it acknowledges the political role of internal corporate policies

One difficulty with political theory in contemporary times is the confusion between government policy and corporate policy. This is due in no small part to the extent to which large corporations now mediate social life. Telecommunications, the Internet, mobile phones, and social media all depend on layers and layers of operating organizations. The search engine, which didn’t exist thirty years ago, is now arguably an essential cultural and political facility (Pasquale, 2011), which sharpens the concerns that have been raised about the politics of search engines (Introna and Nissenbaum, 2000; Bracha and Pasquale, 2007).

Corporate policies influence customers when those policies drive product design or are put into contractual agreements. They can also govern employees and shape corporate culture. Sometimes these two kinds of policies are not easily demarcated. For example, Uber, like most companies with a lot of user data, has an internal privacy policy about who can access which users’ information. The privacy features that Uber implicitly guarantees to its customers are part of its service. But its ability to provide this service is only as good as its company culture is reliable.

Classically, there are states, which may or may not be corrupt, and there are markets, which may or may not be competitive. In competitive markets, corporate policies are part of what makes firms succeed or fail. One measure of success is a company’s ability to attract and retain customers. This should in principle drive companies to improve their policies.

An interesting point made recently by Robert Post is that, in some cases, corporate policies can adopt positions that would be endorsed by some legal scholars even when the actual law states otherwise. His particular example was a case enforcing the right to be forgotten against Google in Spain.

Since European law is statute-driven, the judgments of its courts are not as amenable to creative legal reasoning as they are in the United States. Post criticizes the EU’s judgment in this case for its rigid interpretation of data protection directives. He argues that a different legal perspective on privacy is better at balancing other social interests. But putting aside the particulars of the law, Post makes the point that Google’s internal policy matches his own legal and philosophical framework (which prefers dignitary privacy over data privacy) more than EU statutes do.

One could argue that we should not trust the market to make Google’s policies just. But one could also argue that Google’s market share, which is significant, depends so much on its reputation and its users’ trust that it is in fact under great pressure to adjudicate disputes with its users wisely. It is a company that must set its own policies, and those policies have political significance. Compared to the state, it has the benefits of more direct control over the way these policies get interpreted and enforced, faster feedback on whether the policies are successful, and a less chaotic legislative process for establishing policy in the first place.

Political liberals would dismiss this kind of corporate control as just one commercial service among many, or else wring their hands with concern over a company coming to have such power over the public sphere. But managerialists would see the search engine as one organization among others, comparable to other private entities that have long been part of the public sphere, such as newspapers.

But a sound analysis of the politics of search engines need not depend on analogies with past technologies; reasoning by analogy is a function of legal reasoning. Managerialism, which is perhaps more a descendant of business reasoning, would ask how, in fact, search engines make policy decisions and how this affects political outcomes. It does not assume prima facie that a powerful or important corporate policy is wrong. It does ask what the best corporate policy is, given a particular sector.

References

Bracha, Oren, and Frank Pasquale. “Federal Search Commission? Access, Fairness, and Accountability in the Law of Search.” Cornell L. Rev. 93 (2007): 1149.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The Information Society 16.3 (2000): 169-185.

Pasquale, Frank A. “Dominant search engines: an essential cultural & political facility.” (2011).

industrial technology development and academic research

I now split my time between industrial technology (software) development and academic research.

There is a sense in which both activities are “scientific”. They both require the consistent use of reason and investigation to arrive at reliable forms of knowledge. My industrial and academic specializations are closely enough aligned that both aim to create some form of computational product. These activities are constantly informing one another.

What is the difference between these two activities?

One difference is that industrial work pays a lot better than academic work. This is probably the most salient difference in my experience.

Another difference is that academic work is more “basic” and less “applied”, allowing it to address more speculative questions.

You might think that the latter kind of work is more “fun”. But really, I find both kinds of work fun. Fun-factor is not an important difference for me.

What are other differences?

Here’s one: I find myself emotionally moved and engaged by my academic work in certain ways. I suppose that since my academic work straddles technology research and ethics research (I’m studying privacy-by-design), one thing I’m doing when I do this work is engaging and refining my moral intuitions. This is rewarding.

I do sometimes also feel that it is self-indulgent, because one thing that thinking about ethics isn’t is taking responsibility for real change in the world. And here I’ll express an opinion that is unpopular in academia, which is that being in industry is about taking responsibility for real change in the world. This change can benefit other people, and it’s good when people in industry get paid well because they are doing hard work that entails real risks. Part of the risk is the responsibility that comes with action in an uncertain world.

Another critically important difference between industrial technology development and academic research is that while the knowledge created by the former is designed foremost to be deployed and used, the knowledge created by the latter is designed to be taught. As I get older and more advanced as a researcher, I see that this difference is actually an essential one. Knowledge that is designed to be taught needs to be teachable to students, and students generally come from a shallower and narrower background than adult professionals. Knowledge that is designed to be deployed and used need only be truly shared by a small number of experienced practitioners. Most of the people affected by that knowledge will be affected by it indirectly, via artifacts. It can be opaque to them.

Industrial technology production changes the way the world works and makes the world more opaque. Academic research changes the way people work, and reveals things about the world that had been hidden or unknown.

When straddling both worlds, it becomes quite clear that while students are taught that academic scientists are at the frontier of knowledge, ahead of everybody else, they are actually far behind what’s being done in industry. The constraint that academic research must be teachable drags its form of science behind what is being done regularly in industrial practice.

This is humbling for academic science. But it doesn’t make it any less important. Rather, it makes it even more important, though not because of the heroic status of academic researchers at the top of the pyramid of human knowledge. It’s because the health of the social system depends on its renewal through the education system. If most knowledge is held in secret and deployed but not passed on, we will find ourselves in a society that is increasingly mysterious and out of our control. Academic research is about advancing the knowledge that is available for education. Its effects can take half a generation or longer to come to fruition. Against this long-term signal, the oscillations that happen within industrial knowledge, which are very real, fade into the background. Though not before having real and often lasting effects.

Bay Area Rationalists

There is an interesting thing happening. Let me just try to lay down some facts.

There are a number of organizations in the Bay Area right now up to related things.

  • Machine Intelligence Research Institute (MIRI). Researches the implications of machine intelligence for the world, especially the possibility of super-human general intelligences. Recently changed their name from the Singularity Institute due to the meaninglessness of the term Singularity. I interviewed their Executive Director (CEO?), Luke Muehlhauser, a while back. (I followed up on some of the reasoning there with him here).
  • Center for Applied Rationality (CFAR). Runs workshops training people in rationality, applying cognitive science to life choices. Trying to transition from appearing to pitch a “world-view” to teaching a “martial art” (I’ve sat in on a couple of their meetings). They aim to grow a large network of people practicing these skills, because they think it will make the world a better place.
  • Leverage Research. A think-tank with an elaborate plan to save the world. Their research puts a lot of emphasis on how to design and market ideologies. I’ve been told that they recently moved to the Bay Area to be closer to CFAR.

Some things seem to connect these groups. First, socially, they all seem to know each other (I just went to a party where a lot of members of each group were represented.) Second, the organizations seem to get the majority of their funding from roughly the same people–Peter Thiel, Luke Nosek, and Jaan Tallinn, all successful tech entrepreneurs turned investors with interest in stuff like transhumanism, the Singularity, and advancing rationality in society. They seem to be employing a considerable number of people to perform research on topics normally ignored in academia and spread an ideology and/or set of epistemic practices. Third, there seems to be a general social affiliation with LessWrong.com; I gather a lot of the members of this community originally networked on that site.

There’s a lot that’s interesting about what’s going on here. A network of startups, research institutions, and training/networking organizations is forming around a cluster of ideas: the psychological and technical advancement of humanity, being smarter, making machines smarter, being rational or making machines be rational for us. It is, as far as I can tell, largely off the radar of “mainstream” academic thinking. As a network, it seems concerned with gathering effective and connected people into itself. But it’s not drawing from many established bases of effective and connected people (the academic establishment, the government establishment, the finance establishment, “old boys networks” per se, etc.) but rather is growing its own base of enthusiasts.

I’ve had a lot of conversations with people in this community now. Some, but not all, would compare what they are doing to the starting of a religion. I think that’s pretty accurate based on what I’ve seen so far. Where I’m from, we’ve always talked about Singularitarianism as “eschatology for nerds”. But here we have all these ideas–the Singularity, “catastrophic risk”, the intellectual and ethical demands of “science”, the potential of immortality through transhumanist medicine, etc.–really motivating people to get together, form a community, advance certain practices and investigations, and proselytize.

I guess what I’m saying is: I don’t think it’s just a joke any more. There is actually a religion starting up around this. Granted, I’m in California now, and as far as I can tell there are like sixty religions out here I’ve never heard of (I chalk it up to the low population density and suburban sprawl). But this one has some monetary and intellectual oomph behind it.

Personally, I find this whole gestalt both attractive and concerning. As you might imagine, diversity is not this group’s strong suit. And its intellectual milieu reflects its isolation from the academic mainstream in that it lacks the kind of checks and balances afforded by multidisciplinary politics. Rather, it appears to have more or less declared the superiority of its methodological and ideological assumptions to its own satisfaction and convinced itself that it’s ahead of the game. Maybe that’s true, but in my own experience, that’s not how it really works. (I used to share most of the tenets of this rationalist ideology, but have deliberately exposed myself to a lot of other perspectives since then [I think that taking the Bayesian perspective seriously necessitates taking the search for new information very seriously]. It turns out I used to be wrong about a lot of things.)

So if I were to make a prediction, it would go like this. One of these things is going to happen:

  • This group is going to grow to become a powerful but insulated elite with an expanded network and increasingly esoteric practices. An orthodox cabal seizes power where they are able, and isolates itself into certain functional roles within society with a very high standard of living.
  • In order to remain consistent with its own extraordinarily high epistemic standards, this network starts to assimilate other perspectives and points of view in an inclusive way. In the process, it discovers humility, starts to adapt proactively and in a decentralized way, losing its coherence but perhaps becomes a general influence on the preexisting societal institutions rather than a new one.
  • Hybrid models. Priesthood/lay practitioners. Or denominational schism.

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.

Don’t use Venn diagrams like this

Today I saw this whitepaper by Esri about their use of open source software. It’s old, but it still kept my attention.

There are several reasons why this paper is interesting. One is that it reflects the trend of companies that once used FUD tactics around open source software now singing a soothing song of compatibilism. It makes an admirable effort to explain the differences between open source software, proprietary software, and open standards to its enterprise client audience. That is the good news.

The bad news is that since this new compatibilism is just bending to market pressure after the rise of successful open source software complements, it lacks an understanding of why the open source development process has caused those market successes. Of course, proprietary companies have good reason to blur these lines, because otherwise they would need to acknowledge the existence of open source substitutes. In Esri’s case, that would mean products like the OpenGeo Suite.

I probably wouldn’t have written this post if it were not for this Venn diagram, which is presented with the caption “A hybrid relationship”:

I don’t think there is a way to interpret this diagram that makes sense. It correctly identifies that Closed Source, Open Source, and Open Standards are different. But what do the overlapping regions represent? Presumably they are meant to indicate that a system may be both open source and use open standards, or have open standards and be closed, or…be both open and closed?

It’s a subtle point, but the semantics of set containment implied by the Venn diagram really don’t apply here. A system that is a ‘hybrid’ of closed and open software is not “both” closed and open in the same way that closed software using open standards is “both” closed and open. Rather, the hybrid system is just that, a hybrid, which means that its architecture is going to suffer tradeoffs as different components have different properties.
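To make the distinction concrete, here is a minimal sketch in Python (the component names are made up for illustration, not taken from the Esri whitepaper). The idea is that “uses open standards” is a property a single piece of software, open or closed, can have, whereas “hybrid” only describes a composition of parts with different licenses:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Component:
    name: str
    open_source: bool          # license property of this component
    uses_open_standards: bool  # interface property, orthogonal to the license

@dataclass
class System:
    components: List[Component]

    def is_open_source(self) -> bool:
        # The system is open source only if every component is; otherwise
        # calling the whole system "both open and closed" is a category error.
        return all(c.open_source for c in self.components)

    def uses_open_standards(self) -> bool:
        # Open standards can genuinely coexist with either license.
        return any(c.uses_open_standards for c in self.components)

# Closed software that speaks open standards really does sit in two circles.
proprietary_gis = Component("proprietary GIS server", open_source=False,
                            uses_open_standards=True)

# A "hybrid" deployment is just a composition; the openness lives in the parts.
hybrid = System([
    proprietary_gis,
    Component("PostGIS-style database", open_source=True,
              uses_open_standards=True),
])

print(hybrid.is_open_source())       # False: the whole is not open source
print(hybrid.uses_open_standards())  # True: a property the whole can have
```

In this framing, the overlap between “closed source” and “open standards” is meaningful, while an “overlap” between closed and open source can only be read as a statement about individual components, not about the system itself.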

I don’t think that the author of this whitepaper was deliberately trying to obscure this idea. But I think that they didn’t know or care about it. That’s a problem, because it’s marketing material like this that clouds the picture of the value of open source. At a pointy-haired managerial level, one can answer the question “why aren’t you using more open source software” with a glib, “oh, we’re using a hybrid model, tailored to our needs.” But unless you actually understand what you’re talking about, your technical stack may still be full of buggy and unaccountable software, without you even knowing it.

The open source acqui-hire

There’s some interesting commentary around Twitter’s recent acquisition, Whisper Systems:

Twitter has begun to open source the software built by Whisper Systems, the enterprise mobile security startup it acquired just three weeks ago. …This move confirms the, well, whispers that the Whisper Systems deal was mostly made for acqui-hire purposes.

Another acquisition like this that comes to mind is Etherpad, which Google bought (presumably to get the Etherpad team working on Wave) and then open sourced. The logic of these acquisitions is that the talent is what matters; the IP is incidental, or perhaps better served by an open community.

When I talk to actual or aspiring entrepreneurs, they often assume that building their product out in the open as open source would spoil their business. For one thing, they argue, competitors will launch their own startups off of the openly released innovation. Then they will miss their chance at a big exit, because there will be no IP to tempt Facebook or whoever else to buy them out.

These open source acqui-hires belie these concerns. Demonstrating talent is part of what makes one acquirable. Logically, then, starting a competing company based on technology in which you don’t have talent makes you less competitive from the perspective of a market exit. It’s hard to see what kind of competitive advantage the copycat company would have, really, since it doesn’t have the expertise in the technology that comes from building it. If it does find some competitive advantage (perhaps its founders speak a foreign language and so can target a different market), then the two companies are natural partners, not natural competitors.

One can take this argument further. Making open and available software is one of the best ways for a developer to make others aware of their talents and to increase the demand for (and value of) their own labor. So the talent in an open source company should be, on average, more valuable in case of an acqui-hire.

This doesn’t seem like a bad way out for a talented entrepreneur. Why, then, is this not a more well-known model for startups?

One reason is that the real winners in the startup scene are not the entrepreneurs but the funders, and to the funders it is more worthwhile to invest in several different technologies with a small chance of selling one off big than to invest in the market value of their entrepreneurs. After all, venture capitalists are in the same war for engineering talent as Google, Facebook, and the rest. This should become less of an issue, however, as crowdfunding becomes more viable.