Trade secrecy, “an FDA for algorithms”, and software bills of materials (SBOMs) #SecretAlgos

At the Conference on Trade Secrets and Algorithmic Systems at NYU today, the target of most critiques is the use of trade secrecy by proprietary technology providers to prevent courts and the public from seeing the inner workings of algorithms that determine people’s credit scores, health care, criminal sentencing, and so on. The overarching theme is that companies sometimes use trade secrecy to hide the ways their software is bad, and that this is a problem.

In one panel, the question came up of whether an “FDA for Algorithms” is on the table, referring to the Food and Drug Administration’s approval process for pharmaceuticals. It was not dealt with in much depth, which is too bad, because it is a nice example of how government oversight of potentially dangerous technology can be managed in a way that respects trade secrecy.

According to this article, when filing for FDA approval, a company can declare some of its ingredients to be trade secrets. The upshot is that those trade secrets are not subject to FOIA requests. However, the ingredients are still considered by the FDA when approval is granted.

It so happens that in the cybersecurity policy conversation (more so than in privacy), the question of opening “ingredients” to inspection has been coming up in a serious way. NTIA has been hosting multistakeholder meetings about standards and policy around Software Component Transparency. In particular, they are encouraging the standardization of Software Bills of Materials (SBOMs), such as the Linux Foundation’s Software Package Data Exchange (SPDX). SPDX (and SBOMs more generally) describe the “ingredients” in a software package at a higher level of abstraction than the full source code, but with enough specificity to be useful for security audits.
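
To make the “ingredients list” idea concrete, here is a minimal sketch in Python of the kind of component manifest an SBOM carries. It is a simplified illustration only; the field names and example components are chosen for the example and do not follow the actual SPDX schema.

import json

# A simplified, illustrative bill of materials: each entry names a component
# shipped inside the package, without exposing the package's own source code.
sbom = {
    "document": "example-credit-scoring-service",
    "components": [
        {"name": "openssl", "version": "3.0.0", "supplier": "OpenSSL Project", "license": "Apache-2.0"},
        {"name": "numpy", "version": "1.16.0", "supplier": "NumPy Developers", "license": "BSD-3-Clause"},
        {"name": "scoring-model", "version": "2.3", "supplier": "Vendor (proprietary)", "license": "Proprietary"},
    ],
}

# An auditor can scan the ingredient list for known-vulnerable component
# versions without ever seeing the vendor's source.
print(json.dumps(sbom, indent=2))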

It’s possible that a similar method could be used for algorithmic audits with fairness (i.e., nondiscrimination compliance) and privacy (i.e., information sharing with third parties) in mind. Particular components could be audited (perhaps in a way that protects trade secrecy), and then those components could be listed as “ingredients” by other vendors.
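
Extending the sketch above, one could imagine each audited component carrying an attestation of what was checked and by whom, so that downstream vendors could list pre-audited components in their own manifests. This is purely hypothetical; none of these fields exist in SPDX or any current standard.

import json

# Hypothetical: a component entry that carries audit attestations instead of
# disclosing the component's inner workings.
audited_component = {
    "name": "scoring-model",
    "version": "2.3",
    "supplier": "Vendor (proprietary)",
    "audits": [
        {"auditor": "Example Audit Firm", "scope": "fairness (nondiscrimination)", "date": "2018-11-01"},
        {"auditor": "Example Audit Firm", "scope": "privacy (third-party data sharing)", "date": "2018-11-01"},
    ],
}

# A downstream vendor could include this entry as an "ingredient" in its own
# bill of materials and point regulators to the attestations.
print(json.dumps(audited_component, indent=2))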

The paradox of ‘data markets’

We often hear that companies are “selling our data”, or that we are “paying for services” with our data. Data brokers literally buy and sell data about people. There are also expensive data sources and data sets of other kinds. There are, undoubtedly, one or more data markets.

We know that classically, perfect competition in markets depends on perfect information. Buyers and sellers on the market need to have equal and instantaneous access to information about utility curves and prices in order for the market to price things efficiently.

Since the bread and butter of the data market is information asymmetry, we know that data markets can never be perfectly competitive. If they were, the data market would cease to exist, because the perfect information condition would entail that there is nothing left to buy and sell.
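
One way to make this precise, as a sketch: treat a buyer’s willingness to pay for a dataset D as the value of the information it adds to what the buyer already knows, K (the notation here is introduced for illustration):

\[
\mathrm{WTP}(D) = V(K \cup D) - V(K)
\]

Under perfect information, D is already contained in K for every buyer, so WTP(D) = 0 and no data changes hands. A data market can only clear when some participants lack the information being sold, which is exactly the asymmetry that perfect competition rules out.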

Data markets therefore have to be imperfectly competitive. But since these are the markets that perfect information in other markets might depend on, this imperfection is viral. The vicissitudes of the data market are the vicissitudes of the economy in general.

The upshot is that the challenges of information economics are not only those that appear in special sectors like insurance markets. They are at the heart of all economic activity, and there are no equilibrium guarantees.