instrumental realism and reproducibility in science and society
In Instrumental Realism, Ihde gives a complimentary treatment of Ackerman’s Data, Instruments, and Theory (1985), which is positioned as a rebuttal to Kuhn. It is a defense of the idea of scientific progress, which is so disliked by critical scholarship. The key issue is the relativistic attacks on scientific progress that point out, for example, the ways in which theory shapes observation, thereby undermining the objectivity of observation. Ackerman’s rebuttal is that science progresses not through the advance of theory, but through the advance of instrumentation. Instruments allow data to be collected independently of theory. This creates and bounds “data domains”: fields of “data text” that can then become the site of scientific controversy and resolution.
The paradigmatic scientific instruments in Ackerman’s analysis are the telescope and the microscope. But it’s worthwhile thinking about what this means for the computational tools of “data science”.
Certainly, a great deal of work has gone into the design and standardization of computational tools, and these tools work with ever-increasing speed and robustness.
One of the most controversial claims in research today is the idea that the design and/or use of these computational tools encodes some kind of bias that threatens the objectivity of their results.
One story, perhaps a straw man, for how this can happen is this: the creators of these tools have (perhaps unconscious) theoretical presuppositions that are the psychological encoding of political power dynamics. These psychological biases impact their judgment as they use the tools. The sociotechnical system is therefore biased because the people in it are biased.
Ackerman’s line of argument suggests that the tools, if well designed, will create a “data domain” that might be interpreted in a biased way, but that this concern is separable from the design of the tools themselves.
A stronger (but then perhaps even harder to defend) argument would be that the tools themselves are designed in such a way that the data domain is biased.
Notably, the question of scientific objectivity depends on a rather complex and therefore obscure supply chain of hardware and software. Locating the bias within it must be extraordinarily difficult. In general, the solution to handling this complexity must be modularity and standardization: each component is responsible for something small and well understood, which provides a “data domain” available for downstream use. This is indeed what the API design of software packages accomplishes. The individual components are tested for reproducible performance and are so robust that, like most infrastructure, we take them for granted.
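To make the modularity point concrete, here is a minimal sketch in Python of what such a component looks like: one small, well-understood responsibility, exposed through an API and tested in isolation. The function names and data are invented for illustration, not drawn from any particular package.

```python
from typing import Sequence

def normalize(counts: Sequence[float]) -> list[float]:
    """Scale raw counts to proportions. The output is the 'data domain'
    that downstream analyses consume without re-deriving it."""
    total = sum(counts)
    if total == 0:
        raise ValueError("cannot normalize an empty or all-zero series")
    return [c / total for c in counts]

# Tested for reproducible behavior in isolation: this is what lets us
# take the component for granted, like infrastructure.
def test_normalize() -> None:
    assert normalize([2, 2]) == [0.5, 0.5]
    assert normalize([1, 3]) == [0.25, 0.75]

if __name__ == "__main__":
    test_normalize()
    print(normalize([10, 30, 60]))  # [0.1, 0.3, 0.6]
```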
The push for “reproducibility” in computational science is a further example of the refinement of scientific instruments. Today, we see the effort to provide duplicable computational environments with Docker containers, preserved random seeds, and appropriately versioned dependencies, so that the results of a particular scientific project are maintained despite the constant churn of software, hardware, and networks that undergirds scientific communication and practice (let alone all the other communication and practice it undergirds).
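As a hedged sketch of what this ritual looks like in practice: fix the random seed and record the exact environment alongside the result. The experiment below is a stand-in, and the container layer that would pin the operating system and interpreter is only gestured at in the comments.

```python
import json
import platform
import random
import sys

SEED = 42  # preserved alongside the project so results can be regenerated

def run_experiment(seed: int) -> float:
    """Stand-in for a real analysis: a seeded stochastic computation."""
    random.seed(seed)
    return sum(random.random() for _ in range(1000)) / 1000

result = run_experiment(SEED)

# Record the provenance needed to rebuild the environment later; a Docker
# image and a pinned requirements file would freeze the layers below this.
provenance = {
    "seed": SEED,
    "python": sys.version,
    "platform": platform.platform(),
    "result": result,
}
print(json.dumps(provenance, indent=2))
```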
The fetishization of technology today has many searching for the location of societal ills within the modules of this great machine. If society, running on this machine, has a problem, there must be a bug in it somewhere! But the modules are all very well tested. It is far more likely that the bug is in their composition. An integration error.
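A toy illustration, invented for this post, of what an integration error looks like in code: two functions that each pass their own unit tests, but whose composition fails because one speaks in kilometers and the other expects meters.

```python
def distance_travelled_km(speed_kmh: float, hours: float) -> float:
    """Correct in isolation: returns kilometers."""
    return speed_kmh * hours

def fuel_needed(distance_m: float, metres_per_litre: float) -> float:
    """Also correct in isolation: expects meters."""
    return distance_m / metres_per_litre

# Each module passes its own tests:
assert distance_travelled_km(100, 2.0) == 200
assert fuel_needed(200_000, 10_000) == 20

# The integration error: kilometers handed to a function expecting meters.
# No unit test on either module alone will catch it.
fuel = fuel_needed(distance_travelled_km(100, 2.0), metres_per_litre=10_000)
print(fuel)  # 0.02 litres instead of 20, off by a factor of 1000
```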
The solution (if there is a solution, and if there isn’t, why bother?) has to be to instrument the integration.
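Continuing the toy example above, one hedged sketch of what “instrumenting the integration” might mean: make the values crossing each module boundary observable, and test the composed pipeline end to end rather than each part alone. The decorator and stages here are hypothetical.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

def instrumented(fn):
    """Log every value that crosses this module boundary."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        logging.info("%s%r -> %r", fn.__name__, args, result)
        return result
    return wrapper

@instrumented
def to_metres(distance_km: float) -> float:
    # the boundary conversion the unit tests never exercised
    return distance_km * 1000

@instrumented
def fuel_needed(distance_m: float, metres_per_litre: float = 10_000) -> float:
    return distance_m / metres_per_litre

# An integration test asserts a property of the composition itself,
# which is where the earlier bug lived.
assert fuel_needed(to_metres(200)) == 20
print("integration check passed")
```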