Digifesto

Category: science

how the science is going

Some years ago I entered a PhD program with the intention to become a Scientist. I had some funny ideas about what this meant that were more informed by reading philosophy than by scientific practice. By Science, I was thinking of something more like the German Wissenschaft than the more narrowly circumscribed institutions that dominate the United States today. I did, and still do, believe in the pursuit of knowledge through rigorous inquiry.

Last week I attended the Principal Investigator (PI) meetings for the “Designing Accountable Software Systems” (DASS) program of the National Science Foundation. Attending those meetings, I at last felt like I had made it. I am a scientist! Also in attendance were a host of colleagues whom I respect, with a shared interest in how to make “software systems” (a good name for the ubiquitous “digital” infrastructure that pervades everything now) more accountable to law and social norms. There were a bunch of computer scientists, but also many law professors, and some social scientists as well. What we were doing there was coming up with ideas for what the next project call under this program should be about. It was an open discussion about the problems in the world, and the role of science in fixing them. There’s a real possibility that these conversations will steer the future of research funding and, in a small way, nudge the future forward. Glorious.

In the past year, I’ve had a few professional developments that reinforce my feeling of being on the right track. A couple more of the grants I’ve applied to have landed. This has shifted my mindset about my work from one of scarcity (“Ack! What happens if I run out of funding!”) to one of abundance (“Ack! How am I going to hire somebody who I can pay with this funding!”). People often refer to academic careers as “precarious” up until one gets a tenure-track job, or even tenure. I’ve felt that precariousness in my own career. I still feel lower on the academic totem pole because I haven’t found a tenure-track job as a professor, or a well-remunerated industrial research lab job. So I don’t take my current surplus for granted, and am well aware of all the talent swirling around that is thirsty for opportunities. But what people say is that it’s hard to get your first grant, and that it gets easier after that. There are full-time “soft money” researchers in my network who are career inspirations.

Another development is that I’ve been granted Principal Investigator status at New York University School of Law, which means I can officially manage my own grants there without a professor technically signing off or supervising the work. This is a tremendous gift of independence from Dr. Katherine Strandburg, my long-time mentor and supervisor at NYU’s Information Law Institute, where I’ve been affiliated for many years. It would be impossible to overstate Dr. Strandburg’s gentle and supportive influence on my postdoctoral work. I have been so fortunate to work with such a brilliant, nimble, open-minded, and sincerely good professor for the years I have been at NYU, both in direct collaboration and at the Information Law Institute, which, in her image, is an endless source of joyful intellectual stimulation.

Law schools are a funny place to do scientific research. They are naturally interdisciplinary in terms of scholarship — law professors work in many social scientific and technical disciplines, besides their own discipline of law. They are funded primarily by law student tuition, and so they are in many ways professional schools. But law is an inherently normative field — laws are norms — and so the question of how to design systems according to ethics and justice is part of the trade. Today, with the ubiquity of “software systems” — the Internet, “Big Data” a decade ago, “AI” today — the need for a rigorous science of sociotechnical systems is ever-present. Law schools are fine places to do that work.

However, law schools are often short on technical talent. Fortunately, I am also appointed at the International Computer Science Institute (ICSI), a non-profit lab based in Berkeley, California, that spun out of UC Berkeley’s (UCB) Computer Science department. I also have PI status at ICSI, and am running a couple of grants out of there at the moment.

Working as a remote PI for ICSI is a very funny “return” to Berkeley. I did my doctorate at UCB’s Information School but completed the work in New York, working with Helen Nissenbaum, who at the time was leaving NYU (and the Information Law Institute she co-founded with Kathy Strandburg) to start the Digital Life Initiative at Cornell Tech. I never expected to come “back” to Berkeley, in no small part because I discovered in California that I am not, in fact, a Californian. But the remote appointment, at a place free from the university politics and bureaucracy that drive people over there crazy, fits just right. There are computer science all-stars still at ICSI, whose groundbreaking work is something to look up to, and a lot of researchers in my own generation who work on very related problems of technical accountability. It is a good place to do scientific work oriented towards objective evaluation and design of sociotechnical systems.

All this means that I feel that somehow, despite pursuing an aggressively — some would say foolishly — interdisciplinary path, I have a certain amount of security in my position as a scientist. This has been the product of luck and hard work and stubbornness and to some extent an inability to do anything else. How many times have I cursed my circumstances and decisions, with setback after setback and failure after failure? I’ve lost count. The hard truth facing anybody pursuing Wissenschaft in the 21st century is that knowledge is socially constructed, and that the pursuit of objective knowledge must be performed from a very carefully constructed social position. But such positions do exist.

What now? Well, I find that now that I am actually positioned to make progress on the research projects that I see as most dear, I am behind on all of them. Projects that were launched years ago still haven’t manifested. I am painfully aware, every day, of the gap between what I have set out to accomplish and what I can tell the world I’ve done. Collaborating with other talented, serious people can be hard work; sometimes personality, my own or others’, makes it harder. I look on my past publications and they seem like naive sketches towards something better. I present my current work to colleagues and sometimes they find it hard to understand. It is isolating. But I do understand that this is to some extent exactly how it must be, if I’m making real progress on my own agenda. I am very hopeful that my best work is ahead of me.

So many projects!

I’m very happy with my current projects. Things are absolutely clicking. Here is some of what’s up.

I’ve been working alongside fellow DARPA grant recipients Dr. Liechty, Zak David, and Dr. Chris Carroll to develop SHARKFin, an open source system for Simulating Heterogeneous Agents with Rational Knowledge and the Financial system. SHARKFin builds on HARK, a system for macroeconomic modeling, but adds integration with complex ABM-based simulations of the financial system. The project straddles the gap between traditional macroeconomic theory and ABM-driven financial research in the style of Blake LeBaron and Richard Bookstaber. I’m so lucky to be part of the project; it’s a fascinating and important set of problems.
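
For readers who haven’t seen this style of modeling, here is a toy sketch in Python of the kind of agent-based market dynamics at issue. It is emphatically not SHARKFin’s actual code; every name and parameter is invented for illustration. Heterogeneous agents mix “fundamentalist” and “chartist” expectation rules, and the price emerges from their aggregate demand.

```python
# Toy sketch (not SHARKFin itself): heterogeneous agents trade one asset;
# the price moves with excess demand. Illustrates the style of ABM involved.
import random

random.seed(0)

N_AGENTS = 100
N_STEPS = 200
price = 100.0
fundamental = 100.0

# Heterogeneous beliefs: each agent mixes a "fundamentalist" pull toward
# fundamental value with a "chartist" extrapolation of the last price move.
agents = [{"fund_weight": random.random()} for _ in range(N_AGENTS)]

last_change = 0.0
history = [price]
for t in range(N_STEPS):
    demand = 0.0
    for a in agents:
        expected = (a["fund_weight"] * fundamental
                    + (1 - a["fund_weight"]) * (price + last_change))
        demand += 1.0 if expected > price else -1.0
    # Market impact: the price adjusts in proportion to net excess demand.
    new_price = price + 0.01 * demand + random.gauss(0, 0.1)
    last_change = new_price - price
    price = new_price
    history.append(price)

print(f"final: {price:.2f}, min: {min(history):.2f}, max: {max(history):.2f}")
```

Even a toy like this exhibits the core tension discussed next: no single agent’s expectation rule is a model of the whole system, so Rational Expectations does not apply straightforwardly.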

Of particular interest is the challenge of reconciling the Rational Expectations assumption in economics — the idea that agents in a model know the model that they are in and act rationally within it — with the realities of the complexity of the financial system and the intractability that perhaps introduces into the model. The key question we seem to keep asking ourselves is: Is this publishable in an economics journal? Being perhaps too contrarian, I wonder: what does it mean for economics-driven public policy if intractable complexity is endogenous to the system? In a perhaps more speculative and ambitious project with Aniket Kesari, we are exploring inefficiencies in the data economy due to problems with data market microstructure.

Because information asymmetries and bounded rationality increase this complexity, my core NSF research project, which was to develop a framework for heterogeneous agent modeling of the data economy, runs directly into these modeling tractability problems. Luckily for me, I’ve been attending meetings of the Causal Influence Working Group, which is working on many foundational issues in influence diagrams. This exposure has been most useful in helping me think through the design of multi-agent influence diagram models, which is my preferred modeling technique because of how naturally it handles situated information flows.
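
To make the technique concrete, here is a minimal sketch of a multi-agent influence diagram as a data structure: a DAG of chance, decision, and utility nodes, where decision and utility nodes are owned by agents. This is illustrative Python of my own, not the working group’s tooling, and the toy data-economy example is invented.

```python
# Minimal sketch of a multi-agent influence diagram (MAID): a DAG whose
# nodes are chance, decision, or utility nodes, the latter two owned by
# agents. Names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    kind: str                    # "chance", "decision", or "utility"
    owner: Optional[str] = None  # agent name for decision/utility nodes
    parents: List[str] = field(default_factory=list)

# A two-agent data-economy toy: a user decides what data to share; a firm
# decides a price; each has its own utility node.
nodes = [
    Node("user_type", "chance"),
    Node("share", "decision", "user", parents=["user_type"]),
    Node("price", "decision", "firm", parents=["share"]),
    Node("u_user", "utility", "user", parents=["user_type", "share", "price"]),
    Node("u_firm", "utility", "firm", parents=["share", "price"]),
]

# The information structure lives in the decision nodes' parent sets: the
# firm observes "share" but not "user_type" -- a situated information flow.
for n in nodes:
    print(f"{n.kind:8} {n.name:10} owner={n.owner} sees={n.parents}")
```

What the firm’s decision can condition on is exactly its parent set, so information flow norms can be read off, and intervened on, structurally.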

On the privacy research front, I’m working with Rachel Cummings on integrating Differential Privacy and Contextual Integrity. These two frameworks are like peanut butter and jelly — quite unlike each other, and better together. We’ve gotten a good reception for these ideas at PLSC ’22 and PEPR ’22, and will be presenting a poster about it this week at TPDP ’22. I think influence diagrams are going to help us with this integration as well!
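
For concreteness, here is the technical core of one half of that pairing: a minimal sketch of the Laplace mechanism, the canonical differentially private release. The database and parameters are invented; a Contextual Integrity analysis would then ask whether the resulting noisy flow is appropriate to its social context.

```python
# Minimal sketch of the Laplace mechanism: release a query answer with
# noise scaled to the query's sensitivity, giving epsilon-DP.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: a counting query (sensitivity 1) over a toy database recording
# which members of some social context have a sensitive attribute.
database = [1, 0, 1, 1, 0, 1]
true_count = sum(database)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, released count: {noisy_count:.2f}")
```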

Meanwhile, I have an ongoing project with David Shekman wherein we are surveying the legal and technical foundations for fiduciary duties for computational systems. I’ve come to see this as the right intersection between Contextual Integrity and aligned data policy initiatives, on the one hand, and the AI Safety research agenda, specifically AI Alignment problems, on the other. While often considered a different disciplinary domain, I see this problem as the flip side of the general data economy problem. I expect the results, once they start manifesting, to spill over onto each other.

With my co-PIs, we are exploring the use of ABMs for software accountability. The key idea here is that computational verification of software accountability requires a model of the system’s dynamic environment — so why not build the model and test the consequences of the software in silico? So far in this project we have used classic ABM models which do not require training agents, but you could see how the problem expands and overlaps with the economic modeling issues raised above. But this project makes us confront quite directly the basic questions of simulations as a method: how can they be calibrated or validated? When and how should they be relied on for consequential policy decisions?
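
A toy sketch of what this “in silico” testing can look like, with an invented lending rule and an invented population model; the point is only the shape of the method, not its substance.

```python
# Toy sketch of accountability testing in silico: run the software under
# test against a simulated population and measure a consequence of interest.
import random

random.seed(1)

def loan_software(income, years_employed):
    """The software under test: a simple (invented) approval rule."""
    return income > 40_000 and years_employed >= 2

def sample_agent(group):
    # Two subpopulations with different income distributions: a stand-in
    # for the "dynamic environment" the verification needs a model of.
    base = 45_000 if group == "A" else 38_000
    return {
        "group": group,
        "income": random.gauss(base, 10_000),
        "years_employed": random.randint(0, 10),
    }

population = [sample_agent(random.choice("AB")) for _ in range(10_000)]

approvals = {"A": [], "B": []}
for agent in population:
    approvals[agent["group"]].append(
        loan_software(agent["income"], agent["years_employed"]))

# An accountability check might assert that these rates do not diverge
# beyond some legally motivated threshold.
for g in "AB":
    rate = sum(approvals[g]) / len(approvals[g])
    print(f"group {g}: approval rate {rate:.2%}")
```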

For fun, I have joined the American Society for Cybernetics (ASC), which has recently started a new mailing list for “conversations”. It’s hard to overstate how fascinating cybernetics is as a kind of mirror phenomenon to contemporary AI, computer science, and economics. Randy Whitaker, who I’m convinced is the world’s leading expert on the work of Humberto Maturana, is single-handedly worth the price of admission to the mailing list, which is just the ASC membership fee. If you have any curiosity about the work of Maturana and Varela and their ‘biology of cognition’ work, this community is happy to discuss its contextual roots. Many members of ASC knew Maturana and Francisco Varela personally, not to mention others like Gregory Bateson and Heinz von Foerster. My curiosity about ‘what happened to cybernetics?’ has been, perhaps, sated — I hope to write a little about what I’ve learned at some point. Folks at ASC, of course, insist that cybernetics will have its come-back any day now. Very helpfully, through my conversations at ASC I’ve managed to convince myself that many of the more subtle philosophical or psychological questions I’ve had can in fact be modeled using modified versions of the Markov Decision Process framework and other rational agent models, and that there are some very juicy results lying in wait there, if I could find the time to write them up.

I’m working hard but feel like at last I’m really making progress on some key problems that have motivated me for a long time. Transitioning to work on computational social simulations a few years ago scratched an itch that had been bothering me all through my graduate school training: mere data science, with its shallowly atheoretic and rigidly empirical approach, to me misses the point on so many research problems, where endogenous causal effects, systemic social structure, and sociotechnical organization are the phenomena of interest. Luckily, the computer science and AI communities seem to be opening up interest in just this kind of modeling, and the general science venues have long supported this line of work. So at last I believe I’ve found my research niche. I just need to stay funded so that these projects can come to fruition!

Buried somewhere in this work are ideas for a product or even a company, and I dream sometimes of building something organizational around this work. A delight of open source software as a research method is that technology transfer is relatively easy. We are hopeful that SHARKFin will have some uptake at a government agency, for example. HARK is still in its early stages, but I think it has the potential to evolve into a very powerful framework for modeling multi-agent systems and stochastic dynamic control, an area of AI that is currently overshadowed by Deep Everything but which I think has great potential in many applications.

Things are bright. My only misgiving is that it took me so long to find and embark on these research problems and methods. I’m impatient with myself, as these are all deep fields with plenty of hardworking experts and specialists who have been doing it for much longer than I have. Luckily I have strong and friendly collaborators who seem to think I have something to offer. It is wonderful to be doing such good work.

Review: Software Development and Reality Construction

I’ve discovered a wonderful book, Floyd et al.’s “Software Development and Reality Construction” (1992). One of the authors has made a PDF available online. It represents a strand of innovative thought in system design that I believe has many of the benefits of what has become “critical HCI” in the U.S. without many of its pitfalls. It is a playful compilation with many interesting intellectual roots.

From its blurb:

The present book is based on the conference Software Development and Reality Construction held at Schloß Eringerfeld in Germany, September 25 – 30, 1988. This was organized by the Technical University of Berlin (TUB) in cooperation with the German National Research Center for Computer Science (GMD), Sankt Augustin, and sponsored by the Volkswagen Foundation whose financial support we gratefully acknowledge. The conference was an interdisciplinary scientific and cultural event aimed at promoting discussion on the nature of computer science as a scientific discipline and on the theoretical foundations and systemic practice required for human-oriented system design. In keeping with the conversational style of the conference, the book comprises a series of individual contributions, arranged so as to form a coherent whole. Some authors reflect on their practice in computer science and system design. Others start from approaches developed in the humanities and the social sciences for understanding human learning and creativity, individual and cooperative work, and the interrelation between technology and organizations. Thus, each contribution makes its specific point and can be read on its own merit. But, at the same time, it takes its place as a chapter in the book, along with all the other contributions, to give what seemed to us a meaningful overall line of argumentation. This required careful editorial coordination, and we are grateful to all the authors for bearing with us throughout the slow genesis of the book and for complying with our requests for extensive revision of some of the manuscripts.

There are a few specific reasons I’m excited about this book.

First, it is explicitly about considering software development as a designing activity that is an aspect of computer science. In many U.S. scholarly contexts, there is an educational/research thrust towards removing “user interface design” from both the theoretical roots of computer science and the applied activity of software development. This has been a problem for recent scholarly debates about, for example, the ethics of data science and AI. When your only options are a humanities-oriented “design” field and a field of computer science “algorithms”, there is no room to explore the embodied practice of software development, which is where the rubber hits the road.

Second, this book has some fascinating authors. It includes essays from Heinz von Foerster, a second-order cybernetics Original Gangster. It also includes essays from Joseph Goguen, who is perhaps the true link between computer science theory (he was a theorist and programming language designer) and second-order cybernetics (Maturana and Varela, which would then influence Winograd and Flores’s critique of AI, but also Niklas Luhmann, who shows up in other critiques of AI from a legal perspective). Indeed, Goguen co-authored papers with Varela (1979) formalizing Varela’s notions of autonomy and autopoiesis in terms of category theory — a foundation that has had little uptake since. But this is not a fringe production. Donald Knuth, a computer science god-king, has an essay in the book about the process of creating and debugging TeX, the typesetting language. It is perhaps not possible to get deeper into the heart of the embodied practice of technical work than that. His essay begins with a poem from Piet Hein:

The road to wisdom?
Well, it’s plain
and simple to express:
Err
and err
and err again
but less
and less
and less.

The book recognizes its own interesting intellectual lineage. A diagram included in Raeithel’s article “Activity theory as a foundation for design” maps it out; that article stakes out a Marxist, Vygotskian take on design practice, positioned as an extreme view, to the (literally, on the page) left of the second-order cybernetics approach, which Raeithel positions as culminating in Winograd and Flores.

It is a sweeping, thoughtful book. Any one of its essays could, if more widely read, be a remedy for the kinds of conceptual errors made in today’s technical practice that lead to “unethical” or adverse outcomes. For example, Klein and Lyytinen’s “Towards a new understanding of data modelling” swiftly rejects notions of “raw data” and instead describes a goal-oriented, hermeneutic data modeling practice. What if “big data” techniques had been built on this understanding?

The book ultimately does not take itself too seriously. It has the whimsical character that the field of computer science could have in those early days, when it was open to conceptual freedom and exploration. The book concludes with a script for a fantastic play that captures the themes and ideas of the conference as a whole.

The play goes on for six pages. By the end, Alice discovers that she is in a “cyberworld”:

Oh, what fun it is. It’s a huge game that’s being played – all over this
cyberworld – if this is a world at all. How I wish I was one of them! I
wouldn’t mind being a Hacker, if only I might join – though of course I
should like to be a Cyber Queen, best.

I’ve only scratched the surface of this book. But I expect to be returning to it often in future work.

References

Floyd, C., Züllighoven, H., Budde, R., & Keil-Slawik, R. (Eds.). (1992). Software development and reality construction. Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-642-76817-0

Goguen, J. A., & Varela, F. J. (1979). Systems and distinctions; duality and complementarity. International Journal of General Systems, 5(1), 31-43.

Klein, H. K., & Lyytinen, K. (1992). Towards a new understanding of data modelling. In Software development and reality construction (pp. 203-219). Springer, Berlin, Heidelberg.

Raeithel, A. (1992). Activity theory as a foundation for design. In Software development and reality construction (pp. 391-415). Springer, Berlin, Heidelberg.

instrumental realism and reproducibility in science and society

In Instrumental Realism, Ihde gives a complimentary treatment of Ackerman’s Data, Instruments, and Theory (1985), which is positioned as a rebuttal to Kuhn. It is a defense of the idea of scientific progress, which is so disliked by critical scholarship. The key issue is the relativistic attack on scientific progress that points out, for example, the ways in which theory shapes observation, undermining the objectivity of observation. Ackerman’s rebuttal is that science does not progress through the advance of theory, but rather through the advance of instrumentation. Instruments allow data to be collected independently of theory. This creates and bounds “data domains”–fields of “data text” that can then be the site of scientific controversy and resolution.

The paradigmatic scientific instruments in Ackerman’s analysis are the telescope and the microscope. But it’s worthwhile thinking about what this means for the computational tools of “data science”.

Certainly, there has been a great amount of work done on the design and standardization of computational tools, and these tools work with ever increasing speed and robustness.

One of the most controversial points made in research today is the idea that the design and/or use of these computational tools encodes some kind of bias that threatens the objectivity of their results.

One story, perhaps a straw man, for how this can happen is this: the creators of these tools have (perhaps unconscious) theoretical presuppositions that are the psychological encoding of political power dynamics. These psychological biases impact their judgment as they use tools. This sociotechnical system is therefore biased as the people in it are biased.

Ackerman’s line of argument suggests that the tools, if well designed, will create a “data domain” that might be interpreted in a biased way, but that this concern is separable from the design of the tools themselves.

A stronger (but then perhaps even harder to defend) argument would be that the tools themselves are designed in such a way that the data domain is biased.
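
The difference between these two arguments can be made concrete with a simulation. In the following sketch (invented numbers, plain numpy), case 1 has a well-designed instrument whose clean data domain is then interpreted with a biased procedure, while case 2 has an instrument that distorts the data domain itself.

```python
# Contrast of the two claims: (1) good instrument, biased interpretation;
# (2) the instrument itself biases the data domain. Numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
true_values = rng.normal(10.0, 2.0, size=100_000)  # the phenomenon itself

# Case 1: a well-designed instrument gives unbiased, noisy measurements.
good_instrument = true_values + rng.normal(0.0, 1.0, size=true_values.size)

# A biased *interpretation* of that clean data domain: discard readings
# below a threshold (say, a theoretically motivated cut) before averaging.
biased_reading = good_instrument[good_instrument > 9.0].mean()

# Case 2: a biased *instrument* saturates and under-records high values,
# so the data domain itself is distorted before interpretation begins.
bad_instrument = (np.minimum(true_values, 11.0)
                  + rng.normal(0.0, 1.0, size=true_values.size))

print(f"truth:                        {true_values.mean():.2f}")
print(f"good instrument, raw mean:    {good_instrument.mean():.2f}")
print(f"good instrument, biased read: {biased_reading:.2f}")
print(f"bad instrument, raw mean:     {bad_instrument.mean():.2f}")
```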

Notably, the question of scientific objectivity depends on a rather complex and therefore obscure supply chain of hardware and software. Locating the bias in it must be extraordinarily difficult. In general, the solution to handling this complexity must be modularity and standardization: each component is responsible for something small and well understood, which provides a “data domain” available for downstream use. This is indeed what the API design of software packages is doing. The individual components are tested for reproducible performance and indeed are so robust that, like most infrastructure, we take them for granted.

The push for “reproducibility” in computational science is a further example of refinement of scientific instruments. Today, we see the effort to provide duplicable computational environments with Docker containers, with preserved random seeds, and appropriately versioned dependencies, so that the results of a particular scientific project are maintained despite the constant churn of software, hardware, and networks that undergird scientific communication and practice (let alone all the other communication and practice it undergirds).
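
The seed-pinning half of this is simple enough to show in a few lines; a minimal sketch, assuming nothing beyond numpy. The versioned-dependency and container half is handled outside the code, by a lockfile and a Dockerfile, for example.

```python
# Minimal reproducibility sketch: fix every random source and record the
# versions of key dependencies alongside the result.
import random
import sys

import numpy as np

SEED = 20220101
random.seed(SEED)
np.random.seed(SEED)

result = np.random.normal(size=5).round(4).tolist()

# Record provenance next to the result so the run can be repeated later.
provenance = {
    "seed": SEED,
    "python": sys.version.split()[0],
    "numpy": np.__version__,
}
print("result:", result)
print("provenance:", provenance)
```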

The fetishization of technology today has many searching for the location of societal ills within the modules of this great machine. If society, running on this machine, has a problem, there must be a bug in it somewhere! But the modules are all very well tested. It is far more likely that the bug is in their composition. An integration error.

The solution (if there is a solution, and if there isn’t, why bother?) has to be to instrument the integration.

Considering the Endless Frontier Act

As a scientist/research engineer, I am pretty excited about the Endless Frontier Act. Nothing would make my life easier than a big new pile of government money for basic research and technological prototypes awarded to people with PhDs. I’m absolutely all for it and applaud the bipartisan coalition moving it forward.

I am somewhat concerned, however, that the motivation for it is the U.S.’s fear of technological inferiority with respect to China. I’ll take the statement of Dr. Reif, President of MIT, at face value, which is probably foolish given the political acumen and moral flexibility of academic administrators. But look at this:

The COVID-19 pandemic is intensifying U.S. concerns about China’s technological strength. Unfortunately, much of the resulting policy debate has centered on ways to limit China’s capacities — when what we need most is a systematic approach to strengthening our own.

Very straightforward. This is what it’s about. Ok. I get it. You have to sell it to the Trump administration. It’s a slam dunk. But then why write this:

The aim of the new directorate is to support fundamental scientific research — with specific goals in mind. This is not about solving incremental technical problems. As one example, in artificial intelligence, the focus would not be on further refining current algorithms, but rather on developing profoundly new approaches that would enable machines to “learn” using much smaller data sets — a fundamental advance that would eliminate the need to access immense data sets, an area where China holds an immense advantage. Success in this work would have a double benefit: seeding economic benefits for the U.S. while reducing the pressure to weaken privacy and civil liberties in pursuit of more “training” data.

This sounds totally dubious to me. There are well known mathematical theorems addressing why learning without data is impossible. The troublesome fact nodded to here is that, because of the political economy of China, it is possible to collect “immense data sets”–specifically about people–without civil liberties issues getting in the way. This presumes that the civil liberties problem with AI is the collection of data from data subjects, not the use of machine learning on those data subjects. But even if you could magically learn about data subjects without collecting data from them, you wouldn’t bypass the civil liberties concerns. Rather, you would have a nightmare world where, even sans data collection, one could act with godly foresight in one’s interventions on the polity. This is a weird fantasy, and I’m pretty sure the only reason it’s written this way is to sell the idea superficially to uncritical readers trying to reconcile the various incoherent narratives around the U.S., technology, and foreign policy.
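
To be specific about the sort of theorem I mean: in the agnostic PAC learning model, any learner that achieves excess error ε with confidence 1 − δ over a hypothesis class of VC dimension d requires

```latex
m(\varepsilon, \delta) \;=\; \Omega\!\left( \frac{d + \ln(1/\delta)}{\varepsilon^{2}} \right)
```

samples, no matter how clever the algorithm. The dependence on data is information-theoretic; only changing the problem (stronger priors, smaller hypothesis classes) reduces it.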

What it’s really all about, of course, is neoliberalism. Dr. Reif is not shy about this:

The bill would also encourage universities to experiment with new ways to help accelerate the process of bringing innovative ideas to the marketplace, either via established companies or startups. At MIT we started The Engine, an independent entity that provides private-sector funding, work space and technical assistance to start-ups that are developing technologies with enormous potential but that require more extensive technical development than typical VCs will fund, from fusion energy to a fast, inexpensive test for COVID-19. Other models may suit other institutions — but the nation needs to encourage many more such efforts, across the country, to reap the full benefits of our federal investment in science.

The implication here is that unless the results of federal investment in the sciences can be privatized, the country does not “reap the full benefits” of the federal investment. This makes the whole idea of a massively expanded federal government program make a lot more sense, politically, because it’s a massive redistribution of funds to, ultimately, Big Tech, who can buy up any successful ‘startups’ without any downside investment risk. And Big Tech now runs the country and has found a way to equate its global market share with national security such that these things are now indistinguishable in any statement of U.S. policy.

This would all be fine, I guess, if not for the fact that science is different from technology in that science is not, and cannot be, a private endeavor. The only way science works is if you have an open vetting process that is constantly arguing with itself and forcing the scientists to reproduce results. This global competition for scientific prestige through the conference and journal systems is what “keeps it honest”, which is precisely what allows it to be credible. (Bourdieu, Science of Science, 2004)

A U.S. strategy since basically the end of World War II has been to lead the scientific field, get first mover advantage on any discoveries, and reap the benefit of being the center of education for global scientific talent through foreign tuition fees and talented immigrants. Then it wields technology transfer as a magic wand for development.

Now this is backfiring a bit because Chinese science students are returning to China to be entrepreneurial there and also to work for the government. The U.S. is discovering that science, being an open system, allows other countries to free-ride, and this is perhaps bothersome to it. The current administration seems to hate the idea of anybody free-riding off of something the U.S. is doing, though in the past those spillover effects (another name for them!) would have been the basis of U.S. leadership. You can’t really have it both ways.

So the renaming of the NSF to the NSTF–with “technology” next to “science”–is concerning because “technology” investment need not be openly vetted. Rather, given the emphasis on go-to-market strategy, it suggests that the scientific norms of reproducibility will be secondary to privatization through intellectual property laws, including trade secrecy. This could be quite bad, because without a disinterested community of people vetting the results, what you’ll probably get is a lot of industrially pre-captured bullshit.

Let’s acknowledge for a minute that the success of most startups has little to do with the quality of the technology made and much to do with path dependency in network growth, marketing, and regulatory arbitrage. If the government starts a VC fund run by engineers with no upside, then that money goes into a bunch of startups which then compete for creative destruction of each other until one, large enough based on its cannibalizing of the others, gets consumed by a FAANG company. It will, in other words, look like Silicon Valley today, which is not terribly efficient at discovery, because success is measured by the market. I.e., because (as Dr. Reif suggests) the return on investment is realized as capital accumulation.

This is all pretty backwards if what you’re trying to do is maintain scientific superiority. Scientific progress requires a functional economy of symbolic capital among scientists operating with intellectual integrity that is “for its own sake”, not operating at the behest of market conquest. The spillover effects and free-riding in science are a feature, not a bug, and they are difficult to reconcile with a foreign policy that is paranoid about technology transfer to its competitors. Indeed, this is one reason why scientists are often aligned with humanitarian causes, world peace, etc.

Science is a good social structure with a lot going for it. I hope the new bill pours more money into it without messing it up too much.

social structure and the private sector

The Human Cell

Academic social scientists leaning towards the public intellectual end of the spectrum love to talk about social norms.

This is perhaps motivated by the fact that these intellectual figures are prominent in the public sphere. The public sphere is where these norms are supposed to solidify, and these intellectuals would like to emphasize their own importance.

I don’t exclude myself from this category of persons. A lot of my work has been about social norms and technology design (Benthall, 2015; Benthall, Gürses and Nissenbaum, 2017).

But I also work in the private sector, and it’s striking how differently things look from that perspective. It’s natural for academics who participate more in the public sphere than the private sector to be biased in their view of social structure. From the perspective of being able to accurately understand what’s going on, you have to think about both at once.

That’s challenging for a lot of reasons, one of which is that the private sector is a lot less transparent than the public sphere. In general, the internals of actors in the private sector are not open to the scrutiny of commentariat onlookers. Information is one of the many resources traded in pairwise interactions; when it is divulged, it is divulged strategically, introducing bias. So it’s hard to get a general picture of the private sector, even though it accounts for a much larger proportion of social structure than the public sphere does. In other words, public spheres are highly over-represented in analyses of social structure due to the availability of public data about them. That is worrisome from an analytic perspective.

It’s well worth making the point that the public/private dichotomy is problematic. Contextual integrity theory (Nissenbaum, 2009) argues that modern society is differentiated among many distinct spheres, each bound by its own social norms. Nissenbaum actually has a quite different notion of norm formation from, say, Habermas. For Nissenbaum, norms evolve over social history, but may be implicit. Contrast this with Habermas’s view that norms are the result of communicative rationality, which is an explicit and linguistically mediated process. The public sphere is a big deal for Habermas. Nissenbaum, a scholar of privacy, rejects the idea of the ‘public sphere’ simpliciter. Rather, social spheres self-regulate, and privacy, which she defines as appropriate information flow, is maintained when information flows according to these multiple self-regulatory regimes.

I believe Nissenbaum is correct on this point of societal differentiation and norm formation. This nuanced understanding of privacy as the differentiated management of information flow challenges any simplistic notion of the public sphere. Does it challenge a simplistic notion of the private sector?

Naturally, the private sector doesn’t exist in a vacuum. In the modern economy, companies are accountable to the law, especially contract law. They have to pay their taxes. They have to deal with public relations and are regulated as to how they manage information flows internally. Employees can sue their employers, etc. So just as the ‘public sphere’ doesn’t permit a total free-for-all of information flow (some kinds of information flow in public are against social norms!), so too does the ‘private sector’ not involve complete secrecy from the public.

As a hypothesis, we can posit that what makes the private sector different is that the relevant social structures are less open in their relations with each other than they are in the public sphere. We can imagine an autonomous social entity like a biological cell. Internally it may have a lot of interesting structure and organelles. Its membrane prevents this complexity leaking out into the aether, or plasma, or whatever it is that human cells float around in. Indeed, this membrane is necessary for the proper functioning of the organelles, which in turn allows the cell to interact properly with other cells to form a larger organism. Echoes of Francisco Varela.

It’s interesting that this may actually be a quantifiable difference. One way of modeling the difference between the internal and external-facing complexity of an entity is using information theory. The more complex internal state of the entity has higher entropy than the membrane. The fact that the membrane causally mediates interactions between the internals and the environment limits information flow between them; this is captured by the Data Processing Inequality. The lack of information flow between the system internals and externals is quantified as lower mutual information between the two domains. At zero mutual information, the two domains are statistically independent of each other.
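
This is easy to check numerically. A minimal sketch, assuming only numpy and invented distributions: an internal state X, a low-dimensional membrane M, and an environment Y form the Markov chain X → M → Y, and the Data Processing Inequality guarantees I(X;Y) ≤ I(X;M).

```python
# Numerical sketch of the cell analogy: internal state X -> membrane M ->
# environment Y is a Markov chain, so I(X;Y) <= I(X;M) by the DPI.
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits, from a joint distribution matrix over (A, B)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

rng = np.random.default_rng(7)
n_x, n_m, n_y = 8, 3, 8  # rich interior, narrow membrane

p_x = rng.dirichlet(np.ones(n_x))                    # internal state
p_m_given_x = rng.dirichlet(np.ones(n_m), size=n_x)  # membrane channel
p_y_given_m = rng.dirichlet(np.ones(n_y), size=n_m)  # environment channel

joint_xm = p_x[:, None] * p_m_given_x  # p(x, m)
joint_xy = joint_xm @ p_y_given_m      # p(x, y), routed through the membrane

print(f"I(X;M) = {mutual_information(joint_xm):.4f} bits")
print(f"I(X;Y) = {mutual_information(joint_xy):.4f} bits (<= I(X;M) by DPI)")
```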

I haven’t worked out all the implications of this.

References

Benthall, S. (2015). Designing Networked Publics for Communicative Action. In J. Davis & N. Jurgenson (Eds.), Theorizing the Web 2014 [Special Issue]. Interface, 1(1).

Benthall, S., Gürses, S., & Nissenbaum, H. (2017). Contextual Integrity through the Lens of Computer Science. Foundations and Trends® in Privacy and Security, 2(1), 1-69. http://dx.doi.org/10.1561/3300000016

Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

“To be great is to be misunderstood.”

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — `Ah, so you shall be sure to be misunderstood.’ — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood. –
Emerson, Self-Reliance

Lately, in my serious scientific work, I’ve again found myself bumping up against the limits of intelligibility. This time, it is intelligibility from within a technical community: one group of scientists who are, I’ve been advised, unfamiliar with another, different technical formalism. As a new entrant, I believe the latter formalism would be useful for understanding the domain of the former group. But to press the point, especially in the context of funders (who need to explain things to their own bosses in very concrete terms), would be unproductive, a waste of precious time.

Reminded by recent traffic of some notes I wrote long ago in frustration at Hannah Arendt, I found something apt about her comments. Science in the mode of what Kuhn calls “normal science” must be intelligible to itself and its benefactors. But that is all. It need not be generally intelligible to other scientists; it need not understand other scientists. It need only be a specialized and self-sustaining practice, a discipline.

Programming (which I still study) is actually quite different from science in this respect. Because software code is a medium used for communication by programmers, and software code is foremost interpreted by a compiler, one relates as a programmer to other programmers differently than the way scientists relate to other scientists. To some extent, the productive formal work has moved over into software, leaving science to be less formal and more empirical. This is, in my anecdotal experience, now true even in the fields of computer science, which were once bastions of formalism.

Arendt’s criticism of scientists, that they should be politically distrusted because “they move in a world where speech has lost its power”, is therefore not precisely true, because scientific operations are, certainly, mediated by language.

But this is normal science. Perhaps the scientists whom Arendt distrusted politically were not normal scientists, but rather those sorts of scientists that were responsible for scientific revolutions. These scientists must not have used language that was readily understood by their peers, at least initially, because they were creating new concepts, new ideas.

Perhaps these kinds of scientists are better served by existentialism, as in Nietzsche’s brand, as an alternative to politics. Or by Emerson’s transcendentalism, which Sloterdijk sees as very spiritually kindred to Nietzsche but more balanced.

Three possibilities of political agency in an economy of control

I wrote earlier about three modes of social explanation: functionality, which explains a social phenomenon in terms of what it optimizes; politics, which explains a social phenomenon in terms of multiple agents working to optimize different goals; and chaos, which explains a social phenomenon in terms of the happenings of chance, independent of the will of any agent.

A couple notes on this before I go on. First, this view of social explanation is intentionally aligned with mathematical theories of agency widely used in what is broadly considered ‘artificial intelligence’ research and even more broadly acknowledged under the rubrics of economics, cognitive science, multi-agent systems research, and the like. I am willfully opting into the hegemonic paradigm here. If years in graduate school at Berkeley have taught me one pearl of wisdom, it’s this: it’s hegemonic for a reason.

A second note is that when I say “social explanation”, what I really mean is “sociotechnical explanation”. This is awkward, because the only reason I have to make this point is because of an artificial distinction between technology and society that exists much more as a social distinction between technologists and–what should one call them?–socialites than as an actual ontological distinction. Engineers can, must, and do constantly engage societal pressures; they must bracket off these pressures in some aspects of their work to achieve the specific demands of engineering. Socialites can, must, and do adopt and use technologies in every aspect of their lives; they must bracket these technologies in some aspects of their lives in order to achieve the specific demands of mastering social fashions. The social scientist, qua socialite who masters specific social rituals, and the technologist, qua engineer who masters a specific aspect of nature, naturally advertise their mastery as autonomous and complete. The social scholar of technology, qua socialite engaged in arbitrage between communities of socialites and communities of technologists, naturally advertises their mastery as an enlightened view over and above the advertisements of the technologists. To the extent this is all mere advertising, it is all mere nonsense. Currency, for example, is surely a technology; it is also surely an artifact of socialization as much if not more than it is a material artifact. Since the truly ancient invention of currency and its pervasiveness through the fabric of social life, there has been no society that is not sociotechnical, and there has been no technology that is not sociotechnical. A better word for the sociotechnical would be one that indicates its triviality, how it actually carries no specific meaning at all. It signals only that one has matured to the point that one disbelieves advertisements. We are speaking scientifically now.

With that out of the way…I have proposed three modes of explanation: functionality, politics, and chaos. They refer to specific distributions of control throughout a social system. The first refers to the capacity of the system for self-control. The second refers to the capacity of the components of the system for self-control. The third refers to the absence of control.

I’ve written elsewhere about my interest in the economy of control, or in economies of control, plurally. Perhaps the best way to go about studying this would be an in depth review of the available literature on information economics. Sadly, I am at this point a bit removed from this literature, having gone down a number of other rabbit holes. In as much as intellectual progress can be made by blazing novel trails through the wilderness of ideas, I’m intent on documenting my path back to the rationalistic homeland from which I’ve wandered. Perhaps I bring spices. Perhaps I bring disease.

One of the questions I bring with me is the question of political agency. Is there a mathematical operationalization of this concept? I don’t know it. What I do know is that it is associated most with the political mode of explanation, because this mode of explanation allows for the existence of politics, by which I mean agents engaged in complex interactions for their individual and sometimes collective gain. Perhaps it is the emergent dynamics of individuals’ shifting constitution as collectives that best captures what is interesting about politics. These collectives serve functions, surely, but what function? Is it a function with any permanence or real agency? Or is it a specious functionality, only a compromise of the agents that compose it, ready to be sabotaged by a defector at any moment?

Another question I’m interested in is how chaos plays a role in such an economy of control. There is plenty of evidence to suggest that entropy in society, far from being a purely natural consequence of thermodynamics, is a deliberate consequence of political activity. Brunton and Nissenbaum have recently given the name obfuscation to some kinds of political activity that are designed to mislead and misdirect. I believe this is not the only reason why agents in the economy of control work actively to undermine each other’s control. To some extent, the distribution of control over social outcomes is zero-sum. It is certainly so at the Pareto boundary of such distributions. But I posit that part of what makes economies of control interesting is that they have a non-Euclidean geometry that confounds the simple aggregations that make Pareto optimality a useful concept within it. Whether this hunch can be put persuasively remains to be seen.

What I may be able to say now is this: there is a sense in which political agency in an economy of control is self-referential, in that what is at stake for each agent is not utility defined exogenously to the economy, but rather agency defined endogenously to the economy. This gives economic activity within it a particularly political character. For purposes of explanation, this enables us to consider three different modes of political agency (or should I say political action), corresponding to the three modes of social explanation outlined above.

A political agent may concern itself with seizing control. It may take actions which are intended to direct the functional orientation of the total social system of which it is a part to be responsive to its own functional orientation. One might see this narrowly as adapting the total system’s utility function to be in line with one’s own, but this is to partially miss the point. It is to align the agency of the total system with one’s own, or to make the total system a subsidiary to one’s agency. (This demands further formalization.)
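
Purely as a sketch of how that formalization might start (the notation here is mine and provisional): suppose the total system selects its action in each state by a choice rule, and define seizure of control by agent i as the condition that those choices track i’s utility in every state:

```latex
a^{*}(s) \in \arg\max_{a} U_{\mathrm{sys}}(a, s)
\qquad \text{and} \qquad
\forall s:\; a^{*}(s) \in \arg\max_{a} U_{i}(a, s).
```

This is weaker than setting U_sys = U_i: the total system keeps its own nominal utility function while acting, in every state, as a subsidiary of agent i’s agency.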

A political agent may instead be concerned with interaction with other agents in a less commanding way. I’ll call this negotiation for now. The autonomy of other agents is respected, but the political agent attempts a coordination between itself and others for the purpose of advancing its own interests (its own agency, its own utility). This is not a coup d’etat. It’s business as usual.

A political agent can also attempt to actively introduce chaos into its own social system. This is sabotage. It is an essentially disruptive maneuver. It is action aimed to cause the death of function and bring about instead emergence, which is the more positive way of characterizing the outcomes of chaos.