Category: academia

Fascinated by Vijay Narayanan’s talk at #DataEDGE

As I write this I’m watching Vijay Narayanan, Director of Algorithms and Data Science Solutions at Microsoft, give a talk at the DataEDGE conference at UC Berkeley.

The talk is about “The Data Science Economy.” It began with a history of the evolution of the human central nervous system. He then went on to show the centralizing trend of the data economy: data collection will become more mobile, while data processing will be done in the cloud. This data will be sifted by software and used to power a marketplace of services, which ultimately deliver intelligence to their users.

It was wonderful to see somebody so in the know reaffirming what has been a suspicion I’ve had since starting graduate school but have found little support for in the academic setting. The suspicion is that what’s needed to accurately model the data science economy is a synthesis of cognitive science and economics that can show the comparative market value and competitiveness of different services.

This is not out of the mainline of information technology, management science, computer science, and other associated disciplines that have been at the nexus of business and academia for 70 years. It’s an intellectual tradition that’s rooted in the 1940’s cybernetics vision of Norbert Wiener and was going strong in the social sciences as late as Beniger’s The Control Revolution, which, like Narayanan, draws an explicit connection between information processing in the brain and information processing in the microprocessor–notably while acknowledging the intermediary step of bureaucracy as a large-scale information processing system.

There’s significant cross-pollination between engineering, economics, computer science, and cognitive psychology. I’ve read papers from, say, the Education field in the late 80’s and early 90’s that refer to this collectively as “the dominant paradigm”. At UC Berkeley today, it’s fascinating to see departmental politics play out over ‘data science’ that echo some of these concerns: a powerful alliance of ideas is being mobilized by industry and governments while other disciplines struggle to find relevance.

It’s possible that these specialized disciplinary discourses are important for the cultivation of thought that is valuable for its insight despite being fundamentally impractical. I’m coming to a different view: that maybe the ‘dominant paradigm’ is dominant because it is scientifically true, and that other disciplinary orientations are suffering because they are based on unsound theory. If disciplines that are ‘dominated’ by another paradigm are floundering because they are, to put it simply, wrong, then that is a very elegant explanation for what’s going on.

The ramification of this is that what’s needed is not a number of alternatives to ‘the dominant paradigm’. What’s needed is that scholars double down on the dominant paradigm and learn how to express in its logic the complexities and nuances that the other disciplines have been designed to capture. What we can hope for, in terms of intellectual continuity, is the preservation of what’s best of older ideas in a creative synthesis with the foundational principles of computer science and mathematical biology.

going post-ideology

I’ve spent a lot of my intellectual life in the grips of ideology.

I’m glad to be getting past all of that. That’s one reason why I am so happy to be part of Glass Bead Labs.

Glass Bead Labs

There are a lot of people who believe that it’s impossible to get beyond ideology. They believe that all knowledge is political and nothing can be known with true clarity.

I’m excited to have an opportunity to try to prove them wrong.

data science and the university

This is by now a familiar line of thought but it has just now struck me with clarity I wanted to jot down.

  1. Code is law, so the full weight of human inquiry should be brought to bear on software system design.
  2. (1) has been understood by “hackers” for years but has only recently been accepted by academics.
  3. (2) is due to disciplinary restrictions within the academy.
  4. (3) is due to the incentive structure of the academy.
  5. Since there are incentive structures for software development that are not available for subjects whose primary research project is writing, the institutional conditions that are best able to support software work and academic writing work are different.
  6. Software is a more precise and efficacious way of communicating ideas than writing because its interpretation is guaranteed by programming language semantics.
  7. Because of (6), there is selective pressure toward making software the lingua franca of scholarly work.
  8. (7) is inducing a cross-disciplinary paradigm shift in methods.
  9. (8) may induce a paradigm shift in theoretical content, or it may result in science whose contents are tailored to the efficient execution of adaptive systems. (This is not to say that such systems are necessarily atheoretic, just that they are subject to different epistemic considerations).
  10. Institutions are slow to change. That’s what makes them institutions.
  11. By (5), (7), and (9), the role of universities as the center of research is being threatened existentially.
  12. But by (1), the myriad intellectual threads currently housed in universities are necessary for software system design, or are at least potentially important.
  13. With (11) and (12), a priority is figuring out how to manage a transition to software-based scholarship without information loss.

a brief comment on feminist epistemology

One funny thing about having a blog is that I can tell when people are interested in particular posts through the site analytics. To my surprise, this post about Donna Haraway has been getting an increasing number of hits each month since I posted it. That is an indication that it has struck a chord, since steady exogenous growth like that is actually quite rare.

It is just possible that this means that people interested in feminist epistemology have been reading my blog lately. They probably have correctly guessed that I have not been the biggest fan of feminist epistemology because of concerns about bias.

But I’d like to take the opportunity to say that my friend Rachel McKinney has been recommending I read Elizabeth Anderson’s stuff if I want to really get to know this body of theory. Since Rachel is an actual philosopher and I am an amateur who blogs about it on weekends, I respect her opinion on this a great deal.

So today I started reading through Anderson’s Stanford Encyclopedia of Philosophy article on Feminist Epistemology and I have to say I think it’s very good. I like her treatment of the situated knower. It’s also nice to learn that there are alternative feminist epistemologies to certain standpoint theories that I think are troublesome. In particular, it turns out that those standpoint theories are now considered by feminist philosophers to be from a brief period in the 80’s that they’ve moved past already! Now subaltern standpoints are considered privileged in terms of discovery more than privileged in terms of justification.

This position is certainly easier to reconcile with computational methods. For example, it’s in a sense just mathematically correct if you think about it in terms of information gain from a sample. This principle appears to have been rediscovered in a way recently by the equity-in-data-science people when people talk about potential classifier error.
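The information-gain reading can be made concrete with a small numerical sketch. Everything here is invented for illustration (the counts, the uniform prior, the grid approximation): under a Beta-Bernoulli model of some unknown group-specific rate, one more observation from an under-sampled group carries more expected information about that group’s rate than one more observation from a heavily sampled group.

```python
import math

def posterior(grid, successes, failures):
    """Discretized Beta(successes+1, failures+1) posterior on a grid."""
    weights = [p**successes * (1 - p)**failures for p in grid]
    z = sum(weights)
    return [w / z for w in weights]

def entropy(dist):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(w * math.log2(w) for w in dist if w > 0)

def expected_info_gain(successes, failures, n=1001):
    """Mutual information between the next observation and the unknown rate:
    H(posterior) - E_outcome[H(posterior | outcome)]."""
    grid = [i / (n + 1) for i in range(1, n + 1)]
    prior = posterior(grid, successes, failures)
    h_prior = entropy(prior)
    p_success = sum(w * p for w, p in zip(prior, grid))
    post_if_success = posterior(grid, successes + 1, failures)
    post_if_failure = posterior(grid, successes, failures + 1)
    h_post = (p_success * entropy(post_if_success)
              + (1 - p_success) * entropy(post_if_failure))
    return h_prior - h_post

# A group we've sampled heavily vs. one we've barely sampled:
well_sampled = expected_info_gain(500, 500)
under_sampled = expected_info_gain(5, 5)
assert under_sampled > well_sampled > 0
```

The asymmetry is the whole point: where the existing data is thin, a new sample reduces more uncertainty, which is one way to cash out “privileged in terms of discovery” without any claim about justification.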

I’ve got some qualms about the articulation of this learning principle in the absence of a particular inquiry or decision problem because I think there’s still a subtle shift in the argumentation from logos to ethos embedded in there (I’ve been seeing things through the lens of Aristotelian rhetoric lately and it’s been surprisingly illuminating). I’m on the lookout for a concrete application of where this could apply in a technical domain, as opposed to as an articulation of a political affinity or anxiety in the language of algorithms. I’d be grateful for links in the comments.


Wait, maybe I already built one. I am not sure if that really counts.

Horkheimer and Wiener

[I began writing this weeks ago and never finished it. I’m posting it here in its unfinished form just because.]

I think I may be condemning myself to irrelevance by reading so many books. But as I make an effort to read up on the foundational literature of today’s major intellectual traditions, I can’t help but be impressed by the richness of their insight. Something has been lost.

I’m currently reading Norbert Wiener’s The Human Use of Human Beings (1950) and Max Horkheimer’s Eclipse of Reason (1947). The former I am reading for the Berkeley School of Information Classics reading group. Norbert Wiener was one of the foundational mathematicians of 20th century information technology, a colleague of Claude Shannon. Out of his own sense of social responsibility, he articulated his predictions for the consequences of the technology he developed in Human Use. This work was the foundation of cybernetics, an influential school of thought in the 20th century. Terrell Bynum, in his Stanford Encyclopedia of Philosophy article on “Computer and Information Ethics”, attributes to Wiener’s cybernetics the foundation of all future computer ethics. (I think that the threads go back earlier, at least through to Heidegger’s Question Concerning Technology.) It is hard to find a straight answer to the question of what happened to cybernetics. By some reports, the artificial intelligence community cut off its NSF funding in the 60’s.

Horkheimer is one of the major thinkers of the very influential Frankfurt School, the postwar social theorists at the core of intellectual critical theory. Of the Frankfurt School, perhaps the most famous in the United States is Adorno. Adorno is also the most caustic and depressed, and unfortunately much of popular critical theory now takes on his character. Horkheimer is more level-headed. Eclipse of Reason is an argument about the ways that philosophical empiricism and pragmatism became complicit in fascism.

It is very interesting to read them side by side. Published only a few years apart, Wiener and Horkheimer are giants of two very different intellectual traditions. There’s little reason to expect they ever communicated (a more thorough historian would know more). But each makes sweeping claims about society, language, and technology and contextualizes them in broader intellectual awareness of religion, history and science.

Horkheimer writes about how the collapse of the Enlightenment project of objective reason has opened the way for a society ruled by subjective reason, which he characterizes as the reason of formal mathematics and scientific thinking that is neutral to its content. It is instrumental thinking in its purest, most rigorous form. His descriptions of it sound like gestures to what we today call “data science”–a set of mechanical techniques that we can use to analyze and classify anything, perfecting our understanding of technical probabilities towards whatever ends one likes.

I find this a more powerful critique of data science than recent paranoia about “algorithms”. It is frustrating to read something over sixty years old that covers the same ground as we are going over again today but with more composure. Mathematized reasoning about the world is an early 20th century phenomenon and automated computation a mid-20th century phenomenon. The disparities in power that result from the deployment of these tools were thoroughly discussed at the time.

But today, at least in my own intellectual climate, it’s common to hear a mention of “logic” met with the rebuttal “whose logic?”. Multiculturalism and standpoint epistemology, profoundly important for sensitizing researchers to bias, are taken to an extreme that glorifies technical ignorance. If the foundation of knowledge is in one’s lived experience, as these ideologies purport, and one does not understand the technical logic used so effectively by dominant identity groups, then one can dismiss technical logic as merely a cultural logic of an opposing identity group. I experience the technically competent person as the Other and cannot perceive their actions as skill but only as power and in particular power over me. Because my lived experience is my surest guide, what I experience must be so!

It is simply tragic that the education system has promoted this kind of thinking so much that it pervades even mainstream journalism. This is tragic for reasons I’ve expressed in “objectivity is powerful”. One solution is to provide more accessible accounts of the lived experience of technicality through qualitative reporting, which I have attempted in “technical work”.

But the real problem is that the kind of formal logic that is at the foundation of modern scientific thought, including its most recent manifestation ‘data science’, is at its heart perfectly abstract and so cannot be captured by accounts of observed practices or lived experience. It is reason or thought. Is it disembodied? Not exactly. But at least according to constructivist accounts of mathematical knowledge, which occupy a fortunate dialectical position in this debate, mathematical insight is built from embodied phenomenological primitives but is, by its psychological construction, abstract. This process makes it possible for people to learn abstract principles such as the mathematical theory of information on which so much of the contemporary telecommunications and artificial intelligence apparatus depends. These are the abstract principles with which the mathematician Norbert Wiener was so intimately familiar.

Horkheimer, pragmatism, and cognitive ecology

In Eclipse of Reason, Horkheimer rips into the American pragmatists Peirce, James, and Dewey like nobody I’ve ever read. Normally seen as reasonable and benign, Horkheimer paints these figures as ignorant and undermining of the whole social order.

The reason is that he believes that they reduce epistemology to a kind of instrumentalism. But that’s selling their position a bit short. Dewey’s moral epistemology is pragmatist in that it is driven by particular, situated interests and concerns, but these are ingredients to moral inquiry and not conclusions in themselves.

So to the extent that Horkheimer is looking to dialectic reason as the grounds to uncovering objective truths, Dewey’s emphasis on establishing institutions that allow for meaningful moral inquiry seems consistent with Horkheimer’s view. The difference is in whether the dialectics are transcendental (as for Kant) or immanent (as for Hegel?).

The tension around objectivity in epistemology in the present academic environment is that all claims to objectivity are necessarily situated, and this situatedness is raised as a challenge to their objective status. If the claims or their justification depend on conditions that exclude some subjects (as they no doubt do; determining whether dialectical reason is transcendental or immanent requires opportunities for reflection that are rare–privileged), can these conclusions be said to be true for all subjects?

The Friendly AI research program more or less assumes that yes, this is the case. Yudkowsky’s notion of Coherent Extrapolated Volition–the position arrived at by simulated, idealized reasoners–is a 21st century remake of Peirce’s limiting consensus of the rational. And yet the cry from standpoint theorists and certain anthropologically inspired disciplines is a recognition of the validity of partial perspectives. Haraway, for example, calls for an alliance of partial perspectives. Critical and adversarial design folks appear to have picked up this baton. Their vision is of a future of constantly vying (“agonistic”) partiality, with no perspective presuming to be settled, objective or complete.

If we make cognitivist assumptions about the computationality of all epistemic agents, then we are forced to acknowledge the finiteness of all actually existing reasoning. Finite capacity and situatedness become two sides of the same coin. Partiality, then, becomes a function both of one’s place in the network (eccentricity vs. centrality) and of capacity to integrate information from the periphery. Those locations in the network most able to valuably integrate information, whether they be Google’s data centers or the conversational hubs of research universities, are more impartial, more objective. But they can never be the complete system. Because of their finite capacity, their representations can at best be lossy compressions of the whole.
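The lossy-compression point is a straightforward consequence of information theory, and a toy sketch (with invented numbers) makes it vivid: any hub whose summary has fewer distinguishable states than the world it represents can retain at most the log of its codebook size, in bits, no matter how cleverly it compresses.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy in bits of an empirical distribution given by counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

# A 'world' with 8 equally likely states carries 3 bits of entropy.
world = list(range(8))
h_world = entropy(Counter(world).values())        # 3.0 bits

# A hub limited to 4 codewords can retain at most log2(4) = 2 bits,
# regardless of which deterministic summary it uses.
capacity = math.log2(4)
summary = [state % 4 for state in world]          # one possible lossy compression
h_retained = entropy(Counter(summary).values())   # 2.0 bits

assert h_retained <= capacity < h_world
```

The bound holds for any mapping into the four codewords; the choice of `state % 4` is arbitrary. The epistemic hub’s “objectivity” is then a matter of how well it spends its finite bits, not of whether it escapes finiteness.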

Horkheimer dreams of an objective truth obtainable by a single subject through transcendental dialectic. Perhaps he thinks this is unattainable today (I have to read on). But if there’s hope in this vision, it seems to me it must come from one of two possibilities:

  • The fortuitously fractal structure of the sociotechnical world such that an adequate representation of it can be maintained in its epistemic hubs through quining, or
  • A generative grammar or modeling language of cognitive ecology such that we can get insights into the larger interactive system from toy models, and apply these simplified models pragmatically in specific cases. For this to work and not suffer the same failures as theoretical economics, these models need to have empirical content. Something like Wolpert, Lee, and Bono’s Predictive Game Theory (for which I just discovered they’ve released a Python package…cool!) may be critical here.

Eclipse of Reason

I’m starting to read Max Horkheimer’s Eclipse of Reason. I have had high hopes for it and have not been disappointed.

The distinction Horkheimer draws in the first section, “Means and Ends”, is between subjective reason and objective reason.

Subjective reason is the kind of reasoning that is used to most efficiently achieve one’s goals, whatever they are. Writing even as early as 1947, Horkheimer notes that subjective reason has become formalized and reduced to the computation of technical probabilities. He is referring to the formalization of logic in the Anglophone tradition by Russell and Whitehead and its use in early computer science, most likely. (See Imre Lakatos and programming as dialectic for more background on this, as well as resonant material on where this is going.)

Objective reason is, within a simple “means/ends” binary, most simply described as the reasoning of ends. I am not very far through the book and Horkheimer is so far unspecific about what this entails in practice but instead articulates it as an idea that has fallen out of use. He associates it with Platonic forms. With logos–a word that becomes especially charged for me around Christmas and whose religious connotations are certainly intertwined with the idea of objectivity. Since it is objective and not bound to a particular subject, the rationality of correct ends is the rationality of the whole world or universe, its proper ordering or harmony. Humanity’s understanding of it is not a technical accomplishment so much as an achievement of revelation or wisdom arrived at–and I think this is Horkheimer’s Hegelian/Marxist twist–dialectically.

Horkheimer in 1947 believes that subjective reason, and specifically its formalization, has undermined objective reason by exposing its mythological origins. While we have countless traditions still based in old ideologies that give us shared values and norms simply out of habit, they have been exposed as superstition. And so while our ability to achieve our goals has been amplified, our ability to have goals with intellectual integrity has hollowed out. This is a crisis.

One reason this is a crisis is because (to paraphrase) the functions once performed by objectivity or authoritarian religion or metaphysics are now taken on by the reifying apparatus of the market. This is a Marxist critique that is apropos today.

It is not hard to see that Horkheimer’s critique of “formalized subjective reason” extends to the wide use of computational statistics or “data science” as it is now practiced. Moreover, it’s easy to see how the “Internet of Things” and everything else instrumented–the Facebook user interface, this blog post, everything else–participates in this reifying market apparatus. Every critique of the Internet and the data economy from the past five years has just been a reiteration of Horkheimer, whose warning came loud and clear in the 40’s.

Moreover, the anxieties of the “apocalyptic libertarians” of Sam Frank’s article, the Less Wrong theorists of friendly and unfriendly Artificial Intelligence, are straight out of the old books of the Frankfurt School. Ironically, today’s “rationalists” have no awareness of the broader history of rationality. Rather, their version of rationality begins with von Neumann, and ends with two kinds of rationality: “epistemic rationality”, about determining correct beliefs, and “instrumental rationality”, about correctly reaching one’s ends. Both are formal and subjective, in Horkheimer’s analysis; they don’t even have a word for ‘objective reason’, it has so far fallen away from their awareness of what is intellectually possible.

But the consequence is that this same community lives in fear of the unfriendly AI–a superintelligence driven by a “utility function” so inhuman that it creates a dystopia. Unarmed with the tools of Marxist criticism, they are unable to see the present economic system as precisely that inhuman superintelligence, a monster bricolage of formally reasoning market apparati.

For Horkheimer (and I’m talking out of my butt a little here because I haven’t read enough of the book to really know; I’m going on some context I’ve read up on earlier) the formalization and automation of reason is part of the problem. Having a computer think for you is very different from actually thinking. The latter is psychologically transformative in ways that the former is not. It is hard for me to tell whether Horkheimer would prefer things to go back the way they were, or if he thinks that we must resign ourselves to a bleak inhuman future, or what.

My own view, which I am worried is deeply quixotic, is that a formalization of objective reason would allow us to achieve its conditions faster. You could say I’m a logos-accelerationist. However, if the way to achieve objective reason is dialectically, then this requires a mathematical formalization of dialectic. That’s shooting the moon.

This is not entirely unlike the goals and position of MIRI in a number of ways except that I think I’ve got some deep intellectual disagreements about their formulation of the problem.

Reflecting on “Technoscience and Expressionism” by @FractalOntology

I’ve come across Joseph Weissman’s (@FractalOntology) “Technoscience and Expressionism” and am grateful for it, as it’s filled me in on a philosophical position that I missed the first time around, accelerationism. I’m not a Deleuzian and prefer my analytic texts to plod, so I can’t say I understood all of the essay. On the other hand, I gather the angle of this kind of philosophizing is intentionally psychotherapeutic and hence serves an artistic/literary function rather than one that explicitly guides praxis.

I am curious about the essay because I would like to see a thorough analysis of the political possibilities for the 21st century that gets past 20th century tropes. The passions of journalistic and intellectual debate have an atavistic tendency due to a lack of imagination that I would like to avoid in my own life and work.

Accelerationism looks new. It was pronounced in a manifesto, which is a good start.

Here is a quote from it:

Democracy cannot be defined simply by its means — not via voting, discussion, or general assemblies. Real democracy must be defined by its goal — collective self-​mastery. This is a project which must align politics with the legacy of the Enlightenment, to the extent that it is only through harnessing our ability to understand ourselves and our world better (our social, technical, economic, psychological world) that we can come to rule ourselves. We need to posit a collectively controlled legitimate vertical authority in addition to distributed horizontal forms of sociality, to avoid becoming the slaves of either a tyrannical totalitarian centralism or a capricious emergent order beyond our control. The command of The Plan must be married to the improvised order of The Network.

Hell yeah, the Enlightenment! Sign me up!

The manifesto calls for an end to the left’s emphasis on local action, transparency, and direct democracy. Rather, it calls for a muscular hegemonic left that fully employs and deploys “technoscience”.

It is good to be able to name this political tendency and distinguish it from other left tendencies. It is also good to distinguish it from “right accelerationism”, which Weissman identifies with billionaires who want to create exurb communities.

A left-accelerationist impulse is today playing out dramatically against a right-accelerationist one. And the right-accelerationists are about as dangerous as you may imagine. With Silicon Valley VCs, and libertarian technologists more generally, reading Nick Land on geopolitical fragmentation, the reception or at least receptivity to hard-right accelerants seems problematically open (and the recent $2M campaign proposing the segmentation of California into six microstates seems to provide some evidence for this). Billionaires consuming hard-right accelerationist materials arguing for hyper-secessionism undoubtedly amounts to a critically dangerous situation. I suspect that the right-accelerationist materials, perspectives, affect, and energy express a similar shadow, if they are not partly what is catalyzing the resurgence of micro-fascisms elsewhere (and macro ones as well). Perhaps most significant to my mind here is the overlap of right-acceleration with white nationalism, and more generally what is deplorably and disingenuously called “race realism”–which is of course simply racism. Consider Marine Le Pen’s fascist Front National, which recently won 25% of the French vote in the European Parliament elections, or UKIP’s resurgence in Great Britain; while we may not hear accelerationist allegiances and watchwords explicitly, the political implications and continuity are at the very least somewhat unsettling.

There is an unfortunate conflation of several different points of view here. It is too easy to associate racism, wealth, and libertarianism, as these are the nightmares of the left’s political imagination. If ideological writing is therapeutic, a way of articulating one’s dreams, then this is entirely appropriate, with a caveat: every nightmare is a creation of one’s own psychology more than a reflection of the real world.

The same elisions are made by Sam Frank in his recent article thematizing Silicon Valley libertarianism, friendly artificial intelligence research, and contemporary rationalism as a self-help technique. There are interesting organizational ties between these institutions that are validly worth investigating but it would be lazy to collapse vast swathes of the intellectual spectrum into binaries.

In March 2013 I wrote about the Bay Area Rationalists:

There is a good story here, somewhere. If I were a journalist, I would get in on this and publish something about it, just because there is such a great opportunity for sensationalist exploitation.

I would like to say “I called it”–Sam Frank has recently written just such a sensationalist, exploitative piece in Harper’s Magazine. It is thoroughly enjoyable and I wouldn’t say it’s inaccurate. But I don’t think this is the best way to get to know these people. A better one is to attend a CFAR workshop. It used to be that you could avoid the fee with a promise to volunteer, and there was a money-back guarantee that extended to one’s promise to volunteer. If that’s still the case, then one can essentially attend for free.

Another way to engage this community intellectually, which I would encourage the left accelerationists to do because it’s interesting, is to start participating on LessWrong. For some reason this community is not subject to ideological raids like so many other community platforms. I think it could stand for an influx of Deleuze.

Ultimately the left/right divide comes down to a question of distribution of resources and/or surplus. Left accelerationist tactics appear from here to be a more viable way of seizing resources than direct democracy. However, the question is whether accelerationist tactics inevitably result in inequalities that create control structures of the kind originally objected to. In other words, this may simply be politics as usual and nothing radical at all.

So there’s an intersection between these considerations (accelerationist vs. … decelerationism? Capital accumulation vs. capital redistribution?) and the question of decentralization of decision-making process (is that the managerialism vs. multistakeholderism divide?) whose logic is unclear to me. I want to know which affinities are necessary and which are merely contingent.

Discourse theory of law from Habermas

There has been at least one major gap in my understanding of Habermas’s social theory which I’m just filling now. The position Habermas reaches towards the end of Theory of Communicative Action vol. 2 and develops further in Between Facts and Norms (1992) is the discourse theory of law.

What I think went on is that Habermas eventually gave up on deliberative democracy in its purest form. After a career of scholarship about the public sphere, the ideal speech situation, and communicative action–fully developing the lifeworld as the ground for legitimate norms–he eventually had to make a concession to “the steering media” of money and power as necessary for the organization of society at scale. But at the intersection between lifeworld and system is law. Law serves as a transmission belt between legitimate norms established by civil society and “system”; at its best it is both efficacious and legitimate.

Law is ambiguous: it can serve legitimate citizen interests united in communicative solidarity, but it can also serve powerful interests. Still, it’s where the action is, because it’s where Habermas sees the ability of the lifeworld to counter-steer the whole political apparatus towards legitimacy, including shifting the balance of power between lifeworld and system.

This is interesting because:

  • Habermas is like the last living heir of the Frankfurt School mission and this is a mature and actionable view nevertheless founded in the Critical Theory tradition.
  • If you pair it with Lessig’s Code is Law thesis, you get a framework for thinking about how technical mediation of civil society can be legitimate but also efficacious. I.e., code can be legitimized discursively through communicative action. Arguably, this is how a lot of open source communities work, as well as standards bodies.
  • Thinking about managerialism as a system of centralized power that provides a framework of freedoms within it, Habermas seems to be presenting an alternative model where law or code evolves with the direct input of civil stakeholders. I’m fascinated by where Nick Doty’s work on multistakeholderism in the W3C is going and think there’s an alternative model in there somewhere. There’s a deep consistency in this, noted a while ago (2003) by Froomkin but largely unacknowledged as far as I can tell in the Data and Society or Berkman worlds.

I don’t see in Habermas anything about funding the state. That would mean acknowledging military force and the power to tax. But this is progress for me.


Zurn, Christopher. “Discourse Theory of Law.” In Jürgen Habermas: Key Concepts, edited by Barbara Fultner.

Some research questions

Last week was so interesting. Some weeks you just get exposed to so many different ideas that it’s a struggle to integrate them. I tried to articulate what’s been coming up as a result. It’s several difficult questions.

  • Assuming trust is necessary for effective context management, how does one organize sociotechnical systems to provide social equity in a sustainable way?
  • Assuming an ecology of scientific practices, what are appropriate selection mechanisms (or criteria)? Are they transcendent or immanent?
  • Given the contradictory character of emotional reality, how can psychic integration occur without rendering one dead or at least very boring?
  • Are there limitations of the computational paradigm imposed by data science as an emerging pan-constructivist practice coextensive with the limits of cognitive or phenomenological primitives?

Some notes:

  • I think that two or three of these questions above may be in essence the same question. In that they can be formalized into the same mathematical problem, and the solution is the same in each case.
  • I really do have to read Isabelle Stengers and Nancy Nersessian. Based on the signals I’m getting, they seem to be the people most on top of their game in terms of understanding how science happens.
  • I’ve been assuming that trust relations are interpersonal but I suppose they can be interorganizational as well, or between a person and an organization. This gets back to a problem I struggle with in a recurring way: how do you account for causal relationships between a macro-organism (like an organization or company) and a micro-organism? I think it’s when there are entanglements between these kinds of entities that we are inclined to call something an “ecosystem”, though I learned recently that this use of the term bothers actual ecologists (no surprise there). The only things I know about ecology are from reading Ulanowicz papers, but those have been so on point and beautiful that I feel I can proceed with confidence anyway.
  • I don’t think there’s any way to get around having at least a psychological model to work with when looking at these sorts of things. A recurring and promising angle is that of psychic integration. Carl Jung, who has inspired clinical practices that I can personally vouch for, and Gregory Bateson both understood the goal of personal growth to be integration of disparate elements. I’ve learned recently from Turner’s The Democratic Surround that Bateson was a more significant historical figure than I thought, unless Turner’s account of history is a glorification of intellectuals that appeal to him, which is entirely possible. Perhaps more importantly to me, Bateson inspired Ulanowicz, and so these theories are compatible; Bateson was also a cyberneticist following Wiener, who was prescient and either foundational to contemporary data science or a good articulator of its roots. But there is also a tie-in to constructivist epistemology. DiSessa’s epistemology, building on Piaget but embracing what he calls the computational metaphor, understands the learning of math and physics as the integration of phenomenological primitives.
  • The purpose of all this is ultimately protocol design.
  • This does not pertain directly to my dissertation, though I think it’s useful orienting context.
