Digifesto


Notes on Krusell & Smith, 1998 and macroeconomic theory

I’m orienting towards a new field through my work on HARK. A key paper in this field is Krusell and Smith, 1998 “Income and wealth heterogeneity in the macroeconomy.” The learning curve here is quite steep. These are, as usual, my notes as I work with this new material.

Krusell and Smith are approaching the problem of macroeconomic modeling on a broad foundation. Within this paradigm, the economy is imagined as a large collection of people/households/consumers/laborers. These exist at a high level of abstraction and are imagined to be intergenerationally linked. A household might be an immortal dynasty.

There is only one good: capital. Capital works in an interesting way in the model. It is produced every time period by a combination of labor and other capital. It is distributed to the households, apportioned as both a return on household capital and as a wage for labor. It is also consumed each period, for the utility of the households. So all the capital that exists does so because it was created by labor in a prior period, but then saved from immediate consumption, then reinvested.

In other words, capital in this case is essentially money. All other “goods” are abstracted away into this single form of capital. The key thing about money is that it can be saved and reinvested, or consumed for immediate utility.

Households can also labor, when they have a job. There is an unemployment rate, and in the model households are uniformly likely to be employed or not, no matter how much money they have. The wage return on labor is determined by an aggregate economic productivity function. There are good and bad economic periods, determined exogenously and randomly, and employment rates move accordingly. One major impetus for saving is insurance against bad times.
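As a concrete sketch of the production side (my own illustration: the paper uses a Cobb-Douglas aggregate production function, but the parameter values here are made up, not the paper’s calibration):

```python
# Sketch of a Krusell-Smith-style production side: output is produced from
# capital and labor, and factor prices are the marginal products.
# Parameter values are illustrative, not the paper's calibration.
ALPHA = 0.36  # capital's share of output

def factor_prices(K, L, z):
    """Wage and rental rate implied by Cobb-Douglas output Y = z * K**ALPHA * L**(1-ALPHA)."""
    Y = z * K**ALPHA * L**(1 - ALPHA)
    wage = (1 - ALPHA) * Y / L  # marginal product of labor
    rent = ALPHA * Y / K        # marginal product of capital
    return wage, rent

# Aggregate productivity z alternates exogenously between good and bad states,
# and the employment rate (hence aggregate labor L) moves with it.
z_good, z_bad = 1.01, 0.99
print(factor_prices(K=40.0, L=0.96, z=z_good))
print(factor_prices(K=40.0, L=0.90, z=z_bad))
```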

The problem raised by Krusell and Smith with this, what they call their ‘baseline model’, is that because all households are the same, the equilibrium distribution of wealth is far too even compared with realistic data: more normally distributed than log-normally distributed. This is implicitly a critique of all prior macroeconomics built on the “representative agent” assumption, in which all agents are represented by one agent, so every agent ends up approximately as wealthy as every other.

Obviously, this is not the case. This work was done in the late 1990s, when the topic of wealth inequality was not nearly as front-and-center as it is in, say, today’s election cycle. It’s interesting that one reason it might not have been front and center is that, prior to 1998, mainstream macroeconomic theory didn’t have an account of how there could be such inequality.

The Krusell-Smith model’s explanation for inequality is, it must be said, a politically conservative one. They introduce minute differences in the utility discount factor. The discount factor is how much you value future utility relative to today’s utility. If you have a low discount factor, you discount the future heavily and want to consume more today. If you have a high discount factor, you’re more willing to save for tomorrow.

Krusell and Smith show that teeny-tiny differences in the discount factor, even if they follow a persistent random process around a common mean within households, lead to huge wealth disparities. Their conclusion is that “poor households are poor because they’ve chosen to be poor”, by not saving more for the future.
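To get a feel for the mechanism, here is a toy simulation of my own (a crude save-a-fixed-fraction rule, not the paper’s dynamic programming solution): households consume a fraction (1 − β) of cash on hand and save the rest at a fixed interest rate. Discount factors that differ only in the third decimal place produce order-of-magnitude differences in long-run wealth.

```python
import numpy as np

# Toy illustration, not Krusell and Smith's actual solution method:
# each household consumes (1 - beta) of its cash on hand and saves the rest.
rng = np.random.default_rng(0)
n_households, n_periods = 9_000, 5_000
r, wage, p_employed = 0.01, 1.0, 0.9

# "Teeny tiny" differences in the discount factor beta across households.
beta = rng.choice([0.980, 0.985, 0.989], size=n_households)
wealth = np.ones(n_households)

for _ in range(n_periods):
    employed = rng.random(n_households) < p_employed  # idiosyncratic job risk
    cash = wealth + wage * employed
    wealth = (1 + r) * beta * cash  # save beta of cash on hand, consume the rest

for b in (0.980, 0.985, 0.989):
    print(f"beta = {b}: mean long-run wealth ~ {wealth[beta == b].mean():8.1f}")
```

In this toy, a household’s long-run wealth scales like 1/(1 − (1 + r)β), so nudging β toward 1/(1 + r) blows wealth up without bound; that is the sense in which tiny differences in patience become enormous differences in wealth.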

I’ve heard, like one does, all kinds of critiques of economics as an ideological discipline. It’s striking to read a landmark paper in the field with this conclusion. It strikes directly against other mainstream political narratives. For example, there is no accounting of “privilege” or intergenerational transfer of social capital in this model. And while they acknowledge that other papers discuss whether larger holdings of household capital earn higher rates of return, Krusell and Smith sidestep this and make it about household saving.

The tools and methods in the paper are quite fascinating. I’m looking forward to more work in this domain.

References

Krusell, P., & Smith, A. A., Jr. (1998). Income and wealth heterogeneity in the macroeconomy. Journal of Political Economy, 106(5), 867-896.

ethnography is not the only social science tool for algorithmic impact assessment

Quickly responding to Selbst, Elish, and Latonero’s “Accountable Algorithmic Futures”, Data and Society’s response to the Algorithmic Accountability Act of 2019…

The bill would empower the FTC to conduct “automated decision systems impact assessments” (ADSIA) of automated decision-making systems. The article argues that the devil is in the details, and that the way the FTC goes about these assessments will determine their effectiveness.

The point of their article, which I found notable, is to assert the appropriate intellectual discipline for these impact assessments.

This is where social science comes in. To effectively implement the regulations, we believe that engagement with empirical inquiry is critical. But unlike the environmental model, we argue that social sciences should be the primary source of information about impact. Ethnographic methods are key to getting the kind of contextual detail, also known as “thick description,” necessary to understand these dimensions of effective regulation.

I want to flag this as weird.

There is an elision here between “the social sciences” and “ethnographic methods”, as if there were no social sciences that were not ethnographic. And then “thick description” is implied to be the only source of contextual detail that might be relevant to impact assessments.

This is a familiar mantra, but it’s also plainly wrong. There are many disciplines and methods within “the social sciences” that aren’t ethnographic, and many ways to get at contextual detail that do not involve “thick description”. There is a worthwhile and interesting intellectual question here: what are the appropriate methods for algorithmic impact assessment? The authors of this piece assume an answer to that question without argument.

A few brief notes towards “Procuring Cybersecurity”

I’m shifting research focus a bit and wanted to jot down a few notes. The context for the shift is that I have the pleasure of organizing a roundtable discussion for NYU’s Center for Cybersecurity and Information Law Institute, working closely with Thomas Streinz of NYU’s Guarini Global Law and Tech.

The context for the workshop is the steady feed of news about global technology supply chains and how they are not just relevant to “cybersecurity”, but in some respects are constitutive of cyberinfrastructure and hence the field of its security.

I’m using “global technology supply chains” rather loosely here, but this includes:

  • Transborder personal data flows as used in e-commerce
  • Software- (and Infrastructure-) as-a-Service being marketed internationally (including Google used abroad, for example)
  • Enterprise software import/export
  • Electronics manufacturing and distribution.

Many concerns about cybersecurity as a global phenomenon circulate around the imagined or actual supply chain. These are sometimes national security concerns that result in real policy, as when Australia recently banned Huawei and ZTE from supplying 5G network equipment for fear that it would provide a vector of interference from the Chinese government.

But the nationalist framing is certainly not the whole story. I’ve heard anecdotally that after the Snowden revelations, Microsoft internally began to see the U.S. government as a cybersecurity “adversary”. Corporate tech vendors naturally don’t want to be known as vectors for national surveillance, as this cuts down on their global market share.

Governments and corporations have different cybersecurity incentives and threat models. These models intersect and themselves create the dynamic cybersecurity field. For example, the Chinese government has viewed foreign software vendors as cybersecurity threats, and has responded by mandating source code disclosure. But as this is a vector of potential IP theft, foreign vendors have balked, seeing the mandate itself as a threat (Ahmed and Weber, 2018). Complicating things further, a defensive “cybersecurity” measure can also serve the goal of protecting domestic technology innovation–which can be framed as providing a nationalist “cybersecurity” edge in the long run.

What, if anything, prevents a total cyberwar of all against all? One answer is trade agreements that level the playing field, or at least establish rules for the game. Another is open technology and standards, which provide an alternative field driven by the benefits of interoperability rather than proprietary interest and secrecy. Is it possible to capture any of this in an accurate model or theory?
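One toy way in (my sketch, nothing from the roundtable): treat the “war of all against all” as an iterated prisoner’s dilemma, where a trade agreement acts as a penalty on defection and open standards raise the returns to mutual cooperation.

```python
# Toy iterated prisoner's dilemma: "defect" = exploit the supply chain covertly,
# "cooperate" = interoperate openly. My sketch, not an established model.
R, T, P, S = 3, 5, 1, 0  # reward, temptation, punishment, sucker payoffs

def play(strategy_a, strategy_b, rounds=100, defect_penalty=0.0):
    """Return the row player's total score; defect_penalty models an agreement with teeth."""
    hist_a, hist_b, score = [], [], 0.0
    payoffs = {("C", "C"): R, ("C", "D"): S,
               ("D", "C"): T - defect_penalty, ("D", "D"): P - defect_penalty}
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score += payoffs[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
    return score

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"

for penalty in (0.0, 2.5):
    print(f"penalty={penalty}:",
          "defector scores", play(always_defect, tit_for_tat, defect_penalty=penalty),
          "| cooperator pair scores", play(tit_for_tat, tit_for_tat, defect_penalty=penalty))
```

With no penalty, defecting against a forgiving cooperator pays in the short run; past a threshold penalty, mutual cooperation dominates. That is one very stylized reading of what trade agreements and open standards do to the game.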

I love having the opportunity to explore these questions, as they are at the intersection of my empirical work on software supply chains (Benthall et al., 2016; Benthall, 2017) and also theoretical work on data economics in my dissertation. My hunch for some time has been that there’s a dearth of solid economics theory for the contemporary digital economy, and this is one way of getting at that.

References

Ahmed, S., & Weber, S. (2018). China’s long game in techno-nationalism. First Monday, 23(5). 

Benthall, S., Pinney, T., Herz, J. C., Plummer, K., Benthall, S., & Rostrup, S. (2016). An ecological approach to software supply chain risk management. In Proceedings of the 15th Python in Science Conference.

Benthall, S. (2017, September). Assessing software supply chain risk using public data. In 2017 IEEE 28th Annual Software Technology Conference (STC) (pp. 1-5). IEEE.

Notes on O’Neil, Chapter 2, “Bomb Parts”

Continuing with O’Neil’s Weapons of Math Destruction on to Chapter 2, “Bomb Parts”. This is a popular book and these are quick chapters. But that’s no reason to underestimate them! This is some of the most lucid work I’ve read on algorithmic fairness.

This chapter talks about three kinds of “models” used in prediction and decision making, with three examples. O’Neil speaks highly of the kinds of models used in baseball to predict the trajectory of hits and determine the optimal placement of people in the field. (Ok, I’m not so good at baseball terms.) These are good, O’Neil says, because they are transparent, they are consistently adjusted with new data, and the goals are well defined.

O’Neil then very charmingly writes about the model she uses mentally to determine how to feed her family. She juggles a lot of variables: the preferences of her kids, the nutrition and cost of ingredients, and time. This is all hugely relatable–everybody does something like this. Her point, it seems, is that this form of “model” encodes a lot of opinions or “ideology” because it reflects her values.

O’Neil then discusses recidivism prediction, specifically the LSI-R (Level of Service Inventory–Revised) tool. It asks questions like “How many previous convictions have you had?” and uses the answers to predict the likelihood of future recidivism. The problem is that (a) this is sensitive to overpolicing in neighborhoods, which has little to do with actual recidivism rates (as opposed to rearrest rates), and (b) black neighborhoods, for example, are more likely to be overpoliced, meaning that the tool, which is not very good at predicting recidivism, has disparate impact. This is an example of what O’Neil calls (eponymously) a weapon of math destruction (WMD).
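A minimal simulation (mine, not O’Neil’s) makes point (a) concrete: if rearrest is used as the training label for recidivism, and one group is policed more heavily, the data look riskier for that group even when true behavior is identical.

```python
import numpy as np

# Toy illustration of proxy-label bias: true recidivism is identical across
# groups, but overpolicing makes re-offenses more likely to be *observed*.
rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)         # 1 = heavily policed neighborhood
recidivate = rng.random(n) < 0.30          # same true rate for both groups
p_detect = np.where(group == 1, 0.8, 0.4)  # overpolicing doubles detection
rearrest = recidivate & (rng.random(n) < p_detect)

# A model trained on rearrest labels "learns" that group 1 is twice as risky,
# even though actual recidivism is the same.
for g in (0, 1):
    print(f"group {g}: true recidivism {recidivate[group == g].mean():.2f}, "
          f"observed rearrest rate {rearrest[group == g].mean():.2f}")
```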

She argues that the three qualities of a WMD are Scale, Opacity, and Damage. Which makes sense.

As I’ve said, I think this is a better take on algorithmic ethics than almost anything I’ve read on the subject before. Why?

First, it doesn’t use the word “algorithm” at all. That is huge, because 95% of the time the use of the word “algorithmic” in the technology-and-society literature is stupid. People use “algorithm” when they really mean “software”. Now, they use “AI System” to mean “a company”. It’s ridiculous.

O’Neil makes it clear in this chapter that what she’s talking about are different kinds of models. Models can be in one’s head (as in her plan for feeding her family) or in a computer, and both kinds of models can be racist. That’s a helpful, sane view. It’s been the consensus of computer scientists, cognitive scientists, and AI types for decades.

The problem with WMDs, as opposed to other, better models, is that WMD models are unhinged from reality. O’Neil’s complaint is not with the use of models as such, but rather that models are being used without being properly trained, with sound sampling and statistics. WMDs are not artificial intelligences; they are artificial stupidities.

In more technical terms, it seems like the problem with WMDs is not that they don’t properly trade off predictive accuracy with fairness, as some computer science literature would suggest is necessary. It’s that the systems have high error rates in the first place because the training and calibration systems are poorly designed. What’s worse, this avoidable error is disparately distributed, causing more harm to some groups than others.

This is a wonderful and eye-opening account of unfairness in the models used by automated decision-making systems (note the language). Why? Because it shows that there is a connection between statistical bias, the kind of bias that creates distortions in a quantitative predictive process, and social bias, the kind of bias people worry about politically–the book uses the term consistently in both senses. If there is statistical bias that weighs against some social group, then that’s definitely, 100% a form of bias.

Importantly, this kind of bias–statistical bias–is not something that every model must have. Only badly made models have it. It’s something that can be mitigated using scientific rigor and sound design. If we see the problem the way O’Neil sees it, then we can see clearly how better science, applied more rigorously, is also good for social justice.

As a scientist and technologist, it’s been terribly discouraging in the past years to be so consistently confronted with a false dichotomy between sound engineering and justice. At last, here’s a book that clearly outlines how the opposite is the case!

“the politicization of the social” and “politics of identity” in Omi and Winant, Cha. 6

A confusing debate in my corner of the intellectual Internet is about (a) whether the progressive left has a coherent intellectual stance that can be articulated, (b) what to call this stance, (c) whether the right-wing critics of this stance have the intellectual credentials to refer to it and thereby land any kind of rhetorical punch. What may be true is that both “sides” reflect social movements more than they reflect coherent philosophies as such, and so trying to bridge between them intellectually is fruitless.

Happily, I have been reading through Omi and Winant, who among other things outline a history of what I think of as the progressive left, or the “social justice”/“identity politics” movement, in the United States. They address this in their Chapter 6: “The Great Transformation”. They use “the Great Transformation” to refer to the “racial upsurges” of the 1950’s and 1960’s.

They are, as far as I can tell, the only people who ever use “The Great Transformation” to refer to this period. I don’t think it is going to stick. They name it this because they see this period as a great victorious period for democracy in the United States. Omi and Winant refer to previous periods in the United States as “racial despotism”, meaning that the state was actively treating nonwhites as second class citizens and preventing them from engaging in democracy in a real way. “Racial democracy”, which would involve true integration across race lines, is an ideal future or political trajectory that was approached during the Great Transformation but not realized fully.

The story of the civil rights movements of the mid-20th century is textbook material, and I won’t repeat Omi and Winant’s account, which is interesting for a lot of reasons. One reason it is interesting is how explicitly their analysis is influenced by Gramsci. As the “despotic” elements of United States power structures fade, the racial order is maintained less by coercion and more by consent. A power disparity in a social order maintained by consent is, in Gramscian theory, a hegemony.

They explain the Great Transformation as being due to two factors. One was the decline of the ethnicity paradigm of race, which had perhaps naively assumed that racial conflicts could be resolved through assimilation and recognition of ethnic differences without addressing the politically entrenched mechanisms of racial stratification.

The other factor was the rise of new social movements characterized by, in alliance with second-wave feminism, the politicization of the social, whereby social identity and demographic categories were made part of the public political discourse, rather than something private. This is the birth of “politics of identity”, or “identity politics”, for short. These were the original social justice warriors. And they attained some real political victories.

The reason why these social movements are not exactly normalized today is that there was a conservative reaction in the 70’s to resist these changes. The way Omi and Winant tell it, the “colorblind ideology” of the early 00’s was the culmination of a kind of political truce between “racial despotism” and “racial democracy”–a “racial hegemony”. Gilman has called this “racial liberalism”.

So what does this mean for identity politics today? It means it has its roots in political activism which was once very radical. It really is influenced by Marxism, as these movements were. It means that its co-option by the right is not actually new, as “reverse racism” was one of the inventions of the groups that originally resisted the Civil Rights movement in the 70’s. What’s new is the crisis of hegemony, not the constituent political elements that were its polar extremes, which have been around for decades.

What it also means is that identity politics has been, from its start, a tool for political mobilization. It is not a philosophy of knowledge, or of how to live the good life, or a world view in a richer sense. It serves a particular instrumental purpose. Omi and Winant talk about how the politics of identity is “attractive”, how it is a contagion. These are positive terms for them; they are impressed at how anti-racism spreads. These days I am often referred to Phillips’ report, “The Oxygen of Amplification”, which is about preventing the spread of extremist views by reducing the amount of reporting on them, even reporting framed in ‘disgust’. It is only fair to point out that identity politics as a left-wing innovation was at one point an “extremist” view, and that proponents of that view do use media effectively to spread it. This is just how media-based organizing tactics work, now.

From social movements to business standards

Matt Levine has a recent piece discussing how uncovering a company’s history of sexual harassment complaints against its leadership is becoming part of standard due diligence before an acquisition. Implicitly, the threat of liability, and presumably the costs of a public relations scandal, are material to the value of the company being acquired.

Perhaps relatedly, the National Venture Capital Association has added to its Model Legal Documents a slew of policies related to harassment and discrimination, codes of conduct, attracting and retaining diverse talent, and family friendly policies. Rumor has it that venture capitalists will now encourage companies they invest in to adopt these tested versions of the policies, much as an organization would adopt a tested and well-understood technical standard.

I have studied social movements and political change in various researcher roles, but these studies have left me with the conclusion that changes to culture are rarely self-propelled; rather, they are often due to more fundamental changes in demographics or institutions. State legislation is very slow to move and limited in its range, and so often trails behind other amassings of power and will.

Corporate self-regulation, on the other hand, through standards, contracts, due diligence, and the like, seems to be quite adaptive. This is leading me to the conclusion that a best-kept secret of cultural change is that some of its main drivers are actually deeply embedded in corporate law. Corporate law has the reputation of being a dry subject that sucks recent law grads into soulless careers. But what if that wasn’t what corporate law was? What if corporate law was really where the action is?

In broader terms, the adaptivity of corporate policy to changing demographics and social needs perhaps explains the paradox of “progressive neoliberalism”: that the emerging professional business class seems to be socially liberal, whether or not it is fiscally conservative. Professional culture requires, due to antidiscrimination law and other policies, the compliance of its employees with a standard of ‘political correctness’. People can’t be hostile to each other in the workplace or else they will get fired, and they especially can’t be hostile to anybody on the basis of their being part of a protected category. This was enshrined into law long ago. Part of the role of educational institutions is to teach students a coherent story about why these rules are what they are and how they are not just legally mandated, but morally compelling. So the professional class has an ideology of inclusivity because it must.

On “Racialization” (Omi and Winant, 2014)

Notes on Omi and Winant, 2014, Chapter 4, Section: “Racialization”.

Summary

Race is often seen as either an objective category, or an illusory one.

Viewed objectively, it is seen as a biological property, tied to phenotypic markers and possibly other genetic traits. It is viewed as an ‘essence’. Omi and Winant argue that the concept of ‘mixed-race’ depends on this kind of essentialism, as it implies a kind of blending of essences. This is the view associated with “scientific” racism, most prevalent in the prewar era.

Viewed as an illusion, race is seen as an ideological construct: an epiphenomenon of culture, class, or peoplehood, formed as a kind of “false consciousness”, in the Marxist terminology. This view is associated with certain critics of affirmative action who argue that any racial classification is inherently racist.

Omi and Winant are critical of both perspectives, and argue for an understanding of race as socially real and grounded non-reducibly in phenomic markers but ultimately significant because of the social conflicts and interests constructed around those markers.

They define race as: “a concept that signifies and symbolizes social conflicts and interests by referring to different types of human bodies.”

The visual aspect of race is irreducible, and becomes significant when, for example, it becomes “understood as a manifestation of more profound differences that are situated within racially identified persons: intelligence, athletic ability, temperament, and sexuality, among other traits.” These “understandings”, which it must be said may be fallacious, “become the basis to justify or reinforce social differentiation.”

This process of adding social significance to phenomic markers is, in O&W’s language, racialization, which they define as “the extension of racial meanings to a previously racially unclassified relationship, social practice, or group.” They argue that racialization happens at both macro and micro scales, ranging from the consolidation of the world-system through colonization to incidents of racial profiling.

Race, then, is a concept that refers to different kinds of bodies by phenotype and to the meanings and social practices ascribed to them. When racial concepts are circulated and accepted as ‘social reality’, racial difference is no longer dependent on visual difference alone, but takes on a life of its own.

Omi and Winant therefore take a nuanced view of what it means for a category to be socially constructed, and it is a view that has concrete political implications. They consider the question, raised frequently, as to whether “we” can “get past” race, or go beyond it somehow. (Recall that this edition of the book was written during the Obama administration and is largely a critique of the idea, which seems silly now, that his election made the United States “post-racial”).

Omi and Winant see this framing as unrealistically utopian, based on the extreme view that race is “illusory”. It poses race as a problem, a misconception of the past. A more effective position, they claim, would note that race is an element of social structure, not an irregularity in it. “We” cannot naively “get past it”, but neither do “we” need to accept the erroneous conclusion that race is a fixed biological given.

Comments

Omi and Winant’s argument here is mainly one about the ontology of social forms. In my view, this question of social form ontology is one of the “hard problems” remaining in philosophy, perhaps equivalent to if not more difficult than the hard problem of consciousness. So no wonder it is such a fraught issue.

The two poles of thinking about race that they present initially, the essentialist view and the epiphenomenal view, had their heyday in particular historical intellectual movements. Proponents of these positions are still popularly active today, though perhaps it’s fair to say that both extremes are now marginalized out of the intellectual mainstream. Despite nobody really understanding how social construction works, most educated people are probably willing to accept that race is socially constructed in one way or another.

It is striking, then, that Omi and Winant’s view of the mechanism of racialization, which involves the reading of ‘deeper meanings’ into phenomic traits, is essentially a throwback to the objective, essentializing viewpoint. Perhaps there is a kind of cognitive bias, maybe representativeness bias or fundamental attribution bias, that is responsible for the cognitive errors that make racialization possible and persistent.

If so, then the social construction of race would be due as much to the limits of human cognition as to the circulation of concepts. That would explain the temptation to believe that we can ‘get past’ race, because we can always believe in the potential for a society in which people are smarter and are trained out of their basic biases. But Omi and Winant would argue that this is utopian. Perhaps the wisdom of sociology and social science in general is the conservative recognition of the widespread implications of human limitation. As the social expert, one can take the privileged position that notes that social structure is the result of pervasive cognitive error. That pervasive cognitive error is perhaps a more powerful force than the forces developing and propagating social expertise. Whether it is or is not may be the existential question for liberal democracy.

An unanswered question at this point is whether, if race were broadly understood as a function of social structure, it remains as forceful a structuring element as if it is understood as biological essentialism. It is certainly possible that, if understood as socially contingent, the structural power of race will steadily erode through such statistical processes as regression to the mean. In terms of physics, we can ask whether the current state of the human race(s) is at equilibrium, or heading towards an equilibrium, or diverging in a chaotic and path-dependent way. In any of these cases, there is possibly a role to be played by technical infrastructure. In other words, there are many very substantive and difficult social scientific questions at the root of the question of whether and how technical infrastructure plays a role in the social reproduction of race.

interesting article about business in China

I don’t know much about China, really, so I’m always fascinated to learn more.

This FT article, “Anbang arrests demonstrates hostility to business”, by Jamil Anderlini, provides some wonderful historical context to a story about the arrest of an insurance oligarch.

In ancient times, merchants were at the very bottom of the four official social classes, below warrior-scholars, farmers and artisans. Although some became very rich they were considered parasites in Chinese society.

Ever since the Han emperors established the state salt monopoly in the second century BCE (remnants of which remain to this day), large-scale business enterprises have been controlled by the state or completely reliant on the favour of the emperor and the bureaucrat class.

In the 20th century, the Communist emperor Mao Zedong effectively managed to stamp out all private enterprise for a while.

Until the party finally allowed “capitalists” to join its ranks in 2002, many of the business activities carried out by the resurgent merchant class were technically illegal.

China’s rich lists are populated by entrepreneurs operating in just a handful of industries — particularly real estate and the internet.

Tycoons like Mr Wu who emerge in state-dominated sectors are still exceedingly rare. They are almost always closely linked to one of the old revolutionary families exercising enormous power from the shadows.

Everything about this is interesting.

First, in Western scholarship we rarely give China credit for its history of bureaucracy in the absence of capitalism. In the well-known Weberian account, bureaucracy is an institutional invention that provides regular rule of law so that capitalism can thrive. China’s history, by contrast, is statist “from ancient times”, with effective bureaucracy from the beginning. A managerialist history, perhaps.

Which makes the second point so unusual: why, given this long history of bureaucratic rule, are Internet companies operating in a comparatively unregulated way? This seems like a massive concession of power, not unlike how (arguably) the government of the United States conceded a lot of power to Silicon Valley under the Obama administration.

The article dramatically foreshadows a potential power struggle between Xi Jinping’s consolidated state and the tech giant oligarchs:

Now that Chinese President Xi Jinping has abolished his own term limits, setting the stage for him to rule for life if he wants to, the system of state patronage and the punishment of independent oligarchs is likely to expand. Any company or billionaire who offends the emperor or his minions will be swiftly dealt with in the same way as Mr Wu.

There is one group of Chinese companies with charismatic — some would say arrogant — founders that enjoy immense economic power in China today. They would seem to be prime candidates if the assault on private enterprise is stepped up.

Internet giants Alibaba, Tencent and Baidu are not only hugely profitable, they control the data that is the lifeblood of the modern economy. That is why Alibaba founder Jack Ma has repeatedly said, including to the FT, that he would gladly hand his company over to the state if Beijing ever asked him to. Investors in BABA can only hope it never comes to that.

That is quite the expression of feudal fealty from Jack Ma. Truly, a totally different business culture from that of the United States.

Exit vs. Voice as Defecting vs. Cooperation as …

These dichotomies that are often thought of separately are actually the same.

Cooperation             Defection
-----------             ---------
Voice (Hirschman)       Exit (Hirschman)
Lifeworld (Habermas)    System (Habermas)
Power (Arendt)          Violence (Arendt)
Institutions            Markets

Marcuse, de Beauvoir, and Badiou: reflections on three strategies

I have written in this blog about three different philosophers who articulated a vision of hope for a more free world, including in their account an understanding of the role of technology. I would like to compare these views because nuanced differences between them may be important.

First, let’s talk about Marcuse, a Frankfurt School thinker whose work was an effective expression of philosophical Marxism that catalyzed the New Left. Marcuse was, like other Frankfurt School thinkers, concerned about the role of technology in society. His proposed remedy was “the transcendent project“, which involves an attempt at advancing “the totality” through an understanding of its logic and action to transform it into something that is better, more free.

As I began to discuss here, there is a problem with this kind of Marxist aspiration for a transformation of all of society through philosophical understanding, which is this: the political and technical totality exists as it does in no small part to manage its own internal information flows. Information asymmetries and differentiation of control structures are a feature, not a bug. The convulsions caused by the Internet as it tears and repairs the social fabric have not created the conditions of unified enlightened understanding. Rather, they have exposed that given nearly boundless access to information, most people will ignore it and maintain, against all evidence to the contrary, the dignity of one who has a valid opinion.

The Internet makes a mockery of expertise, and makes no exception for the expertise necessary for the Marcusian “transcendental project”. Expertise may be replaced with the technological apparatuses of artificial intelligence and mass data collection, but the latter are a form of capital whose distribution is part of the totality. If they are having their transcendent effect today, as the proponents of AI claim, this effect is in the hands of a very few. Their motivations are inscrutable. As they have their own opinions and courtiers, writing for them is futile. They are, properly speaking, a great uncertainty that shows that centralized control does not close down all options. It may be that the next defining moment in history is set by how Jeff Bezos decides to spend his wealth, and that is his decision alone. For “our” purposes–yours, my reader, and mine–this arbitrariness of power must be seen as part of the totality to be transcended, if that is possible.

It probably isn’t. And if it really isn’t, that may be the best argument for something like the postmodern breakdown of all epistemes. There are at least two strands of postmodern thought coming from the denial of traditional knowledge and university structure. The first is the phenomenological privileging of subjective experience. This approach has the advantage of never being embarrassed by the fact that the Internet is constantly exposing us as fools. Rather, it allows us to narcissistically and uncritically indulge in whatever bubble we find ourselves in. The alternative approach is to explicitly theorize about one’s finitude and its radical implications: to embrace a kind of realist skepticism, or at least an acknowledgement of the limitations of the human condition.

It’s this latter approach that was taken up by the existentialists in the mid-20th century. In particular, I keep returning to de Beauvoir as a hopeful voice who recognizes a role for science that is not totalizing, but nevertheless liberatory. De Beauvoir does not take aim, like Marcuse and the Frankfurt School, at societal transformation. Her concern is with individual transformation, which is, given the radical uncertainty of society, a far more tractable problem. Individual ethics are based in local effects, not grand political outcomes. The desirable local effects are personal liberation and the liberation of those one comes in contact with. Science, among other activities, is a way of opening new possibilities, not limited to what is instrumental for control.

Such a view of incremental, local, individual empowerment and goodness seems naive in the face of pessimistic views of society’s corruption. Whether these be economic or sociological theories of how inequality and oppression are locked into society, and however emotionally compelling and widespread they may be in social media, it is necessary by our previous argument to remember that these views are always mere ideology, not scientific fact, because an accurate totalizing view of society is impossible given real constraints on information flow and use. Totalizing ideologies that are not rigorous in their acceptance of basic realist points are a symptom of a more complex social structure (i.e., the distribution of capitals, the reproduction of many forms of habitus), not a definition of it.

It is consistent for a scientific attitude to deflate political ideology because this deflation is an opening of possibility against both utopian and dystopian trajectories. What’s missing is a scientific proof of this very point, comparable to a Halting Problem or Incompleteness Theorem, but for social understanding.

A last comment, comparing Badiou to de Beauvoir and Marcuse. Badiou’s theory of the Event as the moment that may be seized to effect a transformation is perhaps a synthesis of existentialist and Marxian philosophies. Badiou is still concerned with transcendence, i.e. the moment when, given one assumed structure to life or reality or psychology, one discovers an opening into a renewed life with possibilities the old model did not allow. But (at least as far as I have read him, which is not enough) he sees the Event as something that comes from without. It cannot be predicted or anticipated within the system; it is instead a kind of grace. Without breaking explicitly from professional secularism, Badiou’s work suggests that we must have faith in something outside our understanding to provide an opportunity for transcendence. This is opposed to the more muscular theories described above: Marcuse’s transcendent political activism and de Beauvoir’s active individual projects are not as patient.

I am still young and strong and so prefer the existentialist position on these matters. I am politically engaged to some extent and so, as an extension of my projects of individual freedom, am in search of opportunities for political transcendence as well–a kind of Marcuse light, since politics, like science, is a field of contest that is reproduced as its games are played; that is its structure. But life has taught me again and again to appreciate Badiou’s point as well: the appreciation of the unforeseen opportunity, the scientific and political anomaly.

What does this reflection conclude?

First, it acknowledges the situatedness and fragility of expertise, which deflates grand hopes for transcendent political projects. Pessimistic ideologies that characterize the totality as beyond redemption are false; indeed it is characteristic of the totality that it is incomprehensible. This is a realistic view, and transcendence must take it seriously.

Second, it acknowledges the validity of more localized liberatory projects despite the first point.

Third, it acknowledges that the unexpected event is a feature of the totality to be embraced, contrary to what pessimistic ideologies suggest. The latter, far from encouraging transcendence, are blinders that prevent the recognition of events.

Because realism requires that we not abandon core logical principles despite our empirical uncertainty, you may permit one more deduction. To the extent that actors in society pursue the de Beauvoiran strategy of engaging in local liberatory projects that affect others, the probability of a Badiousian event in the life of another increases. Solipsism is false, and so (to put it tritely) “random acts of kindness” do have their effect on the totality, in aggregate. In fact, there may be no more radical political agenda than this opening up of spaces of local freedom, which shrugs off the depression of pessimistic ideology and suppression of technical control. Which is not a new view at all. What is perhaps surprising is how easy it may be.