Digifesto


naturalized ethics and natural law

One thing that’s become clear to me lately is that I now believe that ethics can be naturalized. I also believe that there is in fact a form of ‘natural law’. By this I mean that there are rights and values that are inherent to human nature. Real legal systems can either live up to natural law, or not.

This is not the only position that it’s possible to take on these topics.

One different position, that I do not have, is that ethics depends on the supernatural. I bring this up because religion is once again very politically salient in the United States. Abrahamic religions ground ethics and morality in a covenant between humans and a supernatural God. Divine power authorizes the ethical code. In some cases this is explicitly stated law, in others it is a set of principles. Beyond divine articulation, this position maintains that ethics are supernaturally enforced through reward and punishment. I don’t think this is how things work.

Another position I don’t hold is that ethics are opinion or cultural construction, full stop. Certainly there’s a wide diversity of opinions on ethics and cultural attitudes. Legal systems vary from place to place. This diversity is sometimes used as evidence that there aren’t truths about ethics or law to be had. But that is, taken alone, a silly argument. Lots of people and legal systems are simply wrong. Moreover, moral and ethical truths can take contingency and variety into account, and they probably should. It can be true that laws should be well-adapted to some otherwise arbitrary social expectations or material conditions. And so on.

There has historically been hemming and hawing about the fact/value dichotomy. If there’s no supernatural guarantor of ethics, is the natural world sufficient to produce values beyond our animal passions? This increasingly feels like an argument from a previous century. Adequate solutions to this problem have been offered by philosophers over time. They tend to involve some form of rational or reflective process, and aggregation over the needs and opinions of people in heterogeneous circumstances. Habermas comes to mind as one of the synthesizers of a new definition of naturalized law and ethics.

For some reason, I’ve encountered so much resistance to this form of ethical or moral realism over the years. But looking back on it, I can’t recall a convincing argument for that resistance. I can recall many claims that the idea of ethical and moral truth is somehow politically dangerous, but that is not the same thing.

There is something teleological about most viable definitions of naturalized ethics and natural law. They are what would hypothetically be decided on by interlocutors in an idealized but not yet realized circumstance. A corollary to my position is that ethical and moral facts exist, but many have not yet been discovered. A scientific process is needed to find them. This process is necessarily a social scientific process, since ethical and moral truths are truths about social systems and how they work.

It would be very fortunate, I think, if some academic department, discipline, or research institution were to take up my position. At present, we seem to have a few different political positions available to us in the United States:

  • A conservative rejection of the university as insufficiently moral because of its abandonment of God
  • A postmodern rejection of ethical and moral truths that relativizes everything
  • A positivist rejection of normativity as the object of social science because of the fact/value dichotomy
  • Politicized disciplines that presume a political agenda and then perform research aligned with that agenda
  • Explicitly normative disciplines that are discursive and humanistic but not inclined towards rigorous analysis of the salient natural facts

None of these is conducive to a scientific study of what ethics and morals should be. There are exceptions, of course, and many brilliant people in many corners who make great contributions towards this goal. But they seem scattered at the margins of the various disciplines, rather than consolidated into a thriving body of intellect. At a moment where we see profound improvements (yes, improvements!) in our capacity for reasoning and scientific exploration, why hasn’t something like this emerged? It would be an improvement over the status quo.

A philosophical puzzle: morality with complex rationality

There’s a recurring philosophical puzzle that keeps coming up as one drills into the foundational issues at the heart of technology policy. The most complete articulation of it that I know of is in a draft I’ve written with Jake Goldenfein whose publication was delayed by COVID. But here is an abbreviated version of the philosophical problem, distilled perhaps from the tech policy context.

For some reason it all comes back to Kant. The categorical imperative has two versions that are supposed to imply each other:

  • Follow rules that would be agreed on as universal by rational beings.
  • Treat others as ends and not means.

This is elegant and worked quite well while the definitions of ‘rationality’ in play were simple enough that Man could stand at the top of the hierarchy.

Kant is outdated now, of course, but we can see the influence of this theory in Rawls’s account of liberal ethics (the ‘veil of ignorance’ being a proxy for the reasoning being who has transcended their empirical body), in Habermas’s account of democracy (communicative rationality involving the setting aside of individual interests), and so on. Social contract theories are more or less along these lines. This paradigm is still more or less the gold standard.

There are two serious challenges to this moral paradigm. Both relate to how the original model of rationality it is based on is perhaps naive, or so rarefied as to be unrealistic. What happens if you deny that people are rational in any disinterested sense, or allow for different levels of rationality? It all breaks down.

On the one hand, there are various forms of egoism. Sloterdijk argues that Nietzsche stood out partly because he advocated an ethics of self-advancement, which rejected deontological duty. Scandalous. The contemporary equivalent is the reputation of Ayn Rand and those inspired by her. The general idea here is the rejection of the social contract. This is frustrating to those who see the social contract as serious and valuable. A key feature of this view is that reason is not, as it is for Kant, disinterested. Rather, it is self-interested. It’s instrumental reason with attendant Humean passions to steer it. The passions need not be too intellectually refined. Romanticism, blah blah.

On the other hand, the 20th century discovers scientifically the idea of bounded rationality. Herbert Simon is the pivotal figure here. Individuals, being bounded, form organizations to transcend their limits. Simon is the grand theorist of managerialism. As far as I know, Simon’s theories are amoral, strictly about the execution of instrumental reason.

Nevertheless, Simon poses a challenge to the universalist paradigm because he reveals the inadequacy of individual humans to self-determine anything of significance. It’s humbling; it also threatens the anthropocentrism that provided the grounds for humanity’s mutual self-respect.

So where does one go from here?

It’s a tough question. Some spitballing:

  • One option is to relocate the philosophical subject from the armchair (Kant), through the public sphere (Habermas), into a new kind of institution better equipped to support their cogitation about norms. A public sphere equipped with Bloomberg terminals? But then who provides the terminals? And what about actually existing disparities of access?
    • One implication of this option, following Habermas, is that the communications within it, which would have to include data collection and the application of machine learning, would be disciplined in ways that would prevent defections.
    • Another implication, which is the most difficult one, is that the institution that supports this kind of reasoning would have to acknowledge different roles. These roles would constitute each other relationally–there would need to be a division of labor. But those roles would need to each be able to legitimize their participation on the whole and trust the overall process. This seems most difficult to theorize let alone execute.
  • A different option, sort of the unfinished Nietzschean project, is to develop the individual’s choice to defect into something more magnanimous. Simone de Beauvoir’s widely underrated Ethics of Ambiguity is perhaps the best accomplishment along these lines. The individual, once they overcome their own solipsism and consider their true self-interests at an existential level, comes to understand how the success of their projects depends on society, because society will outlive them. In a way, this point echoes Simon’s in that it begins from an acknowledgment of human finitude. It reasons from there to a theory of how finite human projects can become infinite (achieving the goal of immortality for the one who initiates them) by being sufficiently prosocial.

Either of these approaches might be superior to “liberalism”, which arguably is stuck in the first paradigm (though I suppose there are many liberal theorists who would defend their position). As a thought experiment, I wonder what public policies motivated by either of these positions would look like.

Bostrom and Habermas: technical and political moralities, and the God’s eye view

An intriguing chapter that follows naturally from Nick Bostrom’s core argument is his discussion of machine ethics writ large. He asks: suppose one could install ethical principles into an omnipotent machine, trusting it with the future of humanity. What principles should we install?

What Bostrom accomplishes by positing his Superintelligence (which begins with something simply smarter than humans, and evolves over the course of the book into something that takes over the galaxy) is a return to what has been called “the God’s eye view”. Philosophers once attempted to define truth and morality according to the perspective of an omnipotent–often both transcendent and immanent–god. Through the scope of his work, Bostrom has recovered some of these old themes. He does this not only through his discussion of Superintelligence (and positing its existence in other solar systems already) but also through his simulation arguments.

The way I see it, one thing I am doing by challenging the idea of an intelligence explosion and the superintelligent singleton it would produce is problematizing this recovery of the God’s eye view. If your future world is governed by many sovereign intelligent systems instead of just one, then ethics are something that have to emerge from political reality. There is something irreducibly difficult about interacting with other intelligences, and it’s from this difficulty that we get values, not the other way around. This sort of thinking is much more like Habermas’s mature ethical philosophy.

I’ve written about how to apply Habermas to the design of networked publics that mediate political interactions between citizens. What I built and offer as a toy example in that paper, @TheTweetserve, is simplistic but intended just as a proof of concept.

As I continue to read Bostrom, I expect a convergence on principles. “Coherent extrapolated volition” sounds a lot like a democratic governance structure with elected experts at first pass. The question of how to design a governance structure or institution that leverages artificial intelligence appropriately while legitimately serving its users motivates my dissertation research. My research so far has only scratched the surface of this problem.