Digifesto

reflexive control

A theory I wish I had more time to study in depth these days is the Soviet field of reflexive control (see for example this paper by Timothy Thomas on the subject).

Reflexive control is defined as a means of conveying to a partner or an opponent specially prepared information to incline him to voluntarily make the predetermined decision desired by the initiator of the action. Even though the theory was developed long ago in Russia, it is still undergoing further refinement. Recent proof of this is the development, in February 2001, of a new Russian journal known as Reflexive Processes and Control. The journal is not simply the product of a group of scientists but, as the editorial council suggests, the product of some of Russia’s leading national security institutes, and boasts a few foreign members as well.

While the paper describes the theory in broad strokes, I’m interested in how one would formalize and operationalize reflexive control. My intuitions thus far run like this: traditional control theory assumes that the controlled system is inanimate, or at least not autonomous. The controlled system is steered, often dynamically, toward some optimal state. But in reflexive control, the assumption is that the controlled system is autonomous and has a decision-making process or intelligence. Reflexive control is therefore a theory of influence, perhaps of deception. Going beyond mere propaganda, it seems reflexive control can be highly reactive, taking into account the reaction time of other agents in the field.
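
To make those intuitions slightly more concrete, here is a minimal sketch that is entirely my own invention, not anything from the Soviet literature (the action space, the signals, and all the payoff numbers are hypothetical). The idea is that where classical control steers a state variable, reflexive control steers the belief state of an optimizing agent, so that the agent’s voluntary best response is the action the controller wanted all along.

```python
# A toy formalization of reflexive control (my own sketch, not Lefebvre's
# formalism). The controller cannot act on the world directly; it chooses
# which "specially prepared information" to convey so that the opponent,
# best-responding to its induced beliefs, voluntarily picks the action
# the controller desires.

def best_response(perceived_payoffs):
    """The opponent is rational relative to its *perceived* payoffs."""
    return max(perceived_payoffs, key=perceived_payoffs.get)

def reflexive_control(desired_action, signals):
    """Pick the signal whose induced belief state makes the opponent's
    voluntary best response coincide with the action we want."""
    for signal, induced_payoffs in signals.items():
        if best_response(induced_payoffs) == desired_action:
            return signal
    return None  # no available message steers the opponent as desired

# The opponent's true payoffs over its actions (hypothetical numbers).
true_payoffs = {"attack": 2.0, "hold": 1.0, "withdraw": 0.5}

# Each signal induces a different belief state in the opponent.
signals = {
    "feign_strength": {"attack": -1.0, "hold": 0.5, "withdraw": 0.8},
    "feign_weakness": {"attack": 3.0, "hold": 1.0, "withdraw": 0.5},
    "reveal_nothing": {"attack": 2.0, "hold": 1.0, "withdraw": 0.5},
}

print(best_response(true_payoffs))             # attack
print(reflexive_control("withdraw", signals))  # feign_strength
```

The “plant” being controlled is itself an optimizer, so influence runs through its decision procedure rather than through its physical state.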

There are many examples, from a Russian perspective, of the use of reflexive control theory during conflicts. One of the most recent and memorable was the bombing of the market square in Sarajevo in 1995. Within minutes of the bombing, CNN and other news outlets were reporting that a Serbian mortar attack had killed many innocent people in the square. Later, crater analysis of the shells that impacted in the square, along with other supporting evidence, indicated that the incident did not happen as originally reported. This evidence also threw into doubt the identities of the perpetrators of the attack. One individual close to the investigation, Russian Colonel Andrei Demurenko, Chief of Staff of Sector Sarajevo at the time, stated, “I am not saying the Serbs didn’t commit this atrocity. I am saying that it didn’t happen the way it was originally reported.” A US and Canadian officer soon backed this position. Demurenko believed that the incident was an excellent example of reflexive control, in that the incident was made to look like it had happened in a certain way to confuse decision-makers.

Thomas’s article points out that the notable expert in reflexive control in the United States is V. A. Lefebvre, a Soviet expat and mathematical psychologist at UC Irvine. He appears in a faculty listing but doesn’t seem to have a personal home page. His Wikipedia page says that reflexive theory is something like the Soviet alternative to game theory. That makes sense. Lefebvre has used reflexive theory to articulate a mathematical ethics, which is surely relevant to questions of machine ethics today.

Beyond its fascinating relevance to many open research questions in my field, it is interesting to see in Thomas’s article how “reflexive control” seems to capture so much of what is considered “cybersecurity” today.

One of the most complex ways to influence a state’s information resources is by use of reflexive control measures against the state’s decision-making processes. This aim is best accomplished by formulating certain information or disinformation designed to affect a specific information resource best. In this context an information resource is defined as:

  • information and transmitters of information, to include the method or technology of obtaining, conveying, gathering, accumulating, processing, storing, and exploiting that information;
  • infrastructure, including information centers, means for automating information processes, switchboard communications, and data transfer networks;
  • programming and mathematical means for managing information;
  • administrative and organizational bodies that manage information processes, scientific personnel, creators of data bases and knowledge, as well as personnel who service the means of informatizatsiya [informatization].

Unlike many people, I don’t think “cybersecurity” is very hard to define at all. The prefix “cyber-” clearly refers to the information-based control structures of a system, and “security” is just the assurance of something against threats. So we might consider “reflexive control” to be essentially equivalent to “cybersecurity”, except with an emphasis on the offensive rather than defensive aspects of cybernetic control.

I have yet to find something describing the mathematical specifics of the theory. I’d love to find something and see how it compares to other research in similar fields. It would be fascinating to see where Soviet and Anglophone research on these topics is convergent, and where it diverges.

For “Comments on Haraway”, see my “Philosophy of Computational Social Science”

One of my most frequently visited blog posts is titled “Comments on Haraway: Situated knowledge, bias, and code”. I have decided to password-protect it.

If you are looking for a reference with the most important ideas from that blog post, I refer you to my paper, “Philosophy of Computational Social Science”. In particular, its section on “situated epistemology” discusses how I think computational social scientists should think about feminist epistemology.

I have decided to hide the original post for a number of reasons.

  • I wrote it pointedly. I think all the points have now been made better elsewhere, either by me or by the greater political zeitgeist.
  • Because it was written pointedly (even a little trollishly), I am worried that it may be easy to misread my intention in writing it. I’m trying to clean up my act :)
  • I don’t know who keeps reading it, though it seems to consistently get around thirty or more hits a week. Who are these people? They won’t tell me! I think it matters who is reading it.

I’m willing to share the password with anybody who contacts me about it.

Protected: I study privacy now


moved BigBang core repository to DATACTIVE organization

I made a small change this evening which I feel really, really good about.

I transferred the BigBang project from my personal GitHub account to the datactive organization.

I’m very grateful for DATACTIVE’s interest in BigBang and am excited to turn over the project infrastructure to their stewardship.

becoming a #seriousacademic

I’ve decided to make a small change to my on-line identity.

For some time now, my Twitter account has been listed under a pseudonym, “Gnaeus Rafinesque”, and has had a picture of a cat. Today I’m changing it to my full name (“Sebastian Benthall”) and a picture of my face.

Gnaeus Rafinesque

Serious academic

I chose to use a pseudonym on Twitter for a number of reasons. One reason was that I was interested in participant observation in an Internet subculture, Weird Twitter, whose participants generally didn’t use real names because most of their activity on Twitter was very silly.

But another reason was because I was afraid of being taken seriously myself. As a student, even a graduate student, I felt it was my job to experiment, fail, be silly, and test the limits of the media I was working (and playing) within. I learned a lot from this process.

Because I often would not intend to be taken seriously on Twitter, I was reluctant to have my tweets associated with my real name. I deliberately did not try to sever all ties between my Twitter account and my “real” identity, which is reflected elsewhere on the Internet (LinkedIn, GitHub, etc.) because…well, it would have been a lot of futile work. But I think using a pseudonym and a cat picture succeeded in signalling that I wasn’t putting the full weight of my identity, with the accountability entailed by that, into my tweets.

I’m now entering a different phase of my career. Probably the most significant marker of that phase change is that I am now working as a cybersecurity professional in addition to being a graduate student. I’m back in the working world and so in a sense back to reality.

Another marker is that I’ve realized that I’ve got serious things worth saying and paying attention to, and that projecting an inconsequential, silly attitude on Twitter was undermining my ability to say those things.

It’s a little scary shifting to my real name and face on Twitter. I’m likely to censor myself much more now. Perhaps that’s as it should be.

I wonder what other platforms are out there in which I could be more ridiculous.

The FTC and pragmatism; Hoofnagle and Holmes

I’ve started working my way through Chris Hoofnagle’s Federal Trade Commission Privacy Law and Policy. Where I’m situated at the I School, there’s a lot of representation and discussion of the FTC in part because of Hoofnagle’s presence there. I find all this tremendously interesting but a bit difficult to get a grip on, as I have only peripheral experiences of actually existing governance. Instead I’m looking at things with a technical background and what can probably be described as overdeveloped political theory baggage.

So a clearly written and knowledgeable account of the history and contemporary practice of the FTC is exactly what I need to read, I figure.

With the poor judgment of commenting on the book having just cracked it open, I can say that so far it reads, not surprisingly, as a favorable account of the FTC and its role in privacy law. In broad strokes, I’d say Hoofnagle’s narrative is that while the FTC started out as a compromise between politicians with many different positions on trade regulation, and while it has at times had “mediocre” leadership, the FTC is now run by selfless, competent experts with the appropriate balance of economic savvy and empathy for consumers.

I can’t say I have any reason to disagree. I’m not reading for either a critique or an endorsement of the agency. I’m reading with my own idiosyncratic interests in mind: algorithmic law and pragmatist legal theory, and the relationship between intellectual property and antitrust. I’m also learning (through reading) how involved the FTC has been in regulating advertising, which endears the agency to me because I find most advertising annoying.

Missing as I am any substantial knowledge of 20th century legal history, I’m intrigued by resonances between Hoofnagle’s account of the FTC and Oliver Wendell Holmes Jr.’s “The Path of the Law“, which I mentioned earlier. Apparently there’s some tension around the FTC, as some critics would like to limit its powers by holding it more narrowly accountable to common law, as opposed to (if I’m getting this right) a more broadly scoped administrative law that, among other things, allows it to employ skilled economists and technologists. As somebody who has been intellectually very informed by American pragmatism, I’m pleased to notice that Holmes himself would have probably approved of the current state of the FTC:

At present, in very many cases, if we want to know why a rule of law has taken its particular shape, and more or less if we want to know why it exists at all, we go to tradition. We follow it into the Year Books, and perhaps beyond them to the customs of the Salian Franks, and somewhere in the past, in the German forests, in the needs of Norman kings, in the assumptions of a dominant class, in the absence of generalized ideas, we find out the practical motive for what now best is justified by the mere fact of its acceptance and that men are accustomed to it. The rational study of law is still to a large extent the study of history. History must be a part of the study, because without it we cannot know the precise scope of rules which it is our business to know. It is a part of the rational study, because it is the first step toward an enlightened scepticism, that is, towards a deliberate reconsideration of the worth of those rules. When you get the dragon out of his cave on to the plain and in the daylight, you can count his teeth and claws, and see just what is his strength. But to get him out is only the first step. The next is either to kill him, or to tame him and make him a useful animal. For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics. It is revolting to have no better reason for a rule of law than that so it was laid down in the time of Henry IV. It is still more revolting if the grounds upon which it was laid down have vanished long since, and the rule simply persists from blind imitation of the past. (Holmes, 1897)

These are strong words from a Supreme Court justice about the limitations of common law! It’s also a wholehearted endorsement of quantified science as the basis for legal rules. Perhaps what Holmes would have preferred is a world in which statistics and economics themselves became part of the logic of law. However, he takes pains to point out how often legal judgment itself depends not on logic so much as on the unconscious biases of judges and juries, especially with respect to questions of “social advantage”:

I think that the judges themselves have failed adequately to recognize their duty of weighing considerations of social advantage. The duty is inevitable, and the result of the often proclaimed judicial aversion to deal with such considerations is simply to leave the very ground and foundation of judgments inarticulate, and often unconscious, as I have said. When socialism first began to be talked about, the comfortable classes of the community were a good deal frightened. I suspect that this fear has influenced judicial action both here and in England, yet it is certain that it is not a conscious factor in the decisions to which I refer. I think that something similar has led people who no longer hope to control the legislatures to look to the courts as expounders of the constitutions, and that in some courts new principles have been discovered outside the bodies of those instruments, which may be generalized into acceptance of the economic doctrines which prevailed about fifty years ago, and a wholesale prohibition of what a tribunal of lawyers does not think about right. I cannot but believe that if the training of lawyers led them habitually to consider more definitely and explicitly the social advantage on which the rule they lay down must be justified, they sometimes would hesitate where now they are confident, and see that really they were taking sides upon debatable and often burning questions.

What I find interesting about this essay is that it somehow endorses both the use of economics and statistics in advancing legal thinking and also what has become critical legal theory, with its specific consciousness of the role of social power relations in law. So often in contemporary academic discourse, especially when it comes to discussions of regulating technology businesses, these approaches to law are considered opposed. Perhaps it’s appropriate to call a more politically centered position, if there were one today, a pragmatist position.

Perhaps quixotically, I’m very interested in the limits of these arguments and their foundation in legal scholarship because I’m wondering to what extent computational logic can become a first class legal logic. Holmes’s essay is very concerned with the limitations of legal logic:

The fallacy to which I refer is the notion that the only force at work in the development of the law is logic. In the broadest sense, indeed, that notion would be true. The postulate on which we think about the universe is that there is a fixed quantitative relation between every phenomenon and its antecedents and consequents. If there is such a thing as a phenomenon without these fixed quantitative relations, it is a miracle. It is outside the law of cause and effect, and as such transcends our power of thought, or at least is something to or from which we cannot reason. The condition of our thinking about the universe is that it is capable of being thought about rationally, or, in other words, that every part of it is effect and cause in the same sense in which those parts are with which we are most familiar. So in the broadest sense it is true that the law is a logical development, like everything else. The danger of which I speak is not the admission that the principles governing other phenomena also govern the law, but the notion that a given system, ours, for instance, can be worked out like mathematics from some general axioms of conduct. This is the natural error of the schools, but it is not confined to them. I once heard a very eminent judge say that he never let a decision go until he was absolutely sure that it was right. So judicial dissent often is blamed, as if it meant simply that one side or the other were not doing their sums right, and if they would take more trouble, agreement inevitably would come.

This mode of thinking is entirely natural. The training of lawyers is a training in logic. The processes of analogy, discrimination, and deduction are those in which they are most at home. The language of judicial decision is mainly the language of logic. And the logical method and form flatter that longing for certainty and for repose which is in every human mind. But certainty generally is illusion, and repose is not the destiny of man. Behind the logical form lies a judgment as to the relative worth and importance of competing legislative grounds, often an inarticulate and unconscious judgment, it is true, and yet the very root and nerve of the whole proceeding. You can give any conclusion a logical form. You always can imply a condition in a contract. But why do you imply it? It is because of some belief as to the practice of the community or of a class, or because of some opinion as to policy, or, in short, because of some attitude of yours upon a matter not capable of exact quantitative measurement, and therefore not capable of founding exact logical conclusions. Such matters really are battle grounds where the means do not exist for the determinations that shall be good for all time, and where the decision can do no more than embody the preference of a given body in a given time and place. We do not realize how large a part of our law is open to reconsideration upon a slight change in the habit of the public mind. No concrete proposition is self evident, no matter how ready we may be to accept it, not even Mr. Herbert Spencer’s “Every man has a right to do what he wills, provided he interferes not with a like right on the part of his neighbors.”

For Holmes, nature can be understood through a mathematized physics and is in this sense logical. But the law itself is not logical in the narrow sense of providing certainty about concrete propositions and the legal interpretation of events.

I wonder whether the development of more flexible probabilistic logics, such as those that inform contemporary machine learning techniques, would have for Holmes adequately bridged the gap between the logic of nature and the ambiguity of law. These probabilistic logics are designed to allow for precise quantification of uncertainty and ambiguity.
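
To gesture at what I mean by precise quantification of uncertainty: even a one-line Bayesian update treats the plausibility of a proposition as a degree rather than a verdict, which seems closer to how evidence actually functions in legal judgment. A minimal sketch, with made-up numbers:

```python
# A minimal Bayesian update: plausibility as a number in [0, 1] rather
# than a binary verdict. All numbers here are hypothetical.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) via Bayes' rule, for a single piece of evidence E."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# How plausible is a disputed factual claim after one piece of evidence?
p = 0.5                      # prior: genuinely uncertain
p = posterior(p, 0.9, 0.2)   # evidence strongly expected if claim is true
print(round(p, 3))           # ~0.818: likelier, but far from certain
```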

This is not a purely academic question. I’m thinking concretely about applications to regulation. Some of this has already been implemented. I’m thinking about Datta, Tschantz, and Datta’s “Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination” (pdf). I know several other discrimination auditing tools have been developed by computer science researchers. What is the legal status of these tools? Could they or should they be implemented as a scalable or real-time autonomous system?
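
I won’t presume to reproduce the Datta, Tschantz, and Datta methodology here, but the skeleton of such an audit is simple enough to sketch: randomly assign simulated profiles to a protected attribute, query the system under audit, and test whether the gap between groups exceeds what chance would allow. Everything below is synthetic; the serve_ad function is a hypothetical stand-in for the opaque system, not any real one.

```python
# Skeleton of an automated discrimination audit (a sketch in the spirit
# of, not a reproduction of, tools like AdFisher). Synthetic data only.

import random

random.seed(0)

def serve_ad(profile):
    """Hypothetical stand-in for the opaque system under audit."""
    base = 0.30 if profile["gender"] == "female" else 0.45
    return random.random() < base  # True = high-paying-job ad shown

# Randomized experiment: assign fresh simulated profiles to each group.
group_a = [serve_ad({"gender": "female"}) for _ in range(500)]
group_b = [serve_ad({"gender": "male"}) for _ in range(500)]

observed = abs(sum(group_b) / 500 - sum(group_a) / 500)

# Permutation test: how often does a random relabeling of the same
# outcomes produce a gap at least as large as the one observed?
pooled = group_a + group_b
extreme = 0
TRIALS = 10_000
for _ in range(TRIALS):
    random.shuffle(pooled)
    gap = abs(sum(pooled[:500]) / 500 - sum(pooled[500:]) / 500)
    if gap >= observed:
        extreme += 1

print(f"observed gap: {observed:.3f}  p-value: {extreme / TRIALS:.4f}")
```

The statistics are the easy part; the open question above is what legal standing such a test result would have.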

I was talking to an engineer friend the other day and he was telling me that internally to Google, there’s a team responsible for building the automated system that tests all of its other automated systems to make sure they adhere to its own internal privacy standards. This was a comforting thing to hear and not a surprise, as I get the sense from conversations I’ve had with Googlers that they are in general a very ethically conscientious company. What’s distressing to me is that Google may have more powerful techniques available for self-monitoring than the government has for regulation. This is because (I think…again, my knowledge of these matters is actually quite limited) at Google they know when a well-engineered computing system is going to perform better than a team of clerks, and so developing this sort of system is considered worthy of investment. It will be internally trusted as much as any other internal expertise. Whereas in the court system, institutional inertia and dependency on discursive law mean that at best this sort of system can be brought in as an expensive and not entirely trusted external source.

What I’d like to figure out is to what extent agency law in particular is flexible enough to be extended to algorithmic law.

algorithmic law and pragmatist legal theory: Oliver Wendell Holmes Jr. “The Path of the Law”

Several months ago I was taken by the idea that in the future (and maybe, depending on how you think about it, already in the present) laws should be written as computer algorithms. While the idea that “code is law” and that technology regulates is by no means original, what I thought was perhaps provocative was the positive case for the (re-)implementation of the fundamental laws of the city or state in software code.

The argument went roughly like this:

  • Effective law must control a complex society
  • Effective control requires social and political prediction.
  • Unassisted humans are not good at social and political prediction. For this conclusion I drew heavily on Philip Tetlock’s work in Expert Political Judgment.
  • Therefore laws, in order to keep pace with the complexity of society, should be implemented as technical systems capable of bringing data and machine learning to bear on social control.

Science fiction is full of both dystopias and utopias in which society is literally controlled by a giant, intelligent machine. Avoiding either extreme, I just want to make the modest point that there may be scalability problems with law and regulation based on discourse in natural language. To some extent the failure of the state to provide sophisticated, personalized regulation in society has created myriad opportunities for businesses to fill these roles. Now there’s anxiety about the relationship between these businesses and the state as they compete for social regulation. To the extent that businesses are less legitimate rulers of society than the state, it seems a practical, technical necessity that the state adopt the same efficient technologies for regulation that businesses have. To do otherwise is to become obsolete.

There are lots of reasons to object to this position. I’m interested in hearing yours and hope you will comment on this and future blog posts or otherwise contact me with your considered thoughts on the matter. To me the strongest objection is that the whole point of the law is that it is based on precedent, and so any claim about the future trajectory of the law has to be based on past thinking about the law. Since I am not a lawyer and I know precious little about the law, you shouldn’t listen to my argument because I don’t know what I’m talking about. Q.E.D.

My counterargument to this is that there are lots of academics who opine about things they don’t have particular expertise in. One way to get away with this is by deferring to somebody else who has credibility in the field of interest. This is just one of several reasons why I’ve been reading “The Path of the Law“, a classic essay about pragmatist legal theory written by Supreme Court Justice Oliver Wendell Holmes Jr. in 1897.

One of the key points of this essay is that it is a mistake to consider the study of law the study of morality per se. Rather, the study of law is the attempt to predict the decisions that courts will make in the future, based on the decisions courts have made in the past. What courts actually decide is based in part on legal precedent but also on the unconscious inclinations of judges and juries. In ambiguous cases, different legal framings of the same facts will be in competition, and the judgment will give weight to one interpretation or another. Perhaps the judge will attempt to reconcile these differences into a single, logically consistent code.

I’d like to take up the arguments of this essay again in later blog posts, but for now I want to focus on the concept of legal study as prediction. I think this demands focus because while Holmes, like most American pragmatists, had a thorough and nuanced understanding of what prediction is, our mathematical understanding of prediction has come a long way since 1897. Indeed, it is a direct consequence of these formalizations and implementations of predictive systems that we today see so much tacit social regulation performed by algorithms. We know now that effective prediction depends on access to data and the computational power to process it according to well-known algorithms. These algorithms can optimize themselves to such a degree that their specific operations are seemingly beyond the comprehension of the people affected by them. Some lawyers have argued that this complexity should not be allowed to exist.
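
To be concrete about what this formalized sense of prediction looks like, here is a toy of the kind of system I have in mind, fit on entirely fabricated case features (the feature names are my invented stand-ins, not anyone’s actual methodology, and scikit-learn is assumed to be available):

```python
# "Legal study as prediction," taken literally: fit a model on past
# outcomes and score a new case. All data here is fabricated.

from sklearn.linear_model import LogisticRegression

# Hypothetical features per past case: [precedent_strength, plaintiff_resources]
X_past = [[0.9, 0.2], [0.8, 0.7], [0.3, 0.9], [0.2, 0.4], [0.7, 0.5], [0.1, 0.8]]
y_past = [1, 1, 0, 0, 1, 0]  # 1 = plaintiff prevailed

model = LogisticRegression().fit(X_past, y_past)

new_case = [[0.6, 0.6]]
print(model.predict_proba(new_case)[0][1])  # predicted chance plaintiff wins
```

A model this small is fully inspectable; the opacity problem arises when the same recipe is scaled up.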

What I am pointing to is a fundamental tension between the requirement that practitioners of the law be able to predict legal outcomes, and the fact that the logic of the most powerful predictive engines today is written in software code, not words. This is because of physical properties of computation and prediction that are not likely to ever change. And since a powerful predictive engine can just as easily use its power to be strategically unpredictable, this presents an existential challenge to the law. It may simply be impossible for lawyers, acting as human lawyers have for hundreds of years, to effectively predict and therefore regulate powerful computational systems.

One could argue that this means that such powerful computational systems should simply be outlawed. Indeed, this is the thrust of certain lawyers’ arguments. But if we believe that these systems are not going to go away, perhaps because they won’t allow us to regulate them out of existence, then our only viable alternative to suffering under their lawless control is to develop a competing system of computational legalism with the legitimacy of the state.

second-order cybernetics

The mathematical foundations of modern information technology are:

  • The logic of computation and complexity, developed by Turing, Church, and others. These mathematics specify the nature and limits of the algorithm.
  • The mathematics of probability and, by extension, information theory. These specify the conditions and limitations of inference from evidence, and the conditions and limits of communication (a minimal illustration follows this list).
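
As a minimal illustration of that second point, Shannon entropy puts a hard floor under how compactly a source can be communicated, whatever the engineering:

```python
# Entropy: an information-theoretic limit on communication. A source's
# entropy bounds how few bits per symbol any code can achieve on average.

from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))   # ~0.469 bits: predictable, compressible
print(entropy([0.25] * 4))   # 2.0 bits: four equally likely symbols
```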

Since the discovery of these mathematical truths and their myriad applications, there have been those who have recognized that these truths apply both to physical objects, such as natural life and artificial technology, and also to lived experience, mental concepts, and social life. Humanity and nature obey the same discoverable, mathematical logic. This allowed for a vision of a unified science of communication and control: cybernetics.

There has been much intellectual resistance to these facts. One of the most cogent examples is Understanding Computers and Cognition, by Terry Winograd and Fernando Flores. Terry Winograd is the AI professor who advised the founders of Google. His credentials are beyond question. And so the fact that he coauthored a critique of “rationalist” artificial intelligence with Fernando Flores, a Chilean entrepreneur, politician, and philosophy PhD, is significant. In this book, the two authors base their critique of AI on the work of Humberto Maturana, a second-order cyberneticist who believed that life’s organization and phenomenology could be explained by a resonance between organism and environment: structural coupling. Theories of artificial intelligence are incomplete when not embedded in a more comprehensive theory of the logic of life.

I’ve begun studying this logic, which was laid out by Francisco Varela in 1979. Notably, like the other cybernetic logics, it is an account of both physical and phenomenological aspects of life. Significantly Varela claims that his work is a foundation for an observer-inclusive science, which addresses some of the paradoxes of the physicist’s conception of the universe and humanity’s place in it.
My hunch is that these principles can be applied to social scientific phenomena as well, as organizations are just organisms bigger than us. This is a rather strong claim and difficult to test. However, it seems to me, after years of study, the necessary conclusion of available theory. It also seems consistent with recent trends in economics towards complexity and institutional economics, and with the now rather widespread intuition that the economy functions as a complex ecosystem.

This would be a victory for science if we could only formalize these intuitions well enough either to make these theories testable, or to make them so communicable as to be recognized as ‘proved’ by anyone with the wherewithal to study them.

discovering agency in symbolic politics as psychic expression of Blau space

If the Blau space is exogenous to manifest society, then politics is an epiphenomenon. There will be hustlers; there will be oscillations in who is in control. But there is no agency. Particularities are illusory, much as, in quantum field theory, the whole notion of the ‘particle’ is due to our perceptual limitations.

An alternative hypothesis is that the Blau space shifts over time as a result of societal change.

Demographics surely do change over time. But this does not in itself show that Blau space shifts are endogenous to the political system. We could possibly attribute all Blau space shifts to, for example, the apolitical dynamics of population growth and natural resource availability. This is the geographic determinism stance. (I’ve never read Guns, Germs, and Steel… I’ve heard mixed reviews.)

Detecting political agency within a complex system is bound to be difficult because it’s a lot like trying to detect free will, only with a more hierarchical ontology. Social structure may or may not be intelligent. Our individual ability to determine whether it is or not will be very limited. Any individual will have a limited set of cognitive frames with which to understand the world. Most of them will be acquired in childhood. While it’s a controversial theory, the Lakoff thesis that whether one is politically liberal or conservative depends on one’s relationship with one’s parents is certainly very plausible. How does one relate to authority? Parental authority is replaced by state and institutional authority. The rest follows.

None of these projects are scientific. This is why politics is so messed up. Whereas the Blau space is an objective multidimensional space of demographic variability, the political imaginary is the battleground of conscious nightmares in the symbolic sphere. Pathetic humanity, pained by cruel life, fated to be too tall, or too short, born too rich or too poor, disabled, misunderstood, or damned to mediocrity, unfurls its anguish in so many flags in parades, semaphore, and war. But what is it good for?

“Absolutely nothin’!”

I’ve written before about how I think Jung and Bourdieu are an improvement on Freud and Habermas as the basis of a unifying political ideal. Whereas for Freud psychological health is the rational repression of the id so that the moralism of the superego can hold sway over society, Jung sees the spiritual value of the unconscious. All literature and mythology is an expression of emotional data. Awakening to the impersonal nature of one’s emotions, as they are rooted in a collective unconscious constituted by history and culture as well as biology and individual circumstance, is necessary for healthy individuation.

So whereas Habermasian direct democracy, being Freudian through the Frankfurt School tradition, is a matter of rational consensus around norms, presumably coupled with the repression of that which does not accord with those norms, we can wonder what a democracy based on Jungian psychology would look like. It would need to acknowledge social difference within society, as Bourdieu does, and that this social difference puts constraints on democratic participation.

There’s nothing so remarkable about what I’m saying. I’m a little embarrassed to be drawing from European Grand Theorists and psychoanalysts when it would be much more appropriate for me to be looking at, say, the tradition of American political science with its thorough analysis of the role of elites and partisan democracy. But what I’m really looking for is a theory of justice, and the main way injustice seems to manifest itself now is in the resentment of different kinds of people toward each other. Some of this resentment is “populist” resentment, but I suspect that this is not really the source of strife. Rather, it’s the conflict of different kinds of elites, with their bases of power in different kinds of capital (economic, institutional, symbolic, etc.) that has macro-level impact, if politics is real at all. Political forces, which will have leaders (“elites”) simply as a matter of the statistical expression of variable available energy in the society to fill political roles, will recruit members by drawing from the psychic Blau space. As part of recruitment, the political force will activate the habitus shadow of its members, using the dark aspects of the psyche to mobilize action.

It is at this point, when power stokes the shadow through symbols, that injustice becomes psychologically real. Therefore (speaking for now only of symbolic politics, as opposed to justice in material economic actuality, which is something else entirely) a just political system is one that nurtures individuation to such an extent that its population is no longer susceptible to political mobilization.

To make this vision of democracy a bit more concrete, I think where this argument goes is that the public health system should provide art therapy services to every citizen. We won’t have a society that people feel is “fair” unless we address the psychological roots of feelings of disempowerment and injustice. And while there are certainly some causes of these feelings that are real and can be improved through better policy-making, it is the rare policy that actually improves things for everybody rather than just shifting resources around according to a new alignment of political power, thereby creating a new elite and new grudges. Instead I’m proposing that justice will require peace, and that peace is more a matter of the personal victory of the psyche than of the political victory of one’s party.

on intellectual sincerity

I have recently received some extraordinary encouragement regarding this blog. There are a handful of people who have told me how much they get out of my writing here.

This is very meaningful to me, since I often feel like blogging is the only intellectually sincere outlet I have. I have had a lot of difficulty this past year with academic collaboration. My many flaws have come to the fore, I’m afraid. One of these flaws is an inability to make certain intellectual compromises that would probably be good for my career if I were able to make them.

A consolation in what has otherwise been a painful process is that a blog provides an outlet that cannot be censored even when it departs from the style and mores of academic writing, which I have come up against in contexts such as internal university emails and memos. I’ve been told that writing research memos in an assertive way that reflects my conviction as I write them is counterproductive, for example. It was cited as an auxiliary reason for a major bureaucratic obstacle. One is expected, it seems, to play a kind of linguistic game as a graduate student working on a dissertation: one must not write with more courage than one’s advisors have to offer as readers. To do so upsets the social authority on which the social system depends.

These sociolinguistic norms hold internal to the organization, despite the fact that every researcher may, and is even expected or encouraged to, publish their research outwardly with professional confidence. In an academic paper, I can write assertively because I will not be published without peer review verifying that my work warrants the confidence with which it is written. In a blog, I can write even more assertively, because I can expect to be ignored. More importantly, others can expect each other to ignore writing in blogs. Recognition of blog writing as academically relevant happens very rarely, because to do so would be to acknowledge the legitimacy of a system of thought outside the system of academically legitimized thought. Since the whole game of the academy depends on maintaining its monopoly on expertise, or at least the value of its intellectual currency relative to others, it is very dangerous to acknowledge a blog.

I am unwise, terrifically unwise, and in my youthful folly I continue to blog with the candor that I once used as a pseudonymous teenager. Surely this will ruin me in the end, as now this writing has a permanence and makes a professional impression whose impact is real. Stakes are high; I’m an adult. I have responsibilities, or should; as I am still a graduate student I sometimes feel I have nothing to lose. Will I be forgiven for speaking my mind? I suppose that depends on whether there is freedom and justice in society or not. I would like to think that if the demands of professional success are such that to publish reflective writing is a career killer for an academic, that means The Terrorists Have Won in a way much more profound than any neoconservative has ever fantasized about.

There are lots of good reasons to dislike intellectuals. But some of us are intellectuals by nature. I apologize on behalf of all of us. Please allow us to continue our obscure practices in the margins. We are harmless when ignored.