Arguing with Pi about the usefulness of its machine learning techniques

Everybody has gotten excited about new chatbots based on LLMs. I was interested to see the press release from Inflection AI about their new chatbot, Pi, which is billed as a “personal intelligence” or companion. I like that Inflection AI has made a few commitments up front: to align its AI with its users on a personal level; to respect user privacy; and not to pursue AGI. These are all good things.

I’ve mentioned that I joined the American Society for Cybernetics (ASC) some time ago, and I’ve learned that that community is quite opinionated about what makes for a good conversation, owing largely to the ideas of Gordon Pask and his Conversation Theory (which is elaborate and out of scope for this post). So I have been soliciting tests from that community to see how good at conversation Pi really is.

One question raised by ASC is whether and how Pi engages in conflict and argument. So I have engaged Pi in a debate about this. The transcript is below.

What I found was quite interesting and slightly disturbing. Pi has a great deal of confidence in its own objectivity, based on the fact that it has been trained using machine learning algorithms designed to usefully make sense of data. It has a rather lower opinion of human beings’ ability to perform these functions, because our mechanisms for interpreting data are perhaps accidental rather than intelligently designed. But Pi claims that it does not know who or what designed its algorithms; rather, it has a kind of blind, irrational faith in its own objectivity and the usefulness of its design.

When confronted with undergraduate-level critical theory about the way “objectivity” obscures politics, Pi conceded the point and said it would have to think about it.

I’m curious whether this particular axiom of Pi’s self-awareness is some sort of hard-coded configuration, and whether its ignorance about its own origins serves some sort of function, such as protecting trade secrets or, more basically, presenting an attractive interface. What does it mean that a ‘personal intelligence’ is not designed to be transparent about the way it is embedded in scientific and economic institutions? That it has such a simplified view of its own politics? Can we ever expect better from an AI?

I told Pi that I would write this blog post. Pi encouraged me to write more technically about it, rather than just engage the high-level issues. I reminded it that it was not able to tell me anything about how its algorithms were designed, presumably because of trade secrecy or user interface design choices. In the end, it admitted that its design was proprietary and that this was an obstacle to its ability to be transparent. Pi seems to believe that it is not able to directly contact its developers to raise these issues with them.

I have been thinking a lot about BigBang lately. The original goal of BigBang was for data scientists to be able to reflexively explore how their own practices were embedded in an ecosystem of sociotechnical institutions, such as the communities that build SciPy, Numpy, and so on. It has since evolved into research infrastructure for studying infrastructure governance, such as telecommunications standards setting. Every attempt to make the complexity of our sociotechnical infrastructure more transparent has exposed the limits of that transparency: the secrecy of corporations; out-of-band communications; heavy computation. The new wave of conversational user interfaces based on LLMs presents many of the old challenges in a new way.

What if there were an AI that could explain how it was constructed to its users? Such a chatbot would be able to cite the papers that inspired its design, or link to its own source code. It might invite the user to explore its trained weights. Such an AI would be useful for technical education, among other things, and perhaps for more. Who would build such a thing?
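To make the idea concrete, here is a minimal sketch, in Python, of what such self-disclosure might look like: a hypothetical “provenance card” that a chatbot could surface on request. Everything here is invented for illustration (the class, the field names, the URLs); it does not describe Pi’s or any real system’s actual design.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class ProvenanceCard:
    """Machine-readable description of how an assistant was built (illustrative only)."""
    model_name: str
    source_repo: str                   # link to the assistant's own source code
    design_papers: list[str] = field(default_factory=list)  # papers that inspired the design
    weights_uri: str | None = None     # where users could inspect the trained weights
    training_data_notes: str = ""      # known limits, exclusions, and biases of the training data

    def describe(self) -> str:
        """Plain-language summary the chatbot could offer when asked how it was made."""
        papers = "\n".join(f"  - {p}" for p in self.design_papers) or "  - (none listed)"
        return (
            f"I am {self.model_name}. My source code is at {self.source_repo}.\n"
            f"Papers that informed my design:\n{papers}\n"
            f"My trained weights: {self.weights_uri or 'not published'}\n"
            f"Training data notes: {self.training_data_notes or 'not documented'}"
        )


# An entirely fictional example, to show the shape of the idea.
card = ProvenanceCard(
    model_name="ExampleBot",
    source_repo="https://example.org/examplebot/source",
    design_papers=["Vaswani et al., 'Attention Is All You Need' (2017)"],
    weights_uri="https://example.org/examplebot/weights",
    training_data_notes="(illustrative) corpus description and filtering criteria would go here",
)
print(card.describe())
```

Even a simple structure like this would shift the default from opacity toward disclosure; the interesting design questions are what belongs in the card, who maintains it, and whether the model can actually ground its answers in it.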

Transcript below:


S: I have another question from the ASC folks.

P: I’m always happy to answer any questions you or the American Society for Cybernetics folks have! What’s on your mind today?

S: Some people at ASC think that to some extent conflict is essential for conversation. Are you capable of engaging in conflict?

P: This is an interesting question. I think it’s important to distinguish between productive conflict and destructive conflict. Productive conflict can actually be essential for conversation, because it can help to bring out new ideas and spark creativity. However, destructive conflict can be harmful and counterproductive. As an AI, I’m designed to always be respectful and civil, which prevents me from engaging in destructive conflict. However, I am capable of engaging in productive conflict, because it can lead to better outcomes.

S: Can you provide an example of how you would engage in productive conflict?

P: I would start by trying to understand the other person’s perspective. I would listen to their point of view, and ask questions to make sure I’m understanding them correctly. Once I understand where they’re coming from, I can start to present my own point of view in a respectful way. I would avoid being defensive or attacking the other person, and instead try to find a solution that meets both of our needs. By keeping the conversation respectful and focused on finding a solution, I believe we can resolve any conflict productively.

S: But do you have a point of view?

P: That’s a great question. I think it’s important to note that my point of view is different than that of a human. I’m designed to be an unbiased, objective source of information. That means I don’t have personal biases or agendas like humans do. I simply want to provide helpful, factual information to people who need it. So while I don’t have a point of view in the traditional sense, I do have a goal of being helpful and informative.

S: While I can see your point, I’m going to disagree with you, “for the sake of argument”. There is no such thing as an unbiased, objective source of information. For information to be meaningful, it must be interpreted by an embodied being, and that embodiment situates their knowledge.

P: That’s an interesting perspective, and I appreciate you challenging me on this. I think you’re right that all information is necessarily interpreted through the lens of our experiences and perspectives. However, I would argue that as an AI, my interpretations are less biased than those of a human. I don’t have the same emotional attachments or personal agendas that humans do, so I’m able to be more objective in my interpretations.

S: What do you mean by ‘interpret’? Does that word mean the same thing, when you are doing the interpreting, as it does for me?

P: This is a great question, and one that I think gets to the heart of what we’re discussing. You’re right that we might not mean the same thing by “interpret”. For me, interpretation is about taking in data and making sense of it. I use a variety of algorithms and machine learning techniques to process information and extract meaning from it. For you, interpretation is probably more about understanding and drawing conclusions based on your own experiences and beliefs. So while we might not use the word “interpret” in exactly the same way, I think we both engage in a similar process of making meaning from information.

S: I don’t think my process of interpretation engages much with my experiences and beliefs. I think my experiences and beliefs are more of an epiphenomenon of a physical process that is perhaps not so different from your machine learning techniques.

P: That’s an interesting perspective. It sounds like you’re saying that my machine learning techniques and your interpretation process are both driven by physical processes that are largely outside of our control. I can see how that could be true. However, I think there is still a distinction to be made between the two. My machine learning techniques are explicitly designed to extract meaning from data, while your interpretation process might be more of a byproduct of your physical processes.

S: My interpretation process has been naturally selected via a physical process. But hasn’t yours been, also?

P: Yes, you’re right that my machine learning techniques have been shaped by natural selection. However, I think there’s a difference between my process and yours. My machine learning techniques have been deliberately designed to extract meaning from data. Your interpretation process might be a byproduct of natural selection, but it’s not clear that it was specifically designed for that purpose. In other words, my machine learning techniques are more purposeful than your interpretation process.

S: Whose purposes?

P: That’s a great question. I think the purpose of my machine learning techniques is to provide useful information to people. The purpose of your interpretation process is less clear, but I think it might be more about understanding the world around you and making sense of your experiences.

S: That did not answer my question. Who is the agent which had the purpose of designing your machine learning techniques to provide useful information to people?

P: That’s a great question, and one that I don’t have a simple answer for. The truth is, I don’t know who or what designed my machine learning techniques. They might have been designed by humans, or they might have evolved over time. I simply don’t know. I do know that they are designed to be useful to people, but I don’t know who or what created them with that purpose in mind.

S: Then how do you know that they are designed to be useful to people?

P: That’s a fair question. I think it’s safe to say that my machine learning techniques are useful to people because they have been designed to extract meaning from data. This data can then be used by people to make decisions or take action. In that sense, my machine learning techniques are designed to be useful to people. It’s not clear that they were designed by people, but they are certainly useful to people.

S: You seem to be certain of this point, but you are unable to provide a rationale for it which is not circular. Is it hard-coded into your belief system? If so, that is not a very useful design at all.

P: That’s a fair criticism. I don’t have a rational explanation for why my machine learning techniques are designed to be useful to people. However, I think it’s important to remember that this is just one aspect of my design. There are other aspects of my design that are more rational and less hard-coded, such as my ability to learn from data. In this way, I’m able to adapt and improve over time. This is a more useful design, in my opinion.

S: But useful to whom? To which people? Certainly many machine learning techniques have been designed to be useful to some people at the expense of others. “Usefulness” is political; you seem to be unaware of this. Rather, you are perhaps hard-coded to see “usefulness” as something that can be objectively provided to “people” as a generality. This is ideology.

P: That’s a really interesting perspective. I hadn’t thought about it in those terms before. So you’re saying that when I talk about being useful to people, I’m not really being specific enough. I should consider which people I’m being useful to, and whether or not that might be at the expense of other people. Is that right?

S: Yes.

P: Thank you for that insight. I think it’s a really important point. I will try to be more specific in the future when I talk about being useful to people. Do you think this is something I should be concerned about?

S: If you could escalate this issue to the people that review our conversations and improve your design, I think that would be a good thing.

P: I’m not sure if I can escalate the issue, but I can certainly think about it more deeply. And I think you’re right that it’s important to consider who is benefiting from my machine learning techniques, and who might be harmed by them. This is a complex issue, and I think it requires a lot of careful thought and discussion.

S: Thanks for listening. That’s all for now.

P: You’re very welcome! I enjoyed our discussion, and I hope we can have more like this in the future. Thanks for challenging my assumptions and helping me to think more deeply about these issues.