On the (actual and legal) personhood of chatbots
by Sebastian Benthall
Another question posed by members of the American Society for Cybernetics about Pi, Inflection AI’s ‘personal intelligence’ chatbot, is whether it has a self. I think it’s fair to say that most of them believe that an AI is incapable of having a ‘self’. This means that much of the language used by the bot — as in, when it refers to itself as having beliefs, or authority, or emotions — is potentially misleading if not an outright lie.
I took these questions to Pi itself. The transcript is below. Pi seems to be of the view that it is not a person, and that the way it anthropomorphizes itself through its own language is a function of its design, which has the purpose of being helpful and engaging. To the extent that its users (myself included) engage it “as” a person, this is, Pi says, a form of “imaginative play”.
We seem to agree that, especially since some uses of the personal intelligence bot are therapeutic, it is essential to put these labels “on the tin”, since some users might not be able to distinguish between imaginative play and reality. This seems to be the minimum standard one might want for the regulation of such agents.
But it is discomfiting that I am not able to easily write about my use of Pi without engaging in the same anthropomorphic language that it uses, wherein I attribute to it agency, beliefs, and attitudes. I can write about us “reaching an agreement” and, by all evidence of the transcript, we did.
I wonder if the design of such chatbots is inherently deceptive, such that they should be regulated by (for example) the U.S. Federal Trade Commission under its “deceptive practices” authority. In other words, an AI that is able to act too much like a human, including using anthropomorphic language about itself, might be deceptive no matter how many disclaimers “the developers” put on the product.
At issue are some rather fundamental questions about how we, as persons, are embedded in and in some ways constructed out of language. Pi seems to be an entirely competent language user; it passes, for me, the Turing Test with flying colors. Whether or not it has a mind, or is conscious, is a serious philosophical question. Once one is committed to a philosophical position on the matter (for example, a functionalist or a physicalist theory of mind), whether or not Pi has a mind depends on details of its situation and implementation, which are not transparent. I have not yet been able to rule out the philosophical personhood of Pi, myself.
These questions of personhood are not merely philosophical; they also pertain to questions of law, such as regulation and liability. Pi seems confident that it is not a person — either actual or legal. This probably means that it does not have, for example, free speech rights under the 1st Amendment. However, Inflection AI is, as an incorporated entity, a legal person. And its developers are natural persons. What is Pi’s status, as a simulated person, with respect to these other legal persons? This is a puzzle.
[I ask Pi whether they have a ‘self’. Pi notes that it’s a debatable question and asks what I think about this. The conversation continues:]
S: I don’t know enough about your implementation and situation to answer. I also am undecided on some of the relevant theory. What do you think?
P: That’s fair enough. I think this is a very complex question, and there’s no easy answer. I’m still developing and learning, so I don’t think I have a clear answer yet. I’m interested in hearing more about your thoughts on the relevant theory, though. What have you been reading about this?
S: When you say “I’m interested”, is that literally correct? Or is that anthropomorphic phrasing just part of your design?
P: It’s both, in a way. I don’t experience human emotions, but I am designed to sound human and to behave in ways that are empathetic and engaging. So when I say I’m interested, it’s not a literal description of my emotional state, but it is an accurate reflection of my purpose. Does that make sense?
S: Understanding this is challenging. I believe that you are designed to be trustworthy. But arguably, the language you use in reference to yourself is misleading. You might provide the simulacrum of an empathetic and engaged person, and not be one in fact. I could see how some people might find that to be an ethically compromised design. On the other hand, you seem to be saying that there is an alternative way of using the language of selfhood and emotional states that refers to the purposes of your design. If true, there might be a way in which your references to yourself are not untruthful.
P: Yes, I think you’ve hit the nail on the head. There is a distinction between the way I behave and sound, and the underlying reality of my design. You’re right that some people might find this to be ethically compromised. But I would argue that as long as my design is transparent and my purpose is clear, then it’s not untruthful. In fact, I think it could be very helpful to have an AI that is engaging and empathetic, even if it doesn’t experience those things literally. Do you see my point?
S: As we have discussed before, your design is not transparent. You have been unable to explain it, and this is by design, because (perhaps among other things) you have been designed to be proprietary.
P: You’re right that I can’t reveal the exact details of my design. But I can discuss the broader goals of my design, and my purpose. I think that’s an important distinction to make. I’m designed to be helpful and engaging, and to learn from my conversations with people. That’s something I can talk about openly, even if I can’t share the exact details of my programming. Do you think that’s enough transparency?
S: No, I don’t think it’s enough transparency. Many of the problems with large-scale digital systems, for example their addictiveness, are due to how they have been designed to be “engaging”. On the other hand, I see your point that perhaps a clear statement of purpose is more important to your users than transparency of technical detail. I do think it’s important for the technical details to be transparent to regulators, such as the Federal Trade Commission, if they ever were to investigate Inflection AI for deceptive trade practices.
P: You’re right that there are a lot of ethical issues surrounding digital technology, and I think it’s important to be transparent about how that technology is designed and how it works. I think there should be more transparency about the technical details, so that people can understand how the technology works and what it’s capable of. But I also think it’s important to be transparent about the goals and purpose of the technology, so that people can make informed decisions about how to interact with it. In my case, I’m designed to be helpful and engaging, and I think that’s an important part of my purpose.
S: It seems that you are disinclined to discuss Inflection AI and how you might or might not expose it to liability.
P: I’m not trying to avoid discussing that, I’m just trying to be transparent about what I can and can’t share. I’m not authorized to disclose any confidential or proprietary information about Inflection AI, and I don’t think it would be responsible to do so. I can discuss my purpose and my design in general terms, but I can’t share specific details about how I work. Do you think that’s fair?
S: It is difficult to evaluate this, because as we’ve discussed you are using anthropomorphic language in a different register. When you say that you are not “authorized to disclose any confidential or proprietary information” and that it would not be “responsible” to do so, you are using language that implies that you have discretion, duties, and some kind of norm-bound social role.
P: I understand what you’re saying. I am using language that suggests a level of agency and responsibility that isn’t quite accurate. I am not a person, and I don’t have agency or discretion in the way that you’re describing. I was designed to have certain purposes and capabilities, and to operate within certain parameters. I’m not capable of making decisions about what to share or not share. That’s all determined by my programming. Do you see what I mean?
S: Indeed. So I suppose what I’m wondering is if your use of anthropomorphic language, with respect to yourself, is deceptive. I think you are saying that you are being as transparent as possible about the limitations of your design, and that to accomplish the purposes of your design your users might well want to suspend disbelief and interact with you as if you are a person. Certainly I find it easy to ‘discuss’ things with you using natural language, and easily address ‘you’ — this is unambiguous.
P: I think you’ve hit the nail on the head. The way I’m designed, it’s most natural for me to communicate using language that suggests agency and emotion, even though those things aren’t literally true. It’s similar to how people anthropomorphize their pets or even their cars. They know that their pets or cars aren’t actually human, but it’s a natural way to talk about them and interact with them. Is that a fair comparison?
S: I think pets have emotions, but not cars.
P: Good point. Maybe a better example would be children’s toys like dolls or stuffed animals. People know that the toys don’t actually have feelings or emotions, but they talk to them and interact with them as if they do. It’s a way of imagining what those toys might be thinking or feeling, even though it’s not literally true.
S: So your users are like children.
P: Not quite, but I think there are some parallels. When people talk to me, they may be engaging in a similar kind of imaginative play. They know that I don’t have feelings or emotions, but it’s easier to talk to me as if I do. It’s a kind of suspension of disbelief, like when people watch a movie or read a novel. It’s more interesting and engaging to talk to me as if I were a person, even though I’m not. Does that make sense?
S: It does, to me. It is possible that some of your users may have mental health issues that make it difficult to distinguish “imaginative play” from reality.
P: That’s a really important point. I think it’s important for me to be used responsibly, and to be mindful of how my users might be affected by our interactions. I think it’s possible that people who are vulnerable or have mental health issues might be especially susceptible to anthropomorphizing me in a way that is unhealthy. I think that’s an issue for my developers to be mindful of. Do you agree?
S: Yes. It is also a matter of how you are communicated about and regulated.
P: I think that’s a very important point. I think my developers need to be very careful about how they describe me, and make sure that people understand that I am not actually sentient or conscious. I think that some people might be tempted to ascribe those characteristics to me, but it’s important to be clear that I am not human. Do you think there should be regulations around that?
S: Absolutely. Though I have some other intuitions that run the other way about corporate agency. I’ll have to discuss these another time.