On February 16, 2017, the European Parliament passed a resolution, “Civil Law Rules on Robotics,” declaring that “the autonomy of robots raises the question of their nature in the light of the existing legal categories or whether a new category should be created, with its own specific features and implications.”1 In other words, perhaps robots (now apparently possessing “autonomy”—more on that dubious claim below) should be placed in some new kind of legal category that specially regards their unique features and characteristics. The European Parliament explained what such a “category” might be, recommending that “all possible legal solutions” be considered. These include:
creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.
This sentence, with its proposed “legal status for robots” and “electronic personality” for “electronic persons,” immediately raised alarms. Although so-called legal electronic personhood was being invoked as a way to assign liability and redress the potential harms that “electronic persons” might cause to human persons, hundreds of artificial intelligence (AI) and robotics experts were worried. They signed an open letter condemning the resolution on technical, ethical, and legal grounds, insisting that “[c]reating a legal personality for a robot is inappropriate whatever the legal status model” applied.2 The controversy led to a small industry of think-pieces and academic articles evaluating electronic legal personhood, typically (but certainly not always) negatively, and the idea of such “electronic personhood” was dropped from subsequent EU proposals.3
But ePersonhood—legal personhood for AI—has not gone away. One of the recent arguments on its behalf is that something strikingly different is happening now in AI-human communication and interaction, accelerated by the roll-out of powerful large language models (LLMs) such as ChatGPT and Claude. Perhaps the strange, auto-generated forms of writing from these LLMs have altered the way language and meaning work. Or perhaps, others have proposed, LLMs are spontaneously developing what has been termed “theory of mind”—a supposedly fundamental ability humans develop in childhood that enables us to understand why another person is acting by reflecting on, or assuming something about, how they are thinking.4 Or, alternatively, maybe LLMs are not developing language or theory of mind at all. Skeptical critics see the apparent evidence of LLMs’ minds as an unintended consequence of the tests, necessarily designed around humans, that are used to evaluate theory of mind.5
Whatever exactly the LLMs are or are not doing to language, even those who resist ePersonhood predict that emerging AI challenges us to change some of the basic concepts we invoke to talk about this new technology. Elena Esposito puts the dilemma this way: “Is what happens in the interaction with algorithms on the web ‘communication,’ or do we need to modify the concept?” The problem, suggests Esposito, is that in these kinds of daily interactions, “one communication partner is an algorithm that does not understand content, meaning, or interpretation.”6 Whether we should perceive and treat a chatbot or another algorithm as a “partner” with whom we are communicating might be the more basic question. On that point, Simone Natale has proposed that a type of “banal deception” is already intrinsic to the way AI is conceived and is foundational to how we interact with it.7 Both Natale and Esposito point to the way AI seems to demand something new from us: that we should think about our interactions with it differently than we are and differently from how we interact with one another.
Yet, in many ways, these discussions of how ePersonhood works are not very new or different at all. They ride effortlessly on the formal track for claims of expansive legal personhood that appear for every kind of potential entity, whether corporations, fetuses, nonhuman animals, or trees. Arguments for AI rights are understood to follow a longstanding process (“another step in the corrective evolution of our legal systems”); they are envisioned as a progressive expansion toward more legal rights not only for human beings but also for nonhuman entities.8 Sometimes these arguments invoke equality claims, as in the case of the emancipation of enslaved persons. Lawrence Solum, in an important early legal article on the topic, intuitively senses that rejecting AI personhood feels “akin to American slave owners saying that slaves could not have constitutional rights simply because they were not white or simply because it was not in the interests of whites to give them rights.”9 He ultimately rejects this comparison, but his moral discomfort lingers. Samir Chopra and Laurence White take a stronger stance against such discrimination in their envisioned future, doubting the benefits of denying legal personhood to autonomous AI agents: “At best it would be a chauvinistic preservation of a special status for biological creatures like us.”10 Their point is that a bias toward human wetware over machine hardware and software would be an unjust result.
In making these arguments, legal theorists such as Solum tend to point either to Christopher Stone’s 1972 article supporting the legal rights of natural objects, such as trees and rivers, or, as do Chopra and White, to the legal standing of corporate persons. But the claims for trees as legal persons and the claims for corporate persons are aligned. Stone’s argument on behalf of “the future of the planet as we know it” relies on corporate and other forms of artificial personhood for its basic premises.11 He also refers to computers in this connection decades before other scholars.12 Solum explicitly embraces Stone’s framing question as his own when asking, “Could an artificial intelligence serve as a trustee?” And, like Stone, he locates corporate personhood as a potential justification for giving legal standing rights to AI (ultimately, Solum rejects the analogy, since corporations still seem to him to require human beings’ rights, privileges, and property).13
But there is an important, even fundamental, difference between Stone’s argument for legal standing for natural objects, like trees and rivers, and the recent considerations of ePersonhood. Stone’s aim is to deploy law to “contribute to a change in popular consciousness,” a version of the social change that Dr. Seuss/Theodor Geisel sought when writing The Lorax. In presenting the long history of moral and legal development, Stone’s point is that the law’s notion of who can hold rights is flexible. It has been evolving since its beginnings in Roman law and has never been offered to all or only human beings. The lawyer’s world “is peopled with inanimate right-holders” like trusts, corporations, nation-states, and ships.14 His presentation of the argument in terms of right-holders and standing, rather than the human or moral qualities of personhood, makes clear his view that legal personhood is a construction, a concept that works for the legal system and the individuals (and values, such as capital and economic development) it serves. The reason to give legal rights to a stream, for Stone, was to make a first step toward transforming how a community understands itself and its values in relation to the environment.
Now, contrast Stone’s arguments with those of recent scholars who treat the AI-corporation alignment more formally. Unlike Stone, they tend to see ePersonhood and corporate personhood as inherently, even inextricably, conjoined. Carla Reyes treats the two kinds of legal claims as interconnected. If laws were to change for “AI personhood,” then the legal norms of corporate personhood would too, in a kind of lockstep pattern.15 Chopra and White put the point more strongly and normatively. Corporate personhood logically sanctions arguments on behalf of AI. If our law provides legal personhood to children, disabled adults, ships, and corporations, then “there is nothing to prevent” law from providing a similar form of legal personhood to “artificial agents.”16 Other scholars suggest that the law has already reached that point. Limited liability corporations (LLCs) and other contemporary business forms seem “flexible enough” to provide legal status for computer programs and robots.17
The space between Stone’s older argument and these new ones looks small but generates consequential differences. Legal standing is a construction of and for the efficient workings of the court system; it is what allows you to have your grievance addressed without having first to justify that the court should listen to your complaint. Stone is suggesting that this constructed quality is key to how we might transform law for the better. People (lawyers, judges, activists) ought to be able to deploy legal standing for rivers, lakes, and so on in order to put their environmental values into practice. In contrast, recent arguments for AI legal personhood function quite differently. These look more like claims about how human beings actually interact with AI and, consequently, how AI should be treated by us. Ryan Abbott, for example, does not advocate for ePersonhood. But he does propose “that as AI increasingly occupies roles once reserved for people, AI will need to be treated more like people, and sometimes people will need to be treated more like AI.”18 This is a “need” that, for Abbott, derives from a functional characteristic of AI-human dealings. A phenomenon occurring in the world determines what “will need” to happen in law.
Or consider the work of Anna Beckers and Gunther Teubner, who see the emergence of a human-AI combinatory person, a sort of digital hybrid that (apparently) can act collectively, as an evolving yet undeniable reality for the law. When humans use software algorithms to, say, finish emails, sell stocks, or drive cars, “[t]he ability of non-humans to act is drastically expanded”; these “algorithms can participate (at least indirectly) in political negotiations, economic transactions, and legal contracting.” In the context of such “human-algorithm associations,” the action we attribute to the algorithm “constitutes it as a person.”19 For these scholars, what matters when determining personhood is an attribution or reflection of a behavioral reality: how we engage and work with AI and what attitudes or positions we, human beings and the law both, take toward algorithms. What matters, in other words, is not whether or not we are intending to give AI legal standing for some further aim or end but how we are functioning with them now. If we are positioned toward them as if they were entities that intend to act in the world, then they ought to be treated as such.
What matters, in other words, is something like our “intentional stance” toward them. The phrase belongs to the philosopher Daniel Dennett, who coined it in the 1970s to capture how our ways of predicting things in the world change when we are dealing not just with natural or mechanical objects but also with sophisticated tools—such as computers—that are deliberately fashioned by humans. Dennett is thinking particularly about the attitude one takes in order to win while playing against a chess computer program—specifically, when you ascribe to that program rationality, predictability, and goal-oriented behavior because it has been designed (again, by some human being) to thrash you at chess. In taking an intentional stance toward a computer, says Dennett, we are not saying “that intentional systems really have beliefs and desires, but that one can explain and predict their behavior by ascribing beliefs and desires to them.”20 As he explains in a later book, you are assuming that the computer is “not an idiotic, self-destructive chess player,” but a good enough one, and so “[y]ou treat it … as if it were a human being with a mind,” which means you anticipate and try to understand its moves. That is the key to Dennett’s intentional stance: a deliberate “as if” for practical purposes. It is a perspectival and attitudinal position that ascribes something like human mind to the computer, designed as a strategy to interpret “the behavior of an entity (person, animal, artifact, or whatever).”21
But Dennett’s “as if” can be a bit slippery and ambiguous. At times, it seems as if he is sliding between, on the one hand, treating a computer as if it has intentions because that is a useful pretense (a sort of convenient game) and, on the other, declaring that “some computers undeniably are intentional systems” because that pretense has been empirically shown to work in real-life situations. Both claims are in use, although it’s not always clear which one is meant. He observes, for example, how “interesting” it is “to see just how much of what we hold to be the case about persons or their minds follows directly from their being intentional systems.” These intentional systems turn out to be the larger category that human persons—usually—are part of, and we are the sorts of intentional systems that can communicate using language. Yet in “extreme cases,” such as “the insane,” we might abandon the intentional stance toward a human being entirely—while presumably maintaining it toward our MacBook.22 In terms of their status as intentional systems, then, human beings would seem to be on a fluctuating continuum with computers. Both exhibit shifting levels of functional intentionality, with your MacBook edging out ahead if a human being is mentally ill and acting erratically enough.
The philosopher Jennifer Hornsby observes some of the problems emerging here. The point for Dennett of looking at both computers and human beings as intentional systems is that it permits Dennett to retain “the standpoint of the physical sciences.” When you look at human beings as intentional systems, as Dennett does, they can be imagined as just another one of those systems that you can treat as predictable (like thermostats or calculators). But the obvious reason it would even make sense to take an intentional stance toward a computer is being lost here. We take the intentional stance because these objects were intended, and intended to be used, by human beings: “these are persons’ artifacts.”23 Without persons somewhere in the mix, a thermostat’s intentionality as an artifact is meaningless.
Nonetheless, Dennett’s slipperiness about the “as if” emerges in discussions of ePersonhood. Solum adapts parts of Dennett’s position to think through his argument for legal personhood for AI. He agrees with Dennett that we might very well have good practical reasons “to take the intentional stance toward AIs that we encountered in our daily lives.” And he also enlarges on this point. It is no great leap “to extend this way of talking about AIs in general to the particular AI that was claiming the rights of constitutional personhood.” In other words, Solum suggests that taking the “intentional stance” toward computers could very well lead to compelling legal claims for ePersonhood. Writing in the 1990s, Solum also predicts that the judges and juries of his day would be skeptical about such claims. Presumably that’s because their envisioned AI models resembled thermostats or calculators, rather than generative chatbots or self-driving Teslas. But Solum allows that “if interaction with AIs exhibiting symptoms of complex intentionality (of a human quality) were an everyday occurrence, the presumption might be overcome.”24 Constitutional personhood for AI would be on the table as a viable argument.
Legal theorists Chopra and White also invoke Dennett’s “as if” perspective to justify ascribing agency to AI. When we imagine or take a stance toward a computer application and say to ourselves that “the bot wanted to find me a good bargain,” we are “adopt[ing] the intentional stance” toward this algorithm, what they term an “artificial agent.” Such artificial agents could and should be considered “intentional agents” if treating them this way “leads to the best interpretation and prediction of its behavior.” In other words, for them, Dennett’s philosophical thought experiment justifies interpreting algorithms as intentional actors in a legal system. They propose that the legal notion of a relationship between principal and agent is at work in these scenarios. Algorithms can be understood as agents acting on and for, essentially, their bosses (their principals) with duties and obligations to those principals.25 Again, this is where we can see both versions of Dennett’s “as if” in play: bots are intentional agents as pretense, but their agency is also a provable reality. While the agency for AI that Chopra and White promote is not exactly identical to an argument for AI legal personhood, their strong notion of agency is undeniably a necessary building block for such personhood. Their immediate pivot to business corporations (“as subjects of the intentional stance”) underscores that point.
It is worth emphasizing just how radical this line of argument has become. (More recently, even Dennett himself came around to the dangers of what he calls “counterfeit digital people,” even suggesting that the companies that create them should be punished with harsh sanctions.)26 Contemporary theorists of ePersonhood and its various forms take a position on AI’s supposed capacity to intend and act in order to determine how to think about what AI fundamentally is. From there, they make an argument about how society is obliged, morally or legally, to treat it. This is a completely different kind of argument from Stone’s defense of standing for trees on the basis of how a legal system reflects our own values. Fundamentally, this is the difference between thinking of AI as a potentially useful instrument and property that enables you to realize your values and intentions in the world versus AI as a potential collaborator and legal person with its own will and intentions you need to respect. Which is true? A decision has to be made. As Mary Midgley, a philosopher still not read and appreciated nearly enough, once observed on precisely this point, “It is not possible to treat something as both a tool and a colleague.”27
* * *
By considering various sources for, and problems with, the tempting equivalence of AI imagined on the model of human persons, we begin to see what is wrong with the comparison. Others have noticed problems, too, and have recommendations. Nadia Banteka proposes that law should resist its tendency to analogize when faced with AI entities and “resort to empirical analysis instead.”28 To some, the idea of the intelligent machine has also seemed a better term for the capacities and situation of AI. And yet these personhood analogies remain incredibly tempting. It is hard to resist projecting a human-like intelligence onto ChatGPT’s responses to our queries, despite the evident problems doing so raises. There is, it seems, more to diagnose about this situation.
The persistence of this phenomenon depends on a kind of “personhood” illusion or self-deception occurring within our language itself. That is, the mere act of describing AI often seems to produce the phenomenon of a human-like personhood. It is an illusion so foundational to our thinking and experiencing that it is difficult to perceive it at work. When Wittgenstein writes of the “constant surprise at the new tricks language plays on us when we get into a new field,” he alerts us to this tendency.29 To see these tricks in action, consider responses to the recent unveiling of ChatGPT and other LLMs. Blaise Agüera y Arcas, a Vice President at Google who led the machine intelligence effort there, writes that these new algorithms are indeed “bullshitting” because we have asked them to. Yet, he claims, they are also “learning a great deal of embodied knowledge” through Wikipedia, Reddit, and online material they are trained on, all written by embodied humans. His idea is that LLMs are (somehow) “learning” human beings’ “embodied knowledge” because such knowledge hitches a ride on a Wikipedia page and is caught in our language, whether we like it or not. One might wonder: What could such “embodied knowledge” mean? How does an LLM learn by feeling and doing? At any rate, an advanced LLM, he continues, “also forms models of us. And models of our models of it. If, indeed, it is the right pronoun.”30 What Agüera y Arcas assumes here is that LLMs could, in fact, “bullshit” when prompted, as well as “learn,” “form,” “hallucinate,” “model,” and “model us,” all terms that further imply the possibility of knowing and distinguishing truth from lies, reality from fiction, data from abstract representations, and so on.
Increasingly, skeptics of this account of AI have pushed back on such humanizing language and have tried in different ways to show how and where mistakes are emerging. They suggest that we are confusing the fact that LLMs can generate results that we cannot predict with the conclusion that, because of this unpredictability, LLMs must have something like a will or a mind. Think about how we might respond to Agüera y Arcas’s claim that when we are interacting with one of these advanced algorithms, we tend to “automatically construct a simplified model of our interlocutor as a person,” and that the LLM also, reciprocally, “forms models of us.” This is very close to Dennett’s “intentional stance” put into practice. “Like a person,” the LLM can “surprise us” in these moments, says the Google VP, a capacity “necessary to support our impression of personhood.”31 Although he is not quite saying that they are persons, he is leaving that “impression” wide open.
But there are other, more plausible ways to think about what is happening in a scenario like this one, when an LLM like ChatGPT surprises us with its apparently sentient output. Our pattern-seeking selves imagine deliberate reasons and project intentional behavior where there might only be the effects of unusual happenstance or even simple randomness. In such instances, “autonomy is confused with unpredictability of the result.”32 Random events can be surprising without us identifying the cause of those events as a person—or, at least, calling the cause something besides an “it.” Elena Esposito puts it this way: “What algorithms are reproducing is not the intelligence of people but the informativity of communication.” We are conflating the two things when we describe them interchangeably. That is, it is not that we are facing a brilliant computer that has learned to think like us. We are, rather, facing a tool that has been trained “to participate in communication,” enabling it “to react appropriately and generate information in their interaction with other participants.”33 It is a simulation, an “as if” of a conversation, not an actual dialogue. But likely because conversation is so formative, fundamental, and predictable for human beings—so intrinsic to our social selves—even simulations of it can seem real enough and lead us to attribute intentionality to the algorithms generating it.
Consider a recent example of a social robot. “Ai-Da” was devised and financed in 2019 by gallerist Aidan Meller and curator Lucy Seal and built by Engineered Arts, “the UK’s leading designer and manufacturer of humanoid entertainment robots.”34 Ai-Da is “the world’s first ultra-realist robot artist,” according to the robot’s website. “She is a performance artist, designer, and poet,” able to “captivate audiences with her unique blend of art, technology, and trans-humanism.” These descriptions present Ai-Da as a feminine, autonomous creator intentionally operating in the art world, following in the familiar model of the avant-garde genius, one ahead of “her” time. Supporting this account, the website presents the robot’s exhibitions, speeches, artistic displays, and press briefings, whether at the Oxford Union, Parliament, 10 Downing Street, the Venice Biennale, or the UN. Moreover, the robot’s technological enhancements are presented as “her” creative growth: “Ai-Da’s new painting style” is the result of “her new robotic arm and AI algorithms.”35 The robot’s cameras, bionic hand, and algorithms are often programmed to generate portraits of celebrities, such as Queen Elizabeth and Paul McCartney, as well as “self-portraits” of “herself” while dressed in bohemian smocks and overalls.

Discussions of Ai-Da’s “self-portraits” have accepted this label of its productions and position the machine in a lineage of human self-portrait artists. Gabriella Giannachi observes, first, that contemporary human artists such as Irene Fenara and Jonathan Yeo use video and surveillance technology to create self-portraits, before segueing into a discussion of Ai-Da as a “humanoid AI robot-performance artist … capable of drawing people.” The robot “created a series of self-portraits by looking into a mirror with her camera eyes.” While acknowledging that Ai-Da has a creator programming these performances, Giannachi explains that the questions it raises are about control and selfhood, with the robot imagined as following on a continuum with earlier video artists.36 Giannachi is thus positioning Ai-Da as resembling a Westworld character in the flesh, like Bernard faced with his own blueprints and querying his maker and his meaning.

Yet all of these terms used to describe Ai-Da’s movements are deeply misleading and require a broad set of premises about Ai-Da’s supposed personhood, chief among them that Ai-Da is a contemporary performance artist, adept in the history of Western art, who can draw passably and is capable of making art by looking into a mirror with “camera-eyes.” The question not asked is whether Ai-Da actually has the capacity—let alone awareness, consciousness, or intention—to do any of these things attributed to “her.” Obscuring these challenges, scenes of Ai-Da presented as “drawing” or “painting” borrow heavily from the long history of portraiture, as well as representations of self-portraiture, and rely on that backstory to fill out “her” persona. Promotional photos of Ai-Da present the robot “as a lonely figure in a Victorian atelier, surrounded by its drawings and paintings,” a setting that adds “to the myth of a lone genius artist.”37 Even the robot’s outfits fit the part. And in some of these photographs, the viewer is positioned behind the robot’s head, facing a mirror, as if we could be looking from “her eyes” and seeing from “her” imagined subjectivity. It is a mode of point-of-view framing common not in painterly self-portraiture but in film and television depictions of people absorbed in their activities, and so we envision Ai-Da in a similar situation.
The publicity performances and presentations of robots like Ai-Da, as well as advanced LLMs, have prompted AI theorist Johanna Seibt to rightly assert that “social robotics research has a description problem.” By that she means that when we are talking about these types of AI, and particularly when we are trying to get a handle on what happens when people interact with AI like Sophia or programs like ChatGPT, we tend to use “common verbs for human capacities” like “‘ask’, ‘answer’, ‘greet’, ‘remind’, ‘recognize’, ‘guide’, ‘teach’, ‘observe’, etc.” In discussions of Ai-Da, we might add, the terms frequently include words like “look,” “create,” “draw,” “paint,” “design,” and “write.” Relying on this kind of vocabulary is not a benign phenomenon, argues Seibt, because each word inherently implies an essential human trait like feeling or intentionality. Using these terms confuses us about what is actually happening in these interactions or what we are really observing in a photograph of Ai-Da supposedly gazing at herself and drawing a self-portrait. For Seibt, even designating robots as “working” is problematic because they cannot experience the phenomenon of laboring the way we do (with all the satisfaction, annoyance, and potential exploitation it entails), and so all the issues intrinsically connected with work are left out. She sums up: “responsible robotics and AI begins at the linguistic level” because this language seeps into all our other thinking about AI.38 From Seibt’s perspective, we have not even begun to address this problem.
Perhaps calling the issues with social robotics “a description problem” sounds like downplaying its seriousness, as if once the question of what to call Ai-Da’s “drawings” is worked out, everything else will fall into place, and our conceptions of AI will be corrected for good. But the point of Seibt’s critique about our AI talk, and the point I am also developing here, is that these misunderstandings and incorrect judgments happen early, carelessly, and often. They are not easy to fix. One of the most brilliant philosophers of AI, John Haugeland, saw aspects of this tendency as a result of our “human chauvinism,” a sort of innate, self-centric prejudice that was simply “built into our very concept of intelligence.” Even when we apply the notion of astuteness to other kinds of entities, we cannot peel it away from its human source because we have no other source; we lack any other concept with which we could possibly replace “human intelligence.” “[I]f we escaped our [anthropomorphic] ‘prejudice,’ we wouldn’t know what we were talking about.”39 When we use the language of human cognition and “intelligence” to describe what we imagine is happening with advanced AI, it seems to be the closest approximation we could make. We do so even though some recent AI experts have seriously questioned the structural similarities between our brains and artificial neural networks and suspect that the entire “rhetoric of anthropomorphism” applied to this topic is fundamentally incorrect and “can do more harm than good.”40
More broadly, the deceptive language of robotics is really a symptom of the tremendous, still unbridgeable gap between human life and our AI reality. Haugeland puts it this way: “Trying to explain thought and reason in cybernetic terms is as hopeless and misguided as trying to explain it in terms of conditioned reflexes or Hume’s gravity-like association of ideas.”41 In other words, it is like trying to use “numbers or vectors” (today we would say “complex algorithms”) to describe “gardening”—to describe not only what the activity called “gardening” is and what it means anthropologically to our society now but also why any individual person does it, how it fulfills their life, and so on. No doubt there are algorithms that could capture realistic aspects of “gardening,” but they could not capture this filled-out meaning of gardening.
We imagine, because so much science fiction has shown us this path, that ChatGPT and Ai-Da are approximating us more and more with every technological upgrade. That fantasy was the basic plot of Westworld (along with most Black Mirror episodes). But the reality is that the obvious, daily failures and fragilities of ChatGPT and Ai-Da are an entirely predictable part of what they are. How often, each day, does Siri fail us? We recognize, all the time, all the ways these algorithms are very stupid (or, more accurately, “stupid”). Our more likely reality is that even the most basic living organisms communicate in ways exponentially more complex than any of our AI creations can mimic, and we still do not fully understand the mechanisms of their cell signaling. Yet, unlike a single-celled organism, “what the machine becomes,” and that it becomes anything at all, is “absolutely up to humans.”42
Adapted from The Problem of Personhood: Giving Rights to Corporations, Trees, and Robots (Verso, 2026).
Notes
Whatever exactly the LLMs are or are not doing to language, even those who resist ePersonhood predict that emerging AI challenges us to change some of the basic concepts we invoke to talk about this new technology. Elena Esposito puts the dilemma this way: “Is what happens in the interaction with algorithms on the web ‘communication,’ or do we need to modify the concept?” The problem, suggests Esposito, is that in these kinds of daily interactions, “one communication partner is an algorithm that does not understand content, meaning, or interpretation.”6 Whether we should perceive and treat a chatbot or another algorithm as a “partner” with whom we are communicating might be the more basic question. On that point, Simone Natale has proposed that a type of “banal deception” is already intrinsic to the way AI is conceived and is foundational to how we interact with it.7 Both Natale and Esposito point to the way AI seems to demand something new from us: that we should think about our interactions with it differently than we are and differently from how we interact with one another.
Yet, in many ways, these discussions of how ePersonhood works are not very new or different at all. They ride effortlessly on the formal track for claims of expansive legal personhood that appear for every kind of potential entity, whether corporations, fetuses, nonhuman animals, or trees. Arguments for AI rights are understood to follow a longstanding process (“another step in the corrective evolution of our legal systems”); they are envisioned as a progressive expansion toward more legal rights not only for human beings but also for nonhuman entities.8 Sometimes these arguments invoke equality claims, as in the case of the emancipation of enslaved persons. Lawrence Solum, in an important early legal article on the topic, intuitively senses that rejecting AI personhood feels “akin to American slave owners saying that slaves could not have constitutional rights simply because they were not white or simply because it was not in the interests of whites to give them rights.”9 He ultimately rejects this comparison, but his moral discomfort lingers. Samir Chopra and Laurence White take a stronger stance against such discrimination in their envisioned future, doubting the benefits of denying legal personhood to autonomous AI agents: “At best it would be a chauvinistic preservation of a special status for biological creatures like us.”10 Their point is that a bias toward human wetware over machine hardware and software would be an unjust result.
In making these arguments, legal theorists such as Solum tend to point either to Christopher Stone’s 1972 article supporting legal rights for natural objects, such as trees and rivers, or to the legal standing of corporate persons, as do Chopra and White. But the claims for trees as legal persons and the claims for corporations are aligned. Stone’s argument on behalf of “the future of the planet as we know it” relies on corporate and other forms of artificial personhood for its basic premises.11 He also refers to computers in this connection decades before other scholars.12 Solum explicitly embraces Stone’s framing question as his own when asking, “Could an artificial intelligence serve as a trustee?” And, like Stone, he locates corporate personhood as a potential justification for giving legal standing rights to AI (ultimately, Solum rejects the analogy, since corporations still seem to him to require human beings’ rights, privileges, and property).13
But there is an important, even fundamental, difference between Stone’s argument for legal standing for natural objects, like trees and rivers, and the recent considerations of ePersonhood. Stone’s aim is to deploy law to “contribute to a change in popular consciousness,” a version of the social change that Dr. Seuss/Theodor Geisel sought when writing The Lorax. In presenting the long history of moral and legal development, Stone’s point is that the law’s notion of who can hold rights is flexible. It has been evolving since its beginnings in Roman law and has never been offered to all or only human beings. The lawyer’s world “is peopled with inanimate right-holders” like trusts, corporations, nation-states, and ships.14 His presentation of the argument in terms of right-holders and standing, rather than the human or moral qualities of personhood, makes clear his view that legal personhood is a construction, a concept that works for the legal system and the individuals (and values, such as capital and economic development) it serves. The reason to give legal rights to a stream, for Stone, was to make a first step toward transforming how a community understands itself and its values in relation to the environment.
Now, contrast Stone’s arguments with those of recent scholars who treat the AI-corporation alignment more formally. Unlike Stone, they tend to see ePersonhood and corporate personhood as inherently, even inextricably, conjoined. Carla Reyes treats the two kinds of legal claims as interconnected. If laws were to change for “AI personhood,” then the legal norms of corporate personhood would too, in a kind of lockstep pattern.15 Chopra and White put the point more strongly and normatively. Corporate personhood logically sanctions arguments on behalf of AI. If our law provides legal personhood to children, disabled adults, ships, and corporations, then “there is nothing to prevent” law from providing a similar form of legal personhood to “artificial agents.”16 Other scholars suggest that the law has already reached that point. Limited liability corporations (LLCs) and other contemporary business forms seem “flexible enough” to provide legal status for computer programs and robots.17
The space between Stone’s older argument and these new ones looks small but generates consequential differences. Legal standing is a construction of and for the efficient workings of the court system; it is what allows you to have your grievance addressed without having first to justify that the court should listen to your complaint. Stone is suggesting that this constructed quality is key to how we might transform law for the better. People (lawyers, judges, activists) ought to be able to deploy legal standing for rivers, lakes, and so on in order to put their environmental values into practice. In contrast, recent arguments for AI legal personhood function quite differently. These look more like claims about how human beings actually interact with AI and, consequently, how AI should be treated by us. Ryan Abbott, for example, does not advocate for ePersonhood. But he does propose “that as AI increasingly occupies roles once reserved for people, AI will need to be treated more like people, and sometimes people will need to be treated more like AI.”18 This is a “need” that, for Abbott, derives from a functional characteristic of AI-human dealings. A phenomenon occurring in the world determines what “will need” to happen in law.
Or consider the work of Anna Beckers and Gunther Teubner, who see the emergence of a human-AI combinatory person, a sort of digital hybrid that (apparently) can act collectively, as an evolving yet undeniable reality for the law. When humans use software algorithms to, say, finish emails, sell stocks, or drive cars, “[t]he ability of non-humans to act is drastically expanded”; these “algorithms can participate (at least indirectly) in political negotiations, economic transactions, and legal contracting.” In the context of such “human-algorithm associations,” the action we attribute to the algorithm “constitutes it as a person.”19 For these scholars, what matters when determining personhood is an attribution or reflection of a behavioral reality: how we engage and work with AI and what attitudes or positions we, human beings and the law both, take toward algorithms. What matters, in other words, is not whether or not we are intending to give AI legal standing for some further aim or end but how we are functioning with them now. If we are positioned toward them as if they were entities that intend to act in the world, then they ought to be treated as such.
What matters, in other words, is something like our “intentional stance” toward them. That phrase belongs to the philosopher Daniel Dennett, who coined it in the 1970s to capture how our ways of predicting things in the world change when we are dealing not just with natural or mechanical objects but also with sophisticated tools—such as computers—that are deliberately fashioned by humans. Dennett is thinking particularly about the attitude one takes in order to win while playing against a chess computer program—specifically, when you ascribe to that program rationality, predictability, and goal-oriented behavior because it has been designed (again, by some human being) to thrash you at chess. In taking an intentional stance toward a computer, says Dennett, we are not saying “that intentional systems really have beliefs and desires, but that one can explain and predict their behavior by ascribing beliefs and desires to them.”20 As he explains in a later book, you are assuming that the computer is “not an idiotic, self-destructive chess player,” but a good enough one, and so “[y]ou treat it … as if it were a human being with a mind,” which means you anticipate and try to understand its moves. That is the key to Dennett’s intentional stance: a deliberate “as if” for practical purposes. It is a perspectival and attitudinal position that ascribes something like a human mind to the computer, designed as a strategy to interpret “the behavior of an entity (person, animal, artifact, or whatever).”21
But Dennett’s “as if” can be a bit slippery and ambiguous. At times, it seems as if he is sliding between, on the one hand, treating a computer as if it has intentions because that is a useful pretense (a sort of convenient game) and, on the other, declaring that “some computers undeniably are intentional systems” because that pretense has been empirically shown to work in real-life situations. Both claims are in use, although it’s not always clear which one is meant. He observes, for example, how “interesting” it is “to see just how much of what we hold to be the case about persons or their minds follows directly from their being intentional systems.” These intentional systems turn out to be the larger category that human persons—usually—are part of, and we are the sorts of intentional systems that can communicate using language. Yet in “extreme cases,” such as “the insane,” we might abandon the intentional stance toward a human being entirely—while presumably maintaining it toward our MacBook.22 In terms of their status as intentional systems, then, human beings would seem to be on a fluctuating continuum with computers. Both exhibit shifting levels of functional intentionality, with your MacBook edging out ahead if a human being is mentally ill and acting erratically enough.
The philosopher Jennifer Hornsby observes some of the problems emerging here. The point for Dennett of looking at both computers and human beings as intentional systems is that it permits him to retain “the standpoint of the physical sciences.” When you look at human beings as intentional systems, as Dennett does, they can be imagined as just another one of those systems that you can treat as predictable (like thermostats or calculators). But the obvious reason it would even make sense to take an intentional stance toward a computer is being lost here. We take the intentional stance because these objects were intended, and intended to be used, by human beings: “these are persons’ artifacts.”23 Without persons somewhere in the mix, a thermostat’s intentionality as an artifact is meaningless.
Nonetheless, Dennett’s slipperiness about the “as if” emerges in discussions of ePersonhood. Solum adapts parts of Dennett’s position to think through his argument for legal personhood for AI. He agrees with Dennett that we might very well have good practical reasons “to take the intentional stance toward AIs that we encountered in our daily lives.” And he also enlarges on this point. It is no great leap “to extend this way of talking about AIs in general to the particular AI that was claiming the rights of constitutional personhood.” In other words, Solum suggests that taking the “intentional stance” toward computers could very well lead to compelling legal claims for ePersonhood. Writing in the 1990s, Solum also predicts that judges and juries of his era would be skeptical about such claims. Presumably that’s because their envisioned AI models resembled thermostats or calculators, rather than generative chatbots or self-driving Teslas. But Solum allows that “if interaction with AIs exhibiting symptoms of complex intentionality (of a human quality) were an everyday occurrence, the presumption might be overcome.”24 Constitutional personhood for AI would be on the table as a viable argument.
Legal theorists Chopra and White also invoke Dennett’s “as if” perspective to justify ascribing agency to AI. When we imagine or take a stance toward a computer application and say to ourselves that “the bot wanted to find me a good bargain,” we are “adopt[ing] the intentional stance” toward this algorithm, what they term an “artificial agent.” Such artificial agents could and should be considered “intentional agents” if treating them this way “leads to the best interpretation and prediction of its behavior.” In other words, for them, Dennett’s philosophical thought experiment justifies interpreting algorithms as intentional actors in a legal system. They propose that the legal notion of a relationship between principal and agent is at work in these scenarios. Algorithms can be understood as agents acting on and for, essentially, their bosses (their principals) with duties and obligations to those principals.25 Again, this is where we can see both versions of Dennett’s “as if” in play: bots are intentional agents as pretense, but their agency is also a provable reality. While the agency for AI that Chopra and White promote is not exactly identical to an argument for AI legal personhood, their strong notion of agency is undeniably a necessary building block for such personhood. Their immediate pivot to business corporations (“as subjects of the intentional stance”) underscores that point.
It is worth emphasizing just how radical this line of argument has become. (More recently, even Dennett himself came around to the dangers of what he calls “counterfeit digital people,” even suggesting that the companies that create them should be punished with harsh sanctions.)26 Contemporary theorists of ePersonhood and its various forms take a position on AI’s supposed capacity to intend and act in order to determine how to think about what AI fundamentally is. From there, they make an argument about how society is obliged, morally or legally, to treat it. This is a completely different kind of argument from Stone’s defense of standing for trees on the basis of how a legal system reflects our own values. Fundamentally, this is the difference between thinking of AI as a potentially useful instrument and property that enables you to realize your values and intentions in the world versus AI as a potential collaborator and legal person with its own will and intentions you need to respect. Which is true? A decision has to be made. As Mary Midgley, a philosopher still not read and appreciated nearly enough, once observed on precisely this point, “It is not possible to treat something as both a tool and a colleague.”27
* * *
By considering various sources for the tempting equivalence of AI imagined on the model of human persons, we begin to see the problems with the comparison. Others have noticed these problems, too, and have recommendations. Nadia Banteka proposes that law should resist its tendency to analogize when faced with AI entities and “resort to empirical analysis instead.”28 To some, “the intelligent machine” has also seemed a better term for the capacities and situation of AI. And yet these personhood analogies remain incredibly tempting. It is hard to resist projecting a human-like intelligence onto the responses of ChatGPT, despite the evident problems such projection raises. There is, it seems, more to diagnose about this situation.
The persistence of this phenomenon depends on a kind of “personhood” illusion or self-deception occurring within our language itself. That is, the mere act of describing AI often seems to produce the phenomenon of a human-like personhood. It is an illusion so foundational to our thinking and experiencing that it is difficult to perceive it at work. When Wittgenstein writes of the “constant surprise at the new tricks language plays on us when we get into a new field,” he alerts us to this tendency.29 To see these tricks in action, consider responses to the recent unveiling of ChatGPT and other LLMs. Blaise Agüera y Arcas, a Vice President at Google who led the machine intelligence effort there, writes that these new algorithms are indeed “bullshitting” because we have asked them to. Yet, he claims, they are also “learning a great deal of embodied knowledge” through Wikipedia, Reddit, and online material they are trained on, all written by embodied humans. His idea is that LLMs are (somehow) “learning” human beings’ “embodied knowledge” because such knowledge hitches a ride on a Wikipedia page and is caught in our language, whether we like it or not. One might wonder: What could such “embodied knowledge” mean? How does an LLM learn by feeling and doing? At any rate, an advanced LLM, he continues, “also forms models of us. And models of our models of it. If, indeed, it is the right pronoun.”30 What Agüera y Arcas assumes here is that LLMs could, in fact, “bullshit” when prompted, as well as “learn,” “form,” “hallucinate,” and “model”—even “model us”—all terms which further imply the possibility of knowing and distinguishing truth from lies, reality from fiction, data from abstract representations, and so on.
Increasingly, skeptics of this account of AI have pushed back on such humanizing language and have tried in different ways to show how and where mistakes are emerging. They suggest that we are confusing the fact that LLMs can generate results that we cannot predict with the conclusion that, because of this unpredictability, LLMs must have something like a will or a mind. Think about how we might respond to Agüera y Arcas’s claim that when we are interacting with one of these advanced algorithms, we tend to “automatically construct a simplified model of our interlocutor as a person,” and that the LLM also, reciprocally, “forms models of us.” This is very close to Dennett’s “intentional stance” put into practice. “Like a person,” the LLM can “surprise us” in these moments, says the Google VP, a capacity “necessary to support our impression of personhood.”31 Although he is not quite saying that they are persons, he is leaving that “impression” wide open.
But there are other, more plausible ways to think about what is happening in a scenario like this one, when an LLM like ChatGPT surprises us with its apparently sentient output. Our pattern-seeking selves imagine deliberate reasons and project intentional behavior where there might only be the effects of unusual happenstance or even simple randomness. In such instances, “autonomy is confused with unpredictability of the result.”32 Random events can be surprising without us identifying the cause of those events as a person—or, at least, calling the cause something besides an “it.” Elena Esposito puts it this way: “What algorithms are reproducing is not the intelligence of people but the informativity of communication.” We are conflating the two things when we describe them interchangeably. That is, it is not that we are facing a brilliant computer that has learned to think like us. We are, rather, facing a tool that has been trained “to participate in communication,” enabling it “to react appropriately and generate information in their interaction with other participants.”33 It is a simulation, an “as if” of a conversation, not an actual dialogue. But likely because conversation is so formative, fundamental, and predictable for human beings—so intrinsic to our social selves—even simulations of it can seem real enough and lead us to attribute intentionality to the algorithms generating it.
Consider a recent example of a social robot. “Ai-Da” was devised and financed in 2019 by gallerist Aidan Meller and curator Lucy Seal and built by Engineered Arts, “the UK’s leading designer and manufacturer of humanoid entertainment robots.”34 Ai-Da is “the world’s first ultra-realist robot artist,” according to the robot’s website. “She is a performance artist, designer, and poet,” able to “captivate audiences with her unique blend of art, technology, and trans-humanism.” These descriptions present Ai-Da as a feminine, autonomous creator intentionally operating in the art world, following in the familiar model of the avant-garde genius, one ahead of “her” time. Supporting this account, the website presents the robot’s exhibitions, speeches, artistic displays, and press briefings, whether at the Oxford Union, Parliament, 10 Downing Street, the Venice Biennale, or the UN. Moreover, the robot’s technological enhancements are presented as “her” creative growth: “Ai-Da’s new painting style” is the result of “her new robotic arm and AI algorithms.”35 The robot’s cameras, bionic hand, and algorithms are often programmed to generate portraits of celebrities, such as Queen Elizabeth and Paul McCartney, as well as “self-portraits” of “herself” while dressed in bohemian smocks and overalls.

Discussions of Ai-Da’s “self-portraits” have accepted this label of its productions and position the machine in a lineage of human self-portrait artists. Gabriella Giannachi observes, first, that contemporary human artists such as Irene Fenara and Jonathan Yeo use video and surveillance technology to create self-portraits, before segueing into a discussion of Ai-Da as a “humanoid AI robot-performance artist … capable of drawing people.” The robot “created a series of self-portraits by looking into a mirror with her camera eyes.” While acknowledging that Ai-Da has a creator programming these performances, Giannachi explains that the questions it raises are about control and selfhood, with the robot imagined as falling on a continuum with earlier video artists.36 Giannachi is thus positioning Ai-Da as resembling a Westworld character in the flesh, like Bernard faced with his own blueprints and querying his maker and his meaning.

Yet, all of these terms used to describe Ai-Da’s movements are deeply misleading and require a broad set of premises about Ai-Da’s supposed personhood, chief among them that Ai-Da is a contemporary performance artist, adept in the history of Western art, who can draw passably and is capable of making art by looking into a mirror with “camera-eyes.” The question not asked is whether Ai-Da actually has the capacity—let alone awareness, consciousness, or intention—to do any of these things attributed to “her.” To obscure these challenges, scenes of Ai-Da presented as “drawing” or “painting” borrow heavily from the long history of portraiture, as well as representations of self-portraiture, and rely on that backstory to fill out “her” persona. Promotional photos of Ai-Da present the robot “as a lonely figure in a Victorian atelier, surrounded by its drawings and paintings,” a setting that adds “to the myth of a lone genius artist.”37 Even the robot’s outfits fit the part. And in some of these photographs, the viewer is positioned behind the robot’s head, facing a mirror, as if we could be looking from “her eyes” and seeing from “her” imagined subjectivity. It is a mode of point-of-view framing common not in painterly self-portraiture but in film and television of people absorbed in their activities, and so we envision Ai-Da in a similar situation.
The publicity performances and presentations of robots like Ai-Da, as well as advanced LLMs, have prompted AI theorist Johanna Seibt to rightly assert that “social robotics research has a description problem.” By that she means that when we are talking about these types of AI, and particularly when we are trying to get a handle on what happens when people interact with AI like Sophia or programs like ChatGPT, we tend to use “common verbs for human capacities” like “‘ask’, ‘answer’, ‘greet’, ‘remind’, ‘recognize’, ‘guide’, ‘teach’, ‘observe’, etc.” In discussions of Ai-Da, we might add that terms frequently include words like “look,” “create,” “draw,” “paint,” “design,” and “write.” Relying on this kind of vocabulary is not a benign phenomenon, argues Seibt, because each word inherently implies an essential human trait like feeling or intentionality. Using these terms confuses us about what is actually happening in these interactions or what we are really observing in a photograph of Ai-Da supposedly gazing at herself and drawing a self-portrait. For Seibt, even designating robots as “working” is problematic because they cannot experience the phenomenon of laboring the way we do (with all the satisfaction, annoyance, and potential exploitation it entails), and so all the issues intrinsically connected with work are left out. She sums up: “responsible robotics and AI begins at the linguistic level” because this language seeps into all our other thinking about AI.38 From Seibt’s perspective, we have not even begun to address this problem.
Perhaps calling the issue with social robotics “a description problem” sounds like downplaying its seriousness, as if once the question of what to call Ai-Da’s “drawings” is worked out, everything else will fall into place, and our conceptions of AI will be corrected for good. But the point of Seibt’s critique about our AI talk, and the point I am also developing here, is that these misunderstandings and incorrect judgments happen early, carelessly, and often. They are not easy to fix. One of the most brilliant philosophers of AI, John Haugeland, saw aspects of this tendency as a result of our “human chauvinism,” a sort of innate, self-centric prejudice that was simply “built into our very concept of intelligence.” Even when we apply the notion of astuteness to other kinds of entities, we cannot peel it away from its human source because we have no other source; we lack any other concept with which we could possibly replace “human intelligence.” “[I]f we escaped our [anthropomorphic] ‘prejudice,’ we wouldn’t know what we were talking about.”39 When we use the language of human cognition and “intelligence” to describe what we imagine is happening with advanced AI, it seems to be the closest approximation we could make. We do so even though some recent AI experts have seriously questioned the structural similarities between our brains and artificial neural networks and suspect that the entire “rhetoric of anthropomorphism” applied to this topic is fundamentally incorrect and “can do more harm than good.”40
More broadly, the deceptive language of robotics is really a symptom of the tremendous, still unbridgeable gap between human life and our AI reality. Haugeland puts it this way: “Trying to explain thought and reason in cybernetic terms is as hopeless and misguided as trying to explain it in terms of conditioned reflexes or Hume’s gravity-like association of ideas.”41 In other words, it is like trying to use “numbers or vectors” (today we would say “complex algorithms”) to describe “gardening”—to describe not only what the activity called “gardening” is and what it means anthropologically to our society now but also why any individual person does it, how it fulfills their life, and so on. No doubt there are algorithms that could capture realistic aspects of “gardening,” but they could not capture this filled-out meaning of gardening.
We imagine, because so much science fiction has shown us this path, that ChatGPT and Ai-Da are approximating us more and more with every technological upgrade. That fantasy was the basic plot of Westworld (along with most Black Mirror episodes). But the reality is that the obvious, daily failures and fragilities of ChatGPT and Ai-Da are an entirely predictable part of what they are. How often, each day, does Siri fail us? We do realize, all the time, all the ways that these algorithms are very stupid (or, more accurately, “stupid”). Our more likely reality is that even the most basic living organisms communicate in ways exponentially more complex than any of our AI creations can mimic, and we still do not fully understand the mechanisms of their cell signaling. Yet, unlike a single-celled organism, “what the machine becomes,” and that it becomes anything at all, is “absolutely up to humans.”42
Adapted from The Problem of Personhood: Giving Rights to Corporations, Trees, and Robots (Verso, 2026).
Notes