
Against Theory, now with bots! On the Persistent Fallacy of Intentionless Speech

If Siri responded to your questions with QAnon conspiracy theories, would you want her answers to be legally protected? Would your verdict change if we labeled Siri’s answers either “computer generated” or “meaningful language”? Or as legal scholars Ronald Collins and David Skover ask in their recent monograph, Robotica: Speech Rights and Artificial Intelligence (2018), should the “constitutional conception of speech” be extended “to the semi-autonomous creation and delivery of robotic speech?”1 By “robotic speech,” they don’t mean some imagined language dreamed up in science fiction but the more ordinary phenomenon of “algorithmic output of computers”: the results of Google searches, instructions by GPS navigational devices, tweets by corporate bots, or responses by Amazon’s Alexa to a query about tomorrow’s weather. And by “the constitutional conception of speech” they are invoking the First Amendment’s fundamental prohibition declaring that “Congress shall make no law … abridging the freedom of speech, or of the press.”2 Collins and Skover deliver their verdict: the U.S. Constitution should recognize and protect so-called “robotic expression,” the computer-generated language of your iPhone or like devices (40).

Free speech protections have evolved into an extremely powerful legal, political, and competitive market tool, even more so in our era of social media. Former President Trump’s lawyers used dubious free speech arguments as a defense against the charge of insurrection at his second impeachment trial. In his lawyer Michael van der Veen’s words, “There is no doubt Mr. Trump engaged in constitutionally protected speech that the House has improperly characterized as incitement of insurrection.”3 Trump was acquitted, of course. And there are obvious financial and competitive reasons why corporations like Apple and Amazon would agree with Collins and Skover’s argument, and also why Google would commission a white paper by libertarian legal scholars Eugene Volokh and Donald M. Falk to make a version of this point.4 If Google’s products are considered not only commodities but also bearers of free speech, then Google would instantly liberate those commodities from potential governmental regulation and control, giving them much freer rein and more competitive power in the marketplace—and, increasingly, in political discourse. Volokh has recently expanded on the free speech claim to argue that decisions handed down by AI judges would be perfectly acceptable.5 Falk spends his time more lucratively, quashing class action lawsuits and defending corporations facing anti-trust litigation.6

But more unexpected is Collins and Skover’s approach. Rather than justifying their defense of “robotic expression” (free speech rights for algorithms) primarily with legal precedent or theory—both of which other legal scholars have done—their basic premise is literary theoretical and interdisciplinary.7 Specifically, to argue for the First Amendment rights of computer content, Collins and Skover adapt Reader Response literary criticism from the 1970s, as well as related debates about literary meaning from the 1980s, to develop an idea they call “intentionless free speech” (40). As they explain it, the current legal debate over robotic free speech “significantly mirrors yesterday’s debate among schools of literary theory over textual interpretation and the reader’s experience,” yet “the importance of the lessons from reader-response criticism and reception theory” has gone unrecognized in legal scholarship (41–42). They summarize how the decades-old criticism of Stanley Fish, Norman Holland, Wolfgang Iser, and Hans Robert Jauss reveals that the “real existence” of a text is imparted by the reader, not by the intention of the author (38). For them that means that your iPhone’s or Amazon Echo’s lack of an intention should not bar a court from finding its “message” to be meaningful, because the iPhone’s owner makes those messages mean. “Meaning resides in the receiver of information,” they write; thus the receiver’s use of that information is the ultimate determinant of an expression’s value (45). Their “theory of ‘intentionless free speech’ is solidly grounded in those lessons” of reader-response criticism (42). As Collins and Skover write, “the receiver’s experience of speech is perceived as an essential dimension of the constitutional significance of speech, whether human or not, whether intended or intentionless” (45).

The concept of “intentionless free speech,” which I’ll discuss at more length below, is obviously key to their argument, and the phrase’s incoherence—not to mention the potential damage attorneys could do by wielding it—prompts my challenge here. To think that intention can be meaningfully severed from free speech is to fundamentally misunderstand both speech and language. Siri and Alexa are advanced computer programs (algorithms) that are also advantageous components of the commodities that the corporations Apple and Amazon are selling. Those programs might function in ways that are useful to us, like any other commodity. But that obvious and productive functionality is neither equivalent to language nor a reason to permit algorithms to benefit from free speech protections. Scholars such as Tim Wu, Oren Bracha, and Frank Pasquale have skillfully disputed the legal arguments supporting algorithmic free speech.8 But the literary theoretical claims at work in Collins and Skover’s account have gone undiscussed—even though such arguments frame the issues involved better than the legal arguments. Moreover, as I argue below, this debate has progressed little since the moment it was mapped out in Steven Knapp and Walter Benn Michaels’s “Against Theory” (1982), even as bots and their intentionless speech continue to gain powerful advocates in both legal and literary studies.

Collins and Skover’s Reader Response legal theorizing works as follows. First Amendment protections for robotic speech expressions should be “grounded in the value of the information that they generate for their receivers” (40), namely, the iPhone user or the Amazon Echo listener, because what really matters in this inquiry is that “the receiver experiences robotic speech as meaningful and potentially useful or valuable” (42). In referring to use and value, they are following some basic First Amendment precedents in cases of nonobscene pornographic speech, which focus on the interpretive experience of “the average person.”9 If Siri answered every one of your queries by responding, repeatedly, “A group of Satan-worshipping elites who run a child sex ring are trying to control our politics and media,” and “the average person” found that response useful or valuable, then, according to their argument, Siri’s output should have free speech protections. Note that an NPR/Ipsos poll conducted at the end of December 2020 found that 17% of U.S. adult respondents agreed that the Satan-worshipping statement was true, 37% did not know enough to say (and so might find the statement “useful”), and a minority—47%—agreed that it was false.10 And, following Robotica arguments, your iPhone’s output should have those free speech protections whether or not “Siri” could be understood as meaning anything at all. If an algorithm randomly spewed words from hundreds of ancient, defunct languages that no one on Earth today could understand, and “the average person” found that experience useful or valuable, presumably that output too would be protected speech.

Although Collins and Skover acknowledge that robotic speech is the result of sophisticated formulas—“algorithmic output” (33)—they nonetheless aim to produce a First Amendment argument for computerized speech that “avoids all normative concerns about the legal personhood or autonomy of robots” (40). That means it would be irrelevant whether or not Siri “shares” the corporate personhood of Apple, Inc. and thus, like all corporate persons, has had recourse to free speech protections since (roughly) the 1970s. In this regard, Robotica is really part of a broader trend, beginning much earlier in the twentieth century, of extending free speech rights as well as many other rights and privileges to entities like corporations that previously were considered out of bounds for such protections.11 Most notoriously, the U.S. Supreme Court decision Citizens United v. Federal Election Commission (2010) equated money with free corporate speech by relying, in part, on the argument that corporations have the legal status of persons and money is their way of speaking. As many commentators have noted, the stakes of these developments are significant and disturbing. Citizens United led to a proliferation of campaign spending in elections by rendering campaign finance restrictions unconstitutional. Critics of Collins and Skover’s account warn that constitutional protections of “robot speech will inure disproportionately to the benefit of the powerful” or will dramatically erode civic discourse, even without granting algorithms legal personhood.12 But one cannot dispute that the courts and legal thinking are moving in the direction that Robotica advocates; moreover, the Reader Response literary theory relied on for that argument is now part of this jurisprudential story.

With that in mind, consider how Collins and Skover understand not only the Reader Response criticism they adopt but also the literary theoretical arguments they dispute, specifically, the claims of Knapp and Michaels in “Against Theory.” Explaining why they are convinced that “the reader is the situs of meaning,” Collins and Skover recount Knapp and Michaels’s “controversial ‘wave poem’ hypothetical” (38). They also slightly modify the example to better fit the legal free speech framework, a modification that attempts to mask the force of the “Against Theory” argument:

A stroller on the beach comes upon what she understands to be a peace symbol in the sand, this at a time of political unrest. As it turns out, the symbol is no more than the result of the silting of sand by ocean tides. At the moment of interpretation, however, does meaning hinge on whether a human or oceanic agent created the symbol? … Think of robotic speech as a somewhat comparable form of wave speech. (38)

Collins and Skover see their hypothetical as revealing that it is immaterial which particular “agent”—human, oceanic, or robotic—creates the symbol, because the identity of the author is not a factor in First Amendment analysis. What matters is that the law protects “the expressive meaning that is substantially (if not entirely) constituted in the minds and experiences of the ‘receiver’” (42). From the Robotica authors’ perspective, it does not matter that “a robot is not a human speaker” and “it should be irrelevant that a robot cannot fairly be characterized as having intentions” (42). The question posed by their (modified) hypothetical requires us to choose between two different “agents”: an intentional one (a human being) or a nonintentional one (the ocean or robot).

No peace sign appears in Knapp and Michaels’s original “Against Theory” hypothetical. Instead, as nonsite readers will recall, the beach walker comes across the first stanza of William Wordsworth’s “A Slumber Did My Spirit Seal.”13 Then, while the walker reads the stanza, a second wave washes up and impresses upon the sand the second verse of the poem. The point of Knapp and Michaels’s example is not to ask (as do Collins and Skover), “does meaning hinge on whether a human or oceanic agent created the symbol?” The point, rather, is that the different kinds of responses you can come up with to account for the phenomenon will fall into only two categories. Either you assume an agent who intends to mean or you assume an account of natural accidents as producing marks that look like words. That is, faced with these two stanzas, the beach stroller can come up with two possible, mutually exclusive kinds of explanations: “you will either be ascribing these marks to some agent capable of intentions (the living sea, the haunting Wordsworth, etc.), or you will count them as nonintentional effects of mechanical processes (erosion, percolation, etc.)” (16). In the first instance, you are in the realm of intention and meaning, albeit a very unlikely version of it. But in the second instance, the marks are really “natural accident[s]” that “merely seem to resemble words” and seem to resemble language and poetry (16). In this case they can only be understood as authorless, and thus as “accidental likenesses of language”—likenesses that are intentionless and, accordingly, meaningless (16). “For the nontheorist, the only question raised by the wave poem is not how to interpret but whether to interpret” (24).

By replacing Wordsworth’s lyrical ballad not with sentences or even with words but with a non-linguistic icon—a peace sign during a time of political unrest—the Robotica authors hope to make intentionless free speech seem a little more plausible. A peace sign does not function exactly like a letter or a word but more like a global symbol, seemingly free of most of the conventions of a particular language’s grammar and syntax. Presumably it’s easier to imagine its (unintentional) production in the world because creating it doesn’t require a competent user of any particular language: it seems to float free of place and person. It’s also easier to imagine a peace sign rather than a full poem appearing through what Knapp and Michaels call a “natural accident,” and easier to imagine spotting a natural accident that looks like a peace sign and mistaking it for an intentionally made mark. In this regard, Collins and Skover’s strategy is not unlike that of P. D. Juhl, in an example discussed by Knapp and Michaels. Juhl also considers instances of “a ‘poem’ produced by chance,” such as one created by a computer (19).

But regardless of which signs are used in the hypothetical—peace sign or English poem—only one factor really matters here. If it is a sign or group of signs (whether word, symbol, icon, or smoke signal) created for the purpose of human communication, then it is not a result of natural accident but an expression of human intention. You might mistake one for the other, natural accident for intentional sign or vice versa, but readers’ tendencies to err don’t change what it is and why it was produced. And if, as in the case of erosion, it doesn’t make sense to ask why it was produced (in the sense not of a cause or evidence but of a justification), then no intention could exist at all.14 In that regard, we are already starting to see the crucial divergences between each pair’s interpretation of their respective hypotheticals. For Knapp and Michaels, the point of the wave poem example is to present, as basically as possible, the complete inescapability of seeking out intention if your aim is interpreting signs, which is to say, if you are trying to understand the meaning of a text. It “is not that there need be no gulf between intention and the meaning of its expression,” they explain. The point is that “there can be no gulf” (17). Thus Robotica embraces literary theory in more ways than one, for “the moment of imagining intentionless meaning constitutes the theoretical moment itself” (15).

From Collins and Skover’s perspective, in contrast, the critical issue that their wave peace signifier presents is a choice between two “agents”—an intentional one (“human”) and a nonintentional one (“oceanic” or “robotic”). Because, they claim, First Amendment jurisprudence doesn’t care about that distinction—more on that assertion below—neither should we. Yet their notion of an unintentional agent already begs the question. Agents act for or in place of another, both in ordinary language and in Anglo-American common and statutory law. And agency law requires consent and intent—by both the principal and the agent—to establish principal-agent associations: “an agency relationship … is always consensual, and its creation is to be determined by the relations of the parties as they exist under their agreements or acts; the ultimate question is one of intent.”15 In other words, for the relevant action to be agential it must be consensual and intentional (in the strong, legal sense of intent), which is to say neither unconscious nor automatic. These fundamental requirements disallow any legal argument attempting to make agents out of unconscious or non-human entities that are constitutively unable to consent or intend. If the “action” of the “agent” is unintentional (“oceanic” or “robotic”) and cannot be consensual, then it is hard to see the point of calling it an action, as opposed to a natural event or an occurrence. And it’s really hard (i.e., impossible) to see how you could call it the action of an agent.16 Whatever else they claim, Collins and Skover must be smuggling in an unstated reliance on some notion of intention as soon as they begin using the notion of agency to describe the marks in question.17

In concert with the “Against Theory” position I’ve been recounting, Stanley Fish (in 2005) shuns his earlier Reader Response theory and throws in his lot with Knapp and Michaels, but with legal theory targets in view.18 Another variation on the wave hypothetical appears, this one involving a rock formation and aimed at the late Justice Antonin Scalia’s notion of originalism. Fish’s rock at first seems to have the word “help” carved into it, but then those marks are revealed to be a natural consequence of erosion:

The moment you decide that nature caused the effect, you will have lost all interest in interpreting the formation, because you no longer believe that it has been produced intentionally, and therefore you no longer believe that it’s a word, a bearer of meaning.

It may look like a word—it may even seem to be more regularly formed as such than the scratchings of someone who is lost—but in the absence of the assumption that what you’re looking at is a vehicle of an intention, you will not regard it as language.19

We might add: even if “you will have lost all interest in interpreting the formation,” you might still find the rock formation interesting for reasons other than interpretation. Those other interests now preoccupy a broad contingent of scholarship in literary studies, such that they have a meta-theory of their own (see: Sianne Ngai, Our Aesthetic Categories: Zany, Cute, Interesting). But if you’re truly committed to following those other interests you won’t be able to interpret the rock formation as language, even though (strangely) you might have to accept and acknowledge it as language first.20 As with Knapp and Michaels’s account, in this version of Fish—post-Reader-Response-Fish, call it Fish-2005—assumptions of intention are built into the recognition of a signifier as a sign. Indeed, the conjecture of intention is precisely what makes the signifier meaningful as a sign—whether that intention is correctly identified, totally misinterpreted, left uncertain, or utterly denied by readers.

You can see the force of these points by considering other recent literary theoretical critiques of “Against Theory” (along with Fish-2005’s embrace of it). Whereas Knapp, Michaels, and Fish-2005 assert that oceanic (or robotic) expression merely resembles words, Toril Moi claims—as do Collins and Skover—that these doppelgangers should be understood as words if someone sees them or uses them as such. “In the situation outlined in the parable,” she writes, “we immediately recognize the words in the sand as words, even as a poem.” Moi wonders how she is “supposed to turn them into marks … [t]hrough a willed forgetting, an instant self-blinding to the meaning of the word in the sand?”21 Like the Robotica authors, she cannot see why one would want to deny that the marks in the sand staring up at you are words if you “recognize” (or accept or acknowledge) them as words. Related arguments have also appeared in the recent work of N. Katherine Hayles, although the “Against Theory” backstory goes unmentioned in her account. Asking whether computers can create meaning, she answers yes, they “are capable of meaning-making practices,” with some caveats.22 That is because, “as a relational operator, a sign creates possibilities for meaning making.”23 Hayles reaches a theoretical position similar to that of Moi, Collins, and Skover by turning the sign itself into the agent (an “operator”) that “creates possibilities for meaning making” in the reader’s brain.24

But to reiterate the main point here: for Knapp and Michaels no envisioned transformation occurring in the reader’s brain—a transformation in which meaningful signs are experienced as meaningless marks—could determine whether or not those signs mean, or what they mean. The moment that Moi understands her options when faced with the wave poem to be either a “willed forgetting” of what she has just read, or an “instant self-blinding” to the meaning of the verses, she has already decided to see those words as intentional regardless of what she claims she is doing. That’s what it means to read them—rather than touch them, or admire them as nature’s creation, or take an fMRI scan of your brain while you stare at them. And while it might be true, as Hayles writes in her final paragraph, that we need “a clear-eyed view of how human umwelten [perceived worlds] overlap with and differ from the nonhuman others with whom we share the planet, including both biological organisms and computational media,” the perceptions of humans, animals, or algorithms in and of those worlds are not what make a sign meaningful.25

The Robotica authors themselves observe (in a footnote) that “Against Theory” anticipated the robotic speech argument, although not its free speech aspects (141–42). As Knapp and Michaels write:

Can computers speak? Arguments over this question reproduce exactly the terms of our example. Since computers are machines, the issue of whether they can speak seems to hinge on the possibility of intentionless language. But our example shows that there is no such thing as intentionless language; the only real issue is whether computers are capable of intentions. (17)

As they go on to explain, unlike intentionless language—which, to be clear, is not possible—the question of whether computers are capable of intentions would need to be addressed empirically. That doesn’t make it easy to answer. It was difficult to determine in 1982 and, despite the nearly four decades and multiple technological revolutions between then and now, it is just as difficult to answer today—although it seems the answer is still no. Even well-known philosopher of AI Daniel Dennett, who has a more capacious account of agency than the one we saw in agency law, argues that “AI in its current manifestations is parasitic on human intelligence. … These machines do not (yet) have the goals or strategies or capacities for self-criticism and innovation to permit them to transcend their databases by reflectively thinking about their own thinking and their own goals.”26 General agreement seems to be that we are still a few months away from the Singularity.

Citing the above lines about speaking computers from “Against Theory,” Collins and Skover also claim that “the remainder of [their] essay challenges Knapp’s and Michael’s [sic] thesis, at least insofar as [they, Collins and Skover] would extend First Amendment coverage to intentionless robotic expression that is invested with meaning and significance by its receivers” (142). Unfortunately, Collins and Skover’s “challenge” never quite emerges, since they don’t tackle the main point of “Against Theory” explicitly. Their claim, instead, is that “questions about authorial personhood or intentionality [are] largely irrelevant with regard to constitutional protection for robotic expression” (142). The precedential question of whether authorial personhood or intentionality has been relevant to constitutional free speech analysis is simply a very different kind of question from the one “Against Theory” asked and answered, although the confusion is partly explained by Collins and Skover’s use of the wave poem hypothetical to focus on the irrelevance of choosing between two different kinds of “agents.”

But other constitutional legal theorists—including Larry Alexander, Leslie Kendrick, Saikrishna Prakash, and Frederick Schauer—have explored questions of intention in free speech doctrine.27 Likely for this reason, Collins and Skover shift their focus to the compelling work of Kendrick, who has been at the forefront of arguing for the inescapability of intention in First Amendment analysis.28 Unfortunately, legal precedent and scholarship on the First Amendment employ a technical and arguably deficient definition of intention, derived from criminal law’s conception of “intent,” which differs from the one with which nonsite readers are familiar and which many nonsite authors embrace. Typically that legal definition implies some kind of causality (although often a deficient or failed causality) and is expressed as “the speaker’s state of mind, or, as [her] essay will call it, the speaker’s intent,” or some similar formulation.29 Kendrick goes on to argue that because “it often seems wrong to hold speakers strictly liable” for the harms their speech causes, and because that intuition cannot be accounted for in other ways, “then speaker’s intent must matter for the protection of speech.”30

Kendrick’s argument is ingenious and compelling, and I’m not interested in quibbling with it here. But it’s nonetheless the case that the legal notion of intent that she’s working with, which is more closely related to a lay conception of motive, simply isn’t identical to the notion of intention that Knapp and Michaels equate with meaning.31 And by jumping from “Against Theory” on the wave hypothetical to Kendrick on the First Amendment, Robotica masks the divergent accounts of intention in play (38–39). For that reason, in a very real sense their account of the wave hypothetical is not addressing “Against Theory” at all. This difference allows Collins and Skover to challenge the relevance of intention by focusing on (among other things) the precedent of corporate speech and use value. “Is it not the case,” they ask rhetorically, “that often, if not generally, intention is deemed jurisprudentially irrelevant” in cases not involving torts or crimes? (39). They use as their example the free speech protections provided to commercial advertising, which are not “based on speaker’s intent but largely on the value of the speech from the viewer’s or listener’s perspective” (39–40).

One last time: to see the “value of the speech” only from the reader’s perspective is just to say that you’re not interested in whether it is language at all, and thus cannot be interested in it as “speech.” But the fact that the linchpin of Robotica’s argument turns out to be the precedent of commercial advertising is entirely fitting, albeit beyond what I can develop here. Long before Citizens United and Robotica, late nineteenth- and early twentieth-century jurisprudence and legal theory on corporations began to understand commercial speech fundamentally as money, and had to formulate a degraded theory of language to support it. That degraded theory was committed to the materiality of signs—a commitment that became known, to literary scholars, as a fundamental premise of deconstruction or what Knapp and Michaels call “theory.” This is part of a much longer story, one I develop in Modernism and the Meaning of Corporate Persons.32 We are still living with the solutions legal scholars devised over a century ago as they struggled to live with, and to interpret, the writing of collective corporate persons whose intentions were frequently difficult or impossible to discern. That Collins and Skover find so much support for their position in the literary theories of the 1980s is, from that perspective, just an unhappy accident.

Notes

Thanks to Rachel Watson, Todd Cronan, and Anna McKittrick for their helpful comments on earlier drafts of this essay.
1.  Ronald Collins and David Skover, Robotica: Speech Rights and Artificial Intelligence (Cambridge: Cambridge University Press, 2018), 33. Hereafter cited in the text followed by the page number.
2.  U.S. Constitution, amend. I.
3.  Mark Sherman, “Trump’s free speech impeachment defense open to dispute,” AP News (February 12, 2021), https://apnews.com/article/donald-trump-capitol-siege-us-supreme-court-impeachments-trump-impeachment-9ff9d8e88f58644aced8e26b4c05699c. See also Peter D. Keisler and Richard D. Bernstein, “Freedom of Speech Doesn’t Mean What Trump’s Lawyers Want It to Mean,” The Atlantic (February 8, 2021), https://www.theatlantic.com/ideas/archive/2021/02/first-amendment-no-defense-against-impeachment/617962/.
4.  Eugene Volokh and Donald M. Falk, “First Amendment Protection for Search Engine Results,” UCLA School of Law Research Paper No. 12-22 (April 20, 2012), 3 (Google-commissioned white paper).
5.  Once AI has sufficiently developed to make legal arguments, Volokh argues, then we should also accept AI judges who would be more cost-effective than human ones; Eugene Volokh, “Chief Justice Robots,” Duke Law Journal 68, no. 6 (March 2019): 1135–92.
6.  Falk’s professional webpage at Mayer Brown states that “the American Lawyer recognized him as ‘California’s class action killer’”; https://www.appellate.net/lawyers/donald-m-falk/.
7.  See, for example, Stuart Minor Benjamin, who argues that “if we accept Supreme Court jurisprudence, the First Amendment encompasses a great swath of algorithm-based decisions—specifically, algorithm-based outputs that entail a substantive communication”; “Algorithms and Speech,” University of Pennsylvania Law Review 161, no. 6 (May 2013): 1447. Toni M. Massaro, Helen Norton, and Margot E. Kaminski make a similar point, observing that “First Amendment law increasingly focuses not on protecting speakers as speakers but instead on providing value to listeners and constraining the government,” a situation supporting “the extension of free speech rights to strong AI speakers (if such speakers ever come to exist)”; “Siri-ously 2.0: What Artificial Intelligence Reveals about the First Amendment,” Minnesota Law Review 101, no. 6 (June 2017): 2482, 2485.
8.  See Oren Bracha, Frank Pasquale, and Tim Wu on problems with the notion of algorithmic free speech. Oren Bracha and Frank Pasquale, “Federal Search Commission? Access, Fairness, and Accountability in the Law of Search,” Cornell Law Review 93, no. 6 (Sept. 2008): 1149–1210; Tim Wu, “Machine Speech,” University of Pennsylvania Law Review 161, no. 6 (2013): 1495–1533. On legal intentionalism, see Larry Alexander and Saikrishna Prakash: “Our simple point is that one cannot look at the marks on a page and understand those marks to be a text (that is, a meaningful writing) without assuming that an author made those marks intending to convey a meaning by them”; “‘Is That English You’re Speaking?’ Why Intention Free Interpretation is an Impossibility,” San Diego Law Review 41, no. 3 (Aug.–Sept. 2004): 976. Note too Bruce E.H. Johnson’s warning about the probable consequences of “intentionless free speech”: “Because public-concern robotic speech will resist regulations, fueled by First Amendment doctrine, overwhelmed by the inevitable Russian-sponsored botnets, and afflicted with algorithms and constant confirmation bias, Americans may find themselves trapped in a toxic Trumpian dystopia of computerized lies. Discourse, of course, will be dead”; “An Old Libel Lawyer Confronts Robotica’s Brave New World,” in Collins and Skover, Robotica, 99.
9.  Collins and Skover explain, citing Miller v. California (1973), that the First Amendment does not necessarily require a finding of intention in the speech that it protects, using non-obscene pornographic speech as a case in point: the average community member’s interpretation is dispositive, not the pornographer’s intent (39, 43).
10.  Joel Rose, “Even If It’s ‘Bonkers,’ Poll Finds Many Believe QAnon And Other Conspiracy Theories,” NPR Morning Edition, December 30, 2020, https://www.npr.org/2020/12/30/951095644/even-if-its-bonkers-poll-finds-many-believe-qanon-and-other-conspiracy-theories.
11.  This line of cases has typically been understood to have originated with Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council (1976), in which the U.S. Supreme Court recognized free speech protection for advertising, and to have culminated, notoriously, in Citizens United v. Federal Election Commission (2010), which held that money is speech. However, there was also an important earlier case, Grosjean v. American Press Company (1936), which recognized media corporations’ free speech protections based on their status as corporate persons. For more discussion of this backstory in relation to corporate personhood, see Lisa Siraganian, Modernism and the Meaning of Corporate Persons (Oxford: Oxford University Press, 2020), 84–94.
12.  Ryan Calo paraphrases Helen Norton’s and Bruce Johnson’s critiques in “Robotica in Context: An Introduction to the Commentaries,” in Collins and Skover, Robotica, 73.
13.  Steven Knapp and Walter Benn Michaels, “Against Theory,” in Against Theory: Literary Studies and the New Pragmatism, ed. W.J.T. Mitchell (Chicago and London: University of Chicago Press, 1982), 15. Hereafter cited in the text followed by the page number.
14.  See Elizabeth Anscombe: “Intentional actions, then, are the ones to which the question ‘Why?’ is given application, in a special sense which is so far explained as follows: the question has not that sense if the answer is evidence or states a cause, including a mental cause”; G. E. M. [Elizabeth] Anscombe, Intention, 2nd ed. (Ithaca: Cornell University Press, 1976), 24.
15.  John Glenn, Lonnie E. Griffith, Jr., William Lindsley, and Karl Oakes, “Agency § 5,” in Corpus Juris Secundum (Thomson West, Feb. 2021), n.p.
16.  Chopra and White would dispute this notion of agency; they make the case for AI as a legal agent by relying in part on the philosophical theories of Daniel Dennett and Donald Davidson, both of whom have a causal, largely physicalist account of intention. That view is not shared by Knapp, Michaels, Fish, myself, or longstanding precepts of agency law; Samir Chopra and Laurence F. White, A Legal Theory for Autonomous Artificial Agents (Ann Arbor: University of Michigan Press, 2011), 11–25. Chopra also makes a consequentialist argument about AI legal agency: “by treating an artificial agent as a legal subject as opposed to an inanimate object, we categorize these entities more appropriately and better protect our privacy rights”; Samir Chopra, “Computer Programs Are People, Too,” The Nation, May 29, 2014, https://www.thenation.com/article/archive/computer-programs-are-people-too/. But consequentialist arguments have obvious weaknesses.
17.  The claim that algorithms are agents is asserted as customary but never fully justified in Collins and Skover’s book; they suggest that algorithms like Siri function “in the realm in which computers and robots are typically viewed as agents driven by and responsive to the dictates of their principals” (27). These moves can be seen as instances of a larger trend of attempting to discount intention by distributing agency, discussed in Siraganian, “Distributing Agency Everywhere: TV Critiques Postcritique,” for special issue, “Literary Criticism after Postcritique,” ed. Tim Lanzendörfer and Mathias Nilges, Amerikastudien/ American Studies 64, no. 4 (2019): 595–616, https://amst.winter-verlag.de/article/amst/2019/4/10.
18.  The legal interpretive version of Fish’s (new) position is fleshed out in Stanley Fish, “There Is No Textualist Position,” San Diego Law Review 42, no. 2 (2005): 629–50. In the same issue, see also Steven Knapp and Walter Benn Michaels, “Not a Matter of Interpretation,” San Diego Law Review 42, no. 2 (2005): 651–68.
19.  Stanley Fish, “Intentional Neglect,” New York Times, July 19, 2005.
20.  This strange combination of acknowledging language without reading it is another version of what Knapp and Michaels describe as “theory,” insofar as it entails a two-step process in which knowledge of words is separated from an interpretation of them (“Against Theory,” 25).
21.  Toril Moi, Revolution of the Ordinary: Literary Studies After Wittgenstein, Austin, and Cavell (Chicago and London: University of Chicago Press, 2017), 131.
22.  N. Katherine Hayles, “Can Computers Create Meanings? A Cyber/Bio/Semiotic Perspective,” Critical Inquiry 46, no. 1 (Autumn 2019): 51.
23.  Hayles, “Computers,” 48.
24.  Wendy Wheeler, cited by Hayles, employs the notion of “biosemiotics … the study of the interweaving natural and cultural sign systems of the living,” to make a similar point, arguing that “the meaning of something is to be discovered in what it does in the world, in how it allows things to be and also to change … a sign is anything which can be interpreted and thus given meaning”; Expecting the Earth: Life, Culture, Biosemiotics (London: Lawrence & Wishart, 2016), 4, 7.
25.  Hayles, “Computers,” 55.
26.  Daniel Dennett, “Will AI Achieve Consciousness? Wrong Question,” Wired, February 19, 2019, http://www.wired.com/story/will-ai-achieve-consciousness-wrong-question/.
27.  Alexander and Prakash, “Is That English You’re Speaking,” 967–96; Larry Alexander, Is There a Right of Freedom of Expression? (New York: Cambridge University Press, 2005); Frederick Schauer, “Intentions, Conventions, and the First Amendment: The Case of Cross-Burning,” Supreme Court Review 2003, no. 1 (2003): 197–230.
28.  Leslie Kendrick, “Free Speech and Guilty Minds,” Columbia Law Review 114, no. 5 (2014): 1255–95; Kendrick, “Are Speech Rights for Speakers?,” Virginia Law Review 103, no. 8 (December 2017): 1767–1808. See also Kendrick, “Use Your Words: On the ‘Speech’ in ‘Freedom of Speech,’” Michigan Law Review 116, no. 5 (March 2018): 667–704.
29.  Kendrick, “Guilty Minds,” 1256–57. To be clear, Kendrick is far from alone in equating mental intent with intention; that premise is ubiquitous in legal discussions of intention. See, for example, Schauer, who has a diametrically opposed view from Kendrick’s but shares her definition of the term: he challenges “the widely accepted view that speaker’s intent is an important component of First Amendment analysis,” understanding intention as both causal and an idea in the mind, as when he contrasts “what an agent intends and what actually occurs”; Schauer, “Intentions,” 199, 197.
30.  Kendrick, “Guilty Minds,” 1259.
31.  This issue is complicated. For a more involved discussion of the intent/intention divergence in law and literary studies, see Lisa Siraganian, “My Interdisciplinary Uncanny Valley,” Post45 Contemporaries, May 26, 2021, https://post45.org/2021/05/my-interdisciplinary-uncanny-valley/.
32.  Siraganian, Corporate Persons, 47, 66–74.