
Does ChatGPT Refer with Names? Design Intention and Derivative Reference in Large Language Models
Many writers discussing artificial intelligence argue that what a large language model produces are not sentences with truth-values but rather "stochastic parrotings" that can be interpreted as true or false, though only in the way that Daniel Webster interpreted the Old Man of the Mountain as a sculpture by God with a message for humanity. Steffen Koch has argued that names used by LLMs refer in virtue of Kripkean communication chains, which connect the model's answers to the referents intended by the people who wrote the posts in its training data. I argue that although an LLM's uses of names are not connected to human communication chains, its outputs can nonetheless have meaning and truth-value in virtue of the design intentions of its programmers. In Millikan's terms, an LLM has a proper function intended by its designers: it is designed to yield true sentences relevant to particular queries.
