Ancient Rhetoric illuminates LLM "Hallucination"

We can't figure out LLM "hallucination" without first rethinking modern assumptions about the value of mimicry

A stochastic parrot contemplates its imitation of reality

There was a time and a place where mimicry — sounding the way some character or historical figure would sound — was a far more valued skill than it is today. A common exercise in ancient Greek and Roman rhetoric was to imitate a historical figure's voice and to make arguments as if from their perspective. A wide variety of ancient Mediterranean cultures would have been at home with the idea that how one imitates defines one's character and that this character, in turn, defines what we might today label an "individual." We tend not to talk in those terms nowadays, but it's worth keeping that shift of assumptions in mind. The fact that rhetoric has fallen so far out of favor, replaced by the cult of individuality and originality, makes us miss and misunderstand important aspects of so-called LLM "hallucination."

It’s easy to look down on that mimicry and to use the term "hallucination" to separate what is generative (a good thing) from what merely regurgitates. Even if we use a term like "confabulation," there's still a negative spin. It's not a good thing to be a confabulator, a liar, a mimic, a copier. Even the notion of "generative" AI is a telling bit of branding. It is more accurately regenerative: a revival, a zombie-ification of pieces of other things.

There are many reasons for the modern preference for framing all of this as machine creativity and generation rather than copying and regeneration — not least as a means of countering the obvious legal problems of ingesting so much training data by emphasizing the newness, the "generation" over the regeneration. But another consequence of this ideological preference is that we miss the resonance and value of mimicry; we naturally devalue it. At the risk of gross oversimplification, our current moment seems defined by the tension between expressions of self, in which being derivative is a fault, and the reality that the entirety of internet culture is a rehashing of the past, pastiche writ large.

What is striking about this is not the tension itself (consistency is not really what human biology is optimized for, after all) but that our emphasis and terminology predispose us to elevate the dichotomy of real vs. fake. Rather than see the continuum from effective mimicry to ineffective mimicry, we tend nowadays to draw a line — all in the eyes of human beholders — between something we might call a legitimate response and some other set of things we deem illegitimate. This foregrounds a particular hierarchy of values, encoding a long history of thought around what matters in imitation. To a certain extent, imitation that is good enough can stand in for reality. There's something particularly post-modern about this: accepting the idea that language can stand in for the thing, that reality can be constructed, that the shape of something can be good enough. Each of those connects us to a beat in the history of ideas, whether Plato's notion of which kinds of imitation to allow, Descartes' skepticism, or the post-modern explosion of constructivist thinking… take your pick among the threads.

This tangled web around imitation has follow-on consequences and leads to further myopia. By not recognizing the continuum from good mimicry to less effective mimicry, we falsely conflate a whole host of reasons why "hallucination" occurs. It can be too little prompt data, or it can be too much. It can be conflicting data, a polluted prompt, or small token changes that have a disproportionately large effect. Or focusing on a neologism can send a language model careening down a probabilistic path back to its home domain, the conventional and ordinary, rather than toward the novel word and its uses. The value-laden and anthropomorphized burden of terms like "hallucination" and even "confabulation" makes it more difficult to see that all of these causes and consequences can be present, all as part of making an imitation either better or worse.
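To see these causes as points on a continuum rather than as a binary failure, here is a minimal sketch in Python. Everything in it is an illustrative assumption: `generate` is a hypothetical stand-in for whatever model call you actually use, the prompt and its perturbations are made up, and the token-overlap score is a deliberately crude proxy for "how far the imitation drifted."

```python
# A minimal sketch (not a real experiment): treat common "hallucination"
# triggers as perturbations of the same prompt and measure drift on a
# continuum rather than as a binary right/wrong.
# `generate` is a hypothetical stand-in for whatever model call you use.

def generate(prompt: str) -> str:
    # Placeholder so the sketch runs end to end; in practice, call a model here.
    return prompt

def overlap(a: str, b: str) -> float:
    """Crude drift proxy: Jaccard overlap of lowercase tokens (1.0 = identical sets)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0

base_prompt = "Summarize the argument of Gorgias's Encomium of Helen."
perturbations = {
    "too_little_context": base_prompt[:25],                                # truncated prompt
    "too_much_context": base_prompt + " Also consider: " * 40,             # padded, noisy prompt
    "conflicting_context": base_prompt + " Note: Gorgias never wrote it.",  # contradictory addition
    "small_token_change": base_prompt.replace("Gorgias", "Giorgias"),      # one-token slip
}

baseline = generate(base_prompt)
for name, prompt in perturbations.items():
    drift = 1.0 - overlap(baseline, generate(prompt))
    print(f"{name:20s} drift = {drift:.2f}")  # higher = further from the baseline imitation
```

The point of the sketch is only that each trigger produces more or less drift from the same baseline, which is a statement about the quality of an imitation, not a verdict about truth or falsity.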

We would be better off taking up the old rhetorical model and its approach to imitation as a virtue, or at least as a tool, for effective argument. The ancients were much less concerned with authenticity or originality as endpoints, much less concerned with a structure that makes such things foundations of the modern sense of self. (Pedantic footnote: it is of course a more complicated story. Plato has not a little to say about imitation and what kinds of imitation are successful, and that debate and its terms spin in various directions through the present day. But Plato was not concerned with authenticity in anything like the modern sense.) Up to the modern turn, authenticity was a function more of outward character than of an inward-looking state. It is not that there is no concept or sense of interiority, but rather that agency and self and thought and all the things of humanity come differently configured and articulated. Imitation can be the path to identity in ways that are virtually unthinkable today. To the extent that copying is part of the self today, it is more an act of curation, of echoing the memes and assembling the bits that speak to who you really are. There's a big difference between the ancient notion that the right kind of imitation was constitutive of a good person and the moderns' magpie habit of accumulating cultural bits as part of a diverse sense of selfhood.

What would an LLM look like to Gorgias, or to another ancient rhetorician, to someone steeped in the ancient mode of exemplars and mimicry? Would they find it empowering to be able to produce something stylistically so faithful? Would they see its variations as natural to the task of imitation, or as errors?

If we take up more of this ancient view of imitation, then we can start to see how mitigating "hallucination" risk is not a matter of avoiding or catching errors so much as it is a matter of making subtle distinctions more visible. When we get outputs from LLMs that look right-ish but are in fact wrong-ish, we need a framework to evaluate these outputs as imitations rather than against an imagined ideal of AI intelligence. In fact, thinking of these models as intelligences is particularly unhelpful as a source of language for "hallucination": we've already lost focus, moving toward imagined intent and awareness and away from the surface phenomenon we can more directly see and evaluate.

A matrix of imitation would work very differently from our current way of talking about "hallucination." Language models can be good imitations of style while being bad imitations of truth; or they can be good imitations along one dimension (say, word choice) while being bad imitations along other dimensions (connotations and implications). It doesn't matter what values we think that imitation implies; all that matters is that we measure against some defined reality. In practice, many benchmarks may capture something like this; but measuring LLM performance on, e.g., medical exams runs into an obvious data science problem, as many have pointed out. It is not known, for most models, whether the exam material appears in some form in the training data. This is data science gotchas 101: without reserving part of the dataset for later validation, you have no way of making assertions about validity based on that data. So most benchmarking of this form needs to be taken with a heavy dose of skepticism.
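To make the idea concrete, here is a minimal sketch of what such a matrix might look like. The dimensions, the example scores, and the notion of scoring against a held-out reference are all illustrative assumptions, not an established evaluation scheme.

```python
# A sketch of an "imitation matrix": score an output along several independent
# dimensions against a defined reference, instead of issuing a single
# hallucinated / not-hallucinated verdict. Dimensions and values are illustrative.
from dataclasses import dataclass

@dataclass
class ImitationScore:
    style: float        # does it sound like the target voice? (0-1)
    word_choice: float  # lexical fidelity to the reference (0-1)
    connotation: float  # implications and tone consistent with the reference (0-1)
    factuality: float   # claims checkable against the reference (0-1)

    def summary(self) -> str:
        return (f"style={self.style:.2f}  word_choice={self.word_choice:.2f}  "
                f"connotation={self.connotation:.2f}  factuality={self.factuality:.2f}")

# In practice, each field would be filled by some scorer (human raters, string
# metrics, retrieval checks) against a reference held out from the training data,
# which is exactly the validation-set hygiene the benchmarking point above demands.
convincing_but_wrong = ImitationScore(style=0.9, word_choice=0.8,
                                      connotation=0.7, factuality=0.2)
print(convincing_but_wrong.summary())
```

A "good imitation of style but bad imitation of truth" then shows up as a profile of scores rather than as a single failure label.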

The rich, thousands-of-years-long history of talking about mimicry is not inconsequential, but it has so far been underappreciated. Bender, Gebru, and Mitchell (among others) have done a great service in highlighting the mimetic nature of language models. Subsequent popular discussion around "stochastic parrots" has focused on other areas, e.g., long-term vs. short-term harms, technical capabilities, and the like — all important things. The long history of imitation is still simmering beneath the surface. Calling LLMs parrots, highlighting the way they act as imitations (or mirrors, or any other metaphor along these lines), is more than an apt metaphor. LLMs are a recent focal point in very old debates about the nature and function of imitation. Thinking about imitation has regularly made people think differently about character (the ancient mode) and authenticity (the modern mode), the soul (in Plato's terms) and the self (in more modern terms).

As in so many things, we return to questions of what it means to be human. More of that story, another time.