By now, most of us have tried to stump a chatbot. We’ve asked if it has feelings, grilled it with impossible riddles, or thrown absurdities its way to see how it might respond.
But what happens when a chatbot is faced with complete linguistic nonsense?
That’s what psycholinguist Michael Vitevitch wanted to find out. A professor in the Speech-Language-Hearing Department at the University of Kansas, Vitevitch conducted a new study in which he fed ChatGPT a series of “nonwords” — made-up sounds and letter combinations used in cognitive psychology to explore how people process language.
“As a psycholinguist, one of the things I’ve done in the past is to give people nonsense to see how they respond to it — nonsense that’s specially designed to get an understanding of what they know,” says Vitevitch. “I’ve tried to use methods we use with people to appreciate how they’re doing what they’re doing — and to do the same thing with AI to see how it’s doing what it’s doing.”
By talking gibberish to ChatGPT, Vitevitch found the AI excelled at pattern recognition — but not always in the way humans do.
“It finds patterns, but not necessarily the same patterns that a human would use to do the same task,” he says. “We do things very differently from how AI does things. That’s an important point. It’s okay that we do things differently. And the things that we need help with, that’s where we should engineer AI to give us a safety net.”
Vitevitch tested ChatGPT on English words that have fallen out of use — so-called “extinct words.” These include gems like ‘upknocking’, a 19th-century job where people tapped on windows to wake others before alarm clocks.
Of 52 archaic terms, ChatGPT correctly defined 36. For 11, it acknowledged uncertainty. For three, it drew from other languages. And for two? It made things up.
“It did hallucinate on a couple of things,” Vitevitch says. “We asked it to define these extinct words. It got a good number of them right. On another bunch, it said, ‘Yeah, I don’t know what this is. This is an odd word or a very rare word that’s not used anymore.’ But then, on a couple, it made stuff up. I guess it was trying to be helpful.”
The next challenge was phonological. Vitevitch gave ChatGPT a set of Spanish words and asked it to respond with similar-sounding English words — a task used to explore how we mentally store and access speech sounds.
“If I give you a Spanish word and tell you to give me a word that sounds like it, you, as an English speaker, would give me an English word that sounds like that thing,” he explains. “You wouldn’t switch languages on me and just kind of give me something from a completely different language, which is what ChatGPT did.”
The researchers also asked ChatGPT to invent new English words for modern concepts.
“[Comedian Rich Hall] used to do ‘sniglets,’ which were words that don’t exist,” says Vitevitch. “Like with a vacuum cleaner, when there’s a thread on the floor and you go over it and it doesn’t get sucked up. So, you go over it — again and again. What is that thread called? ‘Carperpetuation.’ [He] came up with a name for that thread that doesn’t get sucked up.”
According to Vitevitch, the AI chatbot did “kind of an interesting job there.” After prompting ChatGPT for new words that matched certain concepts, he found it often relied on a predictable method of blending two existing words.
“My favourite was ‘rousrage,’ for anger expressed upon being woken,” says Vitevitch.
By prompting the bot with nonsense, Vitevitch aims to better understand the unique — and sometimes strange — ways in which AI processes language. It’s not about mimicking human cognition, he argues, but rather identifying where AI can complement our linguistic strengths.
These findings are published in PLOS One.