Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.
This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”
It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.
As a disclaimer at the bottom of every AI Overview notes, Google uses “experimental” generative AI to power its results. Generative AI is a powerful tool with all kinds of legitimate practical applications. But two of its defining characteristics come into play when it explains these invented phrases. The first is that it’s ultimately a probability machine; while it may seem like a large-language-model-based system has thoughts or even feelings, at a base level it’s simply placing one most-likely word after another, laying the track as the train chugs forward. That makes it very good at coming up with an explanation of what these phrases would mean if they meant anything, which, again, they don’t.
“The prediction of the next word is based on its vast training data,” says Ziang Xiao, a computer scientist at Johns Hopkins University. “However, in many cases, the next coherent word does not lead us to the right answer.”
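To make that “one most-likely word after another” idea concrete, here’s a minimal sketch of next-word prediction. This is a toy, not how Google’s system actually works: real models use neural networks trained on vast corpora, while this uses hand-made bigram counts over a few invented sentences, and the corpus, the `continue_phrase` function, and the prompt are all made up for illustration (apostrophes are dropped to keep the tokenization trivial).

```python
from collections import Counter, defaultdict

# A toy corpus of "idiom explanations." A real model trains on far more
# text, but the core loop is the same: score candidate next words, emit
# the likeliest one, append it, repeat.
corpus = (
    "a loose dog wont surf means something is unlikely to happen . "
    "a watched pot never boils means something is unlikely to happen soon . "
    "you cant teach an old dog new tricks means habits are hard to change ."
).split()

# Count bigrams: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_phrase(prompt, max_words=6):
    """Greedily append the statistically likeliest next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break  # no data; a large model would still guess something
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The model happily "explains" a phrase that appears nowhere in its
# training data, because it only ever asks "what usually comes next?"
# and never "is this a real saying?"
print(continue_phrase("a loose badger means"))
# -> a loose badger means something is unlikely to happen .
```

Note that the loop has no step where it checks whether the prompt made sense; it always produces a fluent continuation, never a shrug. That’s the behavior scaled up in the examples above.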
The other factor is that AI aims to please; research has shown that chatbots often tell people what they want to hear. In this case, that means taking you at your word that “you can’t lick a badger twice” is an accepted turn of phrase. In other contexts, it might mean reflecting your own biases back to you, as a team of researchers led by Xiao demonstrated in a study last year.
“It’s extremely difficult for this system to account for every individual query or a user’s leading questions,” says Xiao. “This is especially challenging for uncommon knowledge, languages in which significantly less content is available, and minority perspectives. Since search AI is such a complex system, the error cascades.”
Compounding these issues is that AI is loath to admit that it doesn’t know an answer. When in doubt, it makes stuff up.
“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” said Google spokesperson Meghann Farnsworth in an emailed statement. “This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”
Google won’t return an AI Overview result for every query like this. “I did about five minutes of experimentation, and it’s wildly inconsistent, and that’s what you expect of GenAI, which is very dependent on specific examples in training sets and not very abstract,” says Gary Marcus, a cognitive scientist and author of Taming Silicon Valley: How We Can Ensure That AI Works for Us. “The idea that any of this mess is close to AGI [artificial general intelligence] is preposterous.”
This specific AI Overview quirk seems ultimately harmless, and again, it’s a fun way to procrastinate. But it’s helpful to keep in mind that the same model coughing up that confident mistake is the one behind your other AI-generated query results. Take it with a grain of salt.