Tuesday, October 14, 2025

Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense

Language can seem nearly infinitely complex, with inside jokes and idioms sometimes holding meaning for only a small group of people and appearing meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google Search’s AI Overviews to define phrases never before uttered.

What, you’ve never heard the phrase “blew up like a brook trout”? Sure, I just made it up, but Google’s AI Overviews result told me it’s a “colloquial way of saying something exploded or became a sensation quickly,” likely referring to the eye-catching colors and markings of the fish. No, it doesn’t make sense.

You have atlas

The trend may have started on Threads, where the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.

It moved to other social media sites, like Bluesky, where people shared Google’s interpretations of phrases like “you can’t lick a badger twice.” The game: Search for a novel, nonsensical phrase with “meaning” at the end.

Things rolled on from there.

Screenshot: a Bluesky post by sharon su (@doodlyroses.com) captioned “wait this is amazing,” showing a Google search for “you can’t carve a pretzel with good intentions.” The AI Overview treats the made-up saying as a proverb about how even well-intentioned efforts can go wrong when a task demands precision and skill, with the twisted pretzel standing in for that kind of delicate work.

Screenshot by Jon Reed/CNET

Screenshot: a Bluesky post by Livia Gershon (@liviagershon.bsky.social) captioned “Just amazing,” showing a Google AI Overview that describes “you can’t catch a camel to London” as a humorous way of saying something is impossible or extremely difficult to achieve.

Screenshot by Jon Reed/CNET

This meme is interesting for more reasons than comic relief. It shows how large language models might strain to provide an answer that sounds correct, not one that is correct.

“They’re designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”

Like glue on pizza

The fake meanings of made-up sayings bring back memories of the all too true stories about Google’s AI Overviews giving wildly wrong answers to basic questions, like when it suggested putting glue on pizza to help the cheese stick.

This trend seems at least a bit more harmless because it doesn’t center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same: a large language model, like Google’s Gemini behind AI Overviews, tries to answer your questions and offer a plausible response. Even when what it gives you is nonsense.

A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features.

“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” the Google spokesperson said. “This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”

This particular case is a “data void,” where there isn’t a lot of relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear.

You won’t always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched “like glue on pizza meaning,” and it didn’t trigger an AI Overview.

The problem doesn’t appear to be universal across LLMs. I asked ChatGPT for the meaning of “you can’t lick a badger twice” and it told me the phrase “isn’t a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use.” It did, though, try to offer a definition anyway, essentially: “If you do something reckless or provoke danger once, you might not survive to do it again.”

Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts

Pulling meaning out of nowhere

This phenomenon is an entertaining example of LLMs’ tendency to make stuff up, what the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn’t rooted in reality.

LLMs are “not fact generators,” Li said; they just predict the next logical bits of language based on their training.

A majority of AI researchers in a recent survey reported they doubt AI’s accuracy and trustworthiness issues will be solved soon.

The fake definitions show not just the inaccuracy but the confident inaccuracy of LLMs. When you ask a person for the meaning of a phrase like “you can’t get a turkey from a Cybertruck,” you probably expect them to say they haven’t heard of it and that it doesn’t make sense. LLMs often react with the same confidence as if you’re asking for the definition of a real idiom.

In this case, Google says the phrase means Tesla’s Cybertruck “is not designed or capable of delivering Thanksgiving turkeys or other similar items” and highlights “its distinct, futuristic design that is not conducive to carrying bulky goods.” Burn.

This funny trend does have an ominous lesson: Don’t trust everything you see from a chatbot. It might be making stuff up out of thin air, and it won’t necessarily indicate that it’s uncertain.

“This is a good moment for educators and researchers to use these scenarios to teach people how the meaning is generated and how AI works and why it matters,” Li said. “Users should always stay skeptical and verify claims.”

Be careful what you search for

Since you can’t trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt.

“When users enter a prompt, the model just assumes it’s valid and then proceeds to generate the most likely accurate answer for that,” Li said.

The solution is to introduce skepticism in your prompt. Don’t ask for the meaning of an unfamiliar phrase or idiom. Ask if it’s real. Li suggested you ask, “Is this a real idiom?”

“That may help the model recognize the phrase instead of just guessing,” she said.
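If you tinker with chatbots through code rather than a search box, the same skepticism-first framing carries over to API calls. Here’s a minimal sketch using the OpenAI Python SDK; the model name and the exact prompt wording are my own illustrative assumptions, not anything Google or Li prescribed.

```python
# Minimal sketch: ask whether a phrase is a real idiom before asking what it means.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the environment;
# the model name and prompt wording below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

phrase = "you can't lick a badger twice"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model would do; this choice is arbitrary
    messages=[
        {
            "role": "user",
            "content": (
                f'Is "{phrase}" a real, established idiom? '
                "If you can't verify that it is, say so plainly "
                "instead of inventing a meaning."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

The library isn’t the point; the point is that the prompt asks the model to check whether the phrase exists before it starts explaining it.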

