Definitely not non-toxic glue, as one generative AI tool suggested: one of many dangerous or surreal responses causing a worldwide furore
In December last year, the New York Times filed a lawsuit against OpenAI and Microsoft seeking billions of dollars in damages for copyright infringement, alleging that verbatim excerpts of its content appeared in ChatGPT output.
Now, Google’s latest AI Overview experiment has been caught failing to distinguish sarcasm and jokes from genuine advice when serving up its own recommendations. When queried about ways to prevent cheese from sliding off hot pizza, AI Overview regurgitated the advice from an 11-year-old social media post containing a joke: “You can also add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness.”
Adding insult to injury, AI Overview also drew on satirical posts when advising users who asked how to get their recommended daily intake of vitamins and minerals from their diet. The GenAI’s answer was to “ingest at least one rock a day for vitamins and minerals.”
Other frustrated users of AI Overview warned of outright harmful advice, such as a suggestion to clean washing machines with “chlorine bleach and white vinegar” — a combination that releases toxic chlorine gas — with the dangerous mixture displayed in bold text while the precaution was downplayed in a smaller font.
In Google’s defense, a spokesperson said the vast majority of AI Overview queries resulted in “high-quality information, with links to dig deeper on the web.” The AI-generated result from the tool typically appears at the top of a results page. The extreme cases making headlines globally were “generally very uncommon queries, and aren’t representative of most people’s experiences.”
As the novelty and hype of GenAI continue to wane, users of the technology are advised to triple-check its output for hallucinations; potential copyright infringement that could fall on end users; sarcasm or jokes regurgitated as real advice; and other unforeseen large language model mishaps as the technology develops.
(Warning: do not fabricate or modify quirky GenAI responses to make them appear to outdo other social media users’ real complaints of unacceptable AI advice!)