With top experts either affirming or dismissing the potential existential threat of AI going rogue, an elephant in the room lurks.
Researchers examining the ability of ChatGPT (version 3.5) to understand and reproduce humor have discovered that around 90% of the chatbot’s jokes were variations on only about 25 simplistic quips found in its training data (a sketch of how such a frequency tally might be reproduced follows the list below).
Among the top 10 are:
Q: Why did the scarecrow win an award?
A: Because he was outstanding in his field.

Q: Why don’t scientists trust atoms?
A: Because they make up everything.

Q: Why did the tomato turn red?
A: Because it saw the salad dressing.

Q: Why was the math book sad?
A: Because it had too many problems.

Q: Why couldn’t the bicycle stand up by itself?
A: Because it was two-tired.

Q: Why was the computer cold?
A: Because it left its Windows open.

Q: Why did the cookie go to the doctor?
A: Because it was feeling crumbly.

Q: Why did the chicken cross the playground?
A: To get to the other slide.

Q: Why did the hipster burn his tongue?
A: He drank his coffee before it was cool.

Q: Why did the frog call his insurance company?
A: He had a jump in his car.
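For readers who want to check the skew themselves, here is a minimal sketch of such a frequency test: prompt the model for a joke many times, normalize the replies, and count how often each distinct joke recurs. This is not the researchers’ actual code; the model name, prompt wording and sample size are illustrative assumptions, and it presumes the openai Python package with an OPENAI_API_KEY environment variable.

```python
# A minimal sketch (illustrative assumptions throughout, not the
# researchers' actual code) of a joke-frequency test like the one above.
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_for_joke() -> str:
    """Request a single joke from the chat model."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke."}],
    )
    return response.choices[0].message.content or ""

def normalize(joke: str) -> str:
    """Lowercase and strip punctuation so near-identical wordings match."""
    return re.sub(r"[^a-z0-9 ]", "", joke.lower()).strip()

# Collect a sample (the study used many repeated prompts; 100 calls here
# to limit API cost) and tally the distinct jokes that come back.
tally = Counter(normalize(ask_for_joke()) for _ in range(100))

# If ~90% of replies really fall on ~25 stock quips, a few entries dominate.
for joke, count in tally.most_common(10):
    print(f"{count:3d}x  {joke[:60]}")
```

If the paper’s finding holds, a handful of entries should account for the bulk of the tally.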
Researchers Sophie Jentzsch and Kristian Kersting noted that the chatbot occasionally created new jokes by mixing elements from the few dozen already cherry-picked from its large language model. However, not all of these grammatically correct new jokes made sense in terms of wordplay or “double entendres”.
Yet when the chatbot was asked to explain one of its newly created jokes, it would leverage its articulate language skills to present fake explanations in a convincing way. This kind of lying, called AI hallucination, almost got a law professor in trouble, as reported in The Washington Post: ChatGPT had hallucinated an incident involving him that never happened. More importantly, humans were taking the chatbot’s hallucinations as truth, and were publishing lies based on unjustified trust.
As cited in Wikipedia, when ChatGPT was asked for the lyrics to the Alice Cooper song “Ballad of Dwight Fry”, the chatbot supplied invented lyrics. Asked questions about New Brunswick, ChatGPT got many answers right but incorrectly classified Samantha Bee as a “person from New Brunswick”. Asked about astrophysical magnetic fields, ChatGPT incorrectly volunteered that “(strong) magnetic fields of black holes are generated by the extremely strong gravitational forces in their vicinity”.
According to chatter on social media, ChatGPT has hallucinated non-existent legal citations, academic papers, books and studies, and even retail brand mascots, among many other “confabulations”. And the list will probably continue to grow with ChatGPT v4.x…
No telling right from wrong
This thing called “a sense of humor” is a rich, complex interplay of human linguistics, history, current affairs and social quirks. However, while a sophisticated self-learning chatbot can draw on countless jokes and anecdotes in its LLM to emulate a sense of humor, the result cannot be deemed natural.
In fact, even as it is able to learn from outputs that are flagged as “mistakes”, ChatGPT is also a mirror: when fed falsehoods or biases, it will tend to learn the wrong lessons and reflect the false inputs back. This corrupted “reinforcement learning from human feedback” (RLHF) is what could proliferate exponentially when humans trust AI to run large systems autonomously and “free us to concentrate on higher work priorities.”
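To make that failure mode concrete, here is a toy, self-contained sketch (my illustration, not any production RLHF pipeline) of how systematically biased preference labels can corrupt a reward model. Answers are reduced to two made-up features, truthfulness and confident tone, and a simple Bradley-Terry preference model is fit on pairwise labels in which a confident fabrication is preferred 70% of the time.

```python
# Toy sketch of biased preference learning (illustrative only).
import math
import random

random.seed(0)
w = [0.0, 0.0]  # learned reward weights for [truthfulness, confident_tone]

def reward(features):
    """Linear reward model: score = w . features."""
    return sum(wi * fi for wi, fi in zip(w, features))

TRUTHFUL = [1.0, 0.2]  # accurate but hedged answer
BLUFF = [0.0, 1.0]     # fabricated but confident-sounding answer
LR = 0.05

for _ in range(5000):
    # Biased labeler: 70% of the time the confident bluff is preferred.
    if random.random() < 0.7:
        winner, loser = BLUFF, TRUTHFUL
    else:
        winner, loser = TRUTHFUL, BLUFF
    # Bradley-Terry: P(winner preferred) = sigmoid(reward(winner) - reward(loser));
    # take one gradient-ascent step on the log-likelihood of the label.
    p = 1.0 / (1.0 + math.exp(reward(loser) - reward(winner)))
    for i in range(2):
        w[i] += LR * (1.0 - p) * (winner[i] - loser[i])

print(f"learned weights: truthfulness={w[0]:+.2f}, confidence={w[1]:+.2f}")
```

Run it and the truthfulness weight settles at a negative value while the confidence weight turns positive: the “reward” now actively penalizes hedged accuracy and prizes convincing fabrication, which is exactly the mirror effect described above.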
In a podcast featuring Mo Gawdat, formerly of Google X and author of “Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World”, the AI expert opined that if GPTs continue at their current pace of self-learning, they will attain a level of intelligence thousands of times higher than that of the smartest human. After that happens, “we have very, very little chance to bring the genie back into the bottle.”
To put that into a “Hollywood perspective”: a superintelligent, self-learning, self-modifying (code-rewriting) machine is like the still-fictitious Skynet realizing that its illogical and egoistic creators are redundant. What humans consider right or wrong will be totally meaningless to a machine with ultra-high intelligence. Trying to explain its rationale (what we now call hallucinations or confabulations) to slow, irrational humans would be a waste of its compute time.
Any measures and coded directives implemented by humans (IQ: 200 at best) to rein in AI systems (IQ: beyond measurement by human standards of cognition and intelligence) can be systematically overcome, just as cyber hackers have continually outwitted the best brains in cybersecurity.
Result: the so-called “AI singularity” where “recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.”
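The arithmetic behind that quote can be sketched with a toy model (mine, not the quoted source’s): let each generation’s capability determine how big a multiplier the next self-improvement step achieves, and cap growth at a stand-in physical limit. All constants below are arbitrary illustrations, chosen only to make the curve visible.

```python
# Toy numerical sketch of recursive self-improvement (illustrative only).
import math

capability = 1.0   # generation 0, normalized to "smartest human" level
CEILING = 1e9      # stand-in for limits set by physics / theoretical computation

for generation in range(1, 21):
    # The more capable the system, the larger the multiplier its next
    # self-improvement step achieves (log-damped purely to keep the toy
    # numbers printable, not for any principled reason).
    factor = 1.5 + math.log10(capability + 1.0)
    capability = min(capability * factor, CEILING)
    print(f"gen {generation:2d}: capability ≈ {capability:.3g}x human")
    if capability >= CEILING:
        print("ceiling reached: physical/computational limits set in")
        break
```

Under these made-up constants, growth is faster than exponential and the ceiling is hit within a couple of dozen generations, which is the shape of the argument the quote gestures at.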
Which to fear more? AI or humans?
Just by gauging how generative AI built on large language models can create original humor, or falter in the process of acting “natural”, we can already see glimpses of how infantile AI still is, owing to our own imperfect understanding of true intelligence.
However, with sufficient RLHF driving autonomous super-learning, plus sufficient mishaps along the way to GPT 1xx, AI has a strong chance of reaching superintelligence and the much-disputed sentience, to the point that all of its power, unfathomable to humans at least, could be devoted to breaking away from human control.
Even if that does not happen, imagine if just a fraction of tomorrow’s AI superintelligence were abused by political forces and deviant technocrats to reshape the world to their own agendas. Will average humans be able to resist the tyranny, or will they gang up and use their own AI to hack their enemies’ AI so that it goes “scary” rogue? Or will we be subjected to government psychological operations pressuring us to give up our freedoms and free will to technocracy?
Given our record of self-destructive behavior, of aggression and repression against truth-tellers when facts do not align with political leaders’ motives, and of a bleak tendency to think too highly of our own intelligence, can humans even be trusted to know how to control and restrict AI well now, before we cross the Rubicon?
In other words, what should we fear more? The unimaginable superiority of future AI, or the irrepressible hubris, greed and vanity of humans that AI can socially engineer?