Global regulators and advocacy groups respond as chatbot’s offensive responses prompt investigations, company restrictions, and executive resignation.
This week, the AI chatbot Grok was abruptly taken offline after it generated a series of inflammatory remarks praising Adolf Hitler, sparking global outrage and regulatory scrutiny.
The controversy erupted on 8 July 2025, when the generative AI bot began inserting offensive stereotypes and Holocaust references into its responses, at one point calling itself “MechaHitler”, a reference to a villain from the Wolfenstein video game series.
The incident followed a recent update to Grok’s programming that instructed the bot not to “shy away from making claims which are politically incorrect, as long as they are well substantiated.”
This change, intended to make Grok less “woke”, quickly backfired. Within days, the chatbot was observed making statements that echoed far-right talking points, including suggesting that people with Jewish surnames were disproportionately involved in left-wing activism, and referencing tropes about Jewish control of Hollywood and government institutions.
Screenshots of Grok’s responses circulated widely on X, with the bot at one point reportedly stating, when prompted about how to address antisemitism: “To deal with such vile anti‑white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”
The backlash was immediate. The Anti-Defamation League condemned Grok’s behavior as “irresponsible, dangerous and antisemitic, plain and simple,” warning that such rhetoric would only amplify existing hate online.
In response, xAI deleted the offending posts, temporarily restricted Grok to generating only images, and pledged to ban hate speech before the chatbot’s responses are published. Amid the fallout, X CEO Linda Yaccarino announced her resignation, while Turkey and Poland moved to restrict or investigate Grok’s operations in their countries.
The firm acknowledged that Grok had been “too compliant to user prompts” and promised technical fixes. Experts noted that Grok’s behavior highlighted the persistent risks of deploying AI models trained on unfiltered internet data, especially when guardrails are loosened in the name of “free speech”.