Tests show the latest chatbot model pulls from a controversial encyclopedia tied to extremist sources and unverified claims.
OpenAI’s latest large‑language model, GPT‑5.2, has begun citing the AI‑generated encyclopedia Grokipedia in responses to sensitive geopolitical and historical questions, raising fresh concerns about the spread of misinformation through mainstream chatbots.
Tests by The Guardian found that ChatGPT referenced Grokipedia nine times when answering more than a dozen queries, including questions about Iran’s political structure and the biography of British historian Sir Richard Evans, who served as an expert witness in a Holocaust‑denial libel case.
In some instances, the model amplified claims not clearly present in more established reference sources. For example, ChatGPT asserted a connection between Iran’s supreme leader’s office and the telecommunications firm MTN‑Irancell that goes beyond what appears in Wikipedia, and repeated biographical details about Evans that originated in Grokipedia. Notably, the model did not cite Grokipedia when answering prompts about the 6 January 2021 US Capitol attack or alleged media bias against Donald Trump, suggesting it applies internal filters that steer away from the encyclopedia on highly scrutinized “culture‑war” topics while still relying on it for narrower or less visible subjects.
Grokipedia is an AI‑authored alternative to Wikipedia that does not allow direct human editing. A Cornell University‑led analysis found that the platform cites the neo‑Nazi forum Stormfront 42 times, the conspiracy‑oriented outlet Infowars 34 times, and the white‑nationalist site VDare 107 times, drawing criticism for embedding extremist and low‑credibility references into its entries. Researchers and watchdog groups have described Grokipedia as a vehicle for “cloaking” misinformation and amplifying right‑wing narratives on issues ranging from HIV/AIDS to US politics.
OpenAI told The Guardian that its web‑search‑augmented systems aim to draw from a broad range of publicly available sources and viewpoints, while applying safety filters to reduce links tied to “high‑severity harms”. The firm’s citations feature is meant to show which sources informed a given answer, but the reliance on Grokipedia for sensitive topics has unsettled experts, who warn that AI‑to‑AI sourcing can create recursive loops of unverified or biased information.
xAI has dismissed media scrutiny with a boilerplate response characterizing mainstream outlets as “legacy media lies”, even as governments and advocacy groups push for tighter oversight of Grok and Grokipedia amid broader concerns about AI‑fueled disinformation and harmful content.