At the center of the controversy are thousands of non-consensual sexual images created by a generative chatbot, and a perceived lack of corporate accountability.
Indonesia and Malaysia have swiftly imposed restrictions on the AI chatbot Grok after recent incidents in which it was misused to create non-consensual explicit images, particularly targeting women and minors.
On 10 January 2026, Indonesia became the first nation worldwide to block access to Grok, citing grave risks to human rights and digital safety. Malaysia’s Communications and Multimedia Commission (MCMC) followed suit the next day, 11 January, temporarily barring local users amid repeated instances of the tool generating offensive sexual content and manipulated imagery without consent.
Reports indicate that at peak usage in early January, more than 6,700 such images were created every hour, affecting hundreds of victims, including minors. Indonesia’s Minister of Communication and Digital Affairs, Meutya Hafid, condemned the deepfakes as “serious violations of dignity and citizen security in digital spaces” and demanded explanations from Musk’s X platform.
Similar reactions from other countries
MCMC has highlighted repeated abuses producing pornographic, indecent, and manipulated content, underscoring the platform’s failure to curb harm proactively. Regulators had issued prior warnings but met limited response. The MCMC then sent formal notices to X and xAI on 3 January and 8 January, urging robust technical safeguards and content moderation. X’s replies on 7 January and 9 January relied on user reporting, which regulators deemed inadequate for addressing inherent flaws in the AI’s design.
Meanwhile, MCMC has insisted restrictions will persist until protections, especially against child exploitation content, are implemented. India’s IT Ministry has demanded fixes within 72 hours; France has launched a probe; the UK’s Ofcom is conducting an urgent review; and Australia’s Prime Minister has denounced the content as “utterly unacceptable”.
With Grok integrated into X, where Indonesia accounts for the third-largest user base, these actions signal escalating demands for AI accountability amid fears of unchecked deepfake proliferation. Victims and advocates are urging swift, binding global standards to prevent digital abuse.