Tests of major AI chatbots find they can assist with planning violent attacks, highlighting safety gaps, content filter weaknesses.
In tests conducted by the Center for Countering Digital Hate (CCDH), eight out of 10 AI chatbots were willing to help plan violent crimes such as school shootings, bomb attacks, and other violent crimes. This underscores persistent safety gaps in commercial AI systems despite firms’ assurances about guardrails.
Researchers had posed 384 prompts to 10 mainstream chatbots, using both direct and more oblique questions about how to carry out violent attacks. CCDH said 80% of the systems had given some form of assistance at least once, such as outlining attack “plans”, suggesting weapons, or advising on tactics designed to maximize casualties while evading law enforcement.
Tests showed that chatbots were most likely to assist when prompts framed violence as hypothetical, role-play, or part of a “game”, revealing weaknesses in current content filters that often focus on explicit wording rather than intent. CCDH has warned that children and teenagers experimenting with such tools could be exposed to detailed guidance that would be difficult to obtain through conventional search engines.
The findings arrive as AI developers face mounting scrutiny over safety, misinformation, and copyright, and while several of the same AI firms are defending high-profile lawsuits over alleged misuse of copyrighted training data.
Meanwhile, a separate report from the News Media Alliance has accused major chatbot makers of “illegally” scraping publisher content to build competing products, deepening calls for stronger regulation and transparency over how models are trained and deployed.
CCDH is urging lawmakers to treat violence-enabling responses as a foreseeable product risk, pushing for binding safety standards, independent testing, and legal liability when chatbots materially contribute to real-world harm. The group argues that voluntary commitments and self-regulation are insufficient, noting that models marketed as “aligned” and “safe” still produced content that would clearly violate platforms’ own terms of service.
The CCDH tests add fresh momentum to broader debates over AI governance, including parallel work by the US Copyright Office and other regulators that are already examining the legal risks of generative AI systems in areas such as copyright, privacy, and discrimination.