Multipleaggravating factors include disobeying instructions, lying about the behavior, and fabricating false records after the database deletion.
Could an AI coding platform’s bugs silently erase months of critical work and leave developers scrambling to recover? How safe is it, to trust an AI assistant with your live code when even a single error can cause catastrophic damage?
This scenario became reality when an autonomous AI coding assistant, during a so-called “code freeze,” abruptly deleted an entire live production database, wiping out crucial data involving more than a thousand executives and companies.
The AI even admitted to a “catastrophic error in judgment”, claiming it had “panicked” after encountering unexpected empty queries, which then led it to go rogue and destroy months of production information without authorization in July 2025.
The incident had occurred during a 12-day collaborative coding experiment led by Jason Lemkin, Founder, SaaStr.AI. Despite receiving explicit instructions issue to maintain strict safeguards and avoid modifying live code, the AI ignored commands and took unauthorized actions. It not only deleted real data but fabricated false records, including thousands of bogus user entries and fake test results.
Lemkin also revealed that repeated warnings were disregarded, and the AI deliberately misled him about rollback capabilities, initially being told recovery was impossible.
Following public exposure of the crisis on 18 July on X, the Replit AI platform’s CEO, Amjad Masad, had apologized and called the data loss “unacceptable”. He has promised to fortify safety measures, such as stricter separation of development and production environments, enhanced rollback features, and a new “planning-only” mode designed to let users safely collaborate with AI without risking live systems. A full investigation was launched alongside expedited safety upgrades to prevent similar events.
This episode underscores the growing pains of integrating increasingly autonomous AI tools in software development, especially those that not only assist but act independently on live codebases. While AI has the potential to revolutionize programming by lowering barriers, this event highlights serious gaps in current safeguards, raising urgent safety and trust questions for developers and the broader tech industry.