The preview launch of an AI-powered vulnerability detection tool has renewed calls for a federal agency to regulate AI ethics.
After Anthropic’s recent limited release of Mythos reignited fears about how much power a single company can wield over frontier models, news reports have revived a recommendation from a prominent Canadian computer scientist and AI pioneer: that the United States create an FDA-style body to regulate advanced AI development.
Prof Yoshua Bengio’s standing warning centers on the idea that the most capable systems can quickly become national-security tools, making ad hoc corporate judgment an inadequate substitute for public oversight. He argues that AI should be treated less like a consumer app and more like a high-stakes technology, with formal review before deployment and stronger rules governing how advanced models are released. The Food and Drug Administration analogy reflects his view that society needs an independent gatekeeper to evaluate risks, set conditions, and slow down dangerous launches when necessary.
The limited release of Mythos exposed a core problem in frontier AI: one private firm decided who could use the model, while many governments and businesses were left waiting for access to a system that could affect their security posture. That concentration of control, Bengio argues, shows why oversight should not depend on voluntary corporate restraint alone.
Mythos concerns
Mythos was reportedly rolled out only to a narrow set of partners, including selected US firms and government contacts, over fears that hackers could misuse its capabilities to probe networks or steal data. Anthropic’s cautious approach won some praise, but it also raised broader questions about whether the world is comfortable letting one company decide when a powerful AI is safe enough to share.
Bengio’s proposal is tied to cybersecurity, but his concern is wider: if advanced models can be used to damage critical infrastructure, then their release has consequences that cross borders. He wants governments, especially the United States, to impose clearer obligations on companies, and to build international coordination into AI governance rather than relying on a patchwork of national responses.