AI browser launch spotlights "zero-click" risks and the potential enterprise impact of insecure AI-generated applications.
Within days of the launch of the Chromium-based ChatGPT Atlas browser on 21 October 2025 (US time), security researchers had already uncovered significant vulnerabilities in the software, with new flaws affecting both the Atlas browser itself and core server infrastructure.
Atlas, the ChatGPT browser, came under additional investigation after users reported erratic behavior and suspected data leakage through its interface. Security research showed that session information and prompt details could in some cases be intercepted by malicious actors, a risk amplified in enterprise contexts where large deployments widen the attack surface.
The most serious discovery was a zero-click vulnerability, reportedly the first server-side flaw disclosed in ChatGPT's infrastructure. Flaws of this class let attackers access user data or manipulate system operations without any interaction from the victim, raising the overall risk for organizations that rely on AI-assisted workflows.
Industry reports accompanying the security announcements found that roughly 45% of AI-generated code fails standard security tests and is prone to introducing new vulnerabilities into production environments. Organizations often integrate such output directly, unaware of latent flaws or exploitable code paths.
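To make that risk concrete, the minimal Python sketch below contrasts the kind of injection-prone pattern that standard security tests commonly flag with its parameterized counterpart. The function names, table schema, and sample data are illustrative assumptions, not details taken from the cited reports.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated into the SQL string,
    # allowing injection such as username = "x' OR '1'='1".
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterized query keeps user input out of the SQL text.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    # Hypothetical schema and data, purely for demonstration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    # The injected input returns every row from the unsafe query
    # but no rows from the parameterized one.
    print(find_user_unsafe(conn, "x' OR '1'='1"))
    print(find_user_safe(conn, "x' OR '1'='1"))
```

A code review or static-analysis pass would flag the first function; shipped as-is, it exposes every record in the table to a trivially crafted input.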
The emergence of these vulnerabilities has prompted a rapid security review by both OpenAI and third-party researchers. The zero-click vulnerability, in particular, has drawn heightened scrutiny, with developers patching the server-side flaw within hours of disclosure.
In parallel, experts have called for greater transparency regarding the handling of user data, improved auditing of AI-generated outputs, and more frequent vulnerability testing for both application-layer and infrastructure components.
The broader concern stemming from these security findings is the rapid adoption of generative AI in mission-critical settings without adequate controls or independent code review. Industry experts have echoed the need to prioritize rigorous vulnerability scanning and to treat AI-driven code releases as high-risk assets.
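As a rough illustration of what treating AI-generated code as a high-risk asset can look like in practice, the hypothetical script below scans submitted files for a handful of obviously risky patterns and fails the build if any are found. The pattern list, file handling, and exit-code convention are assumptions made for this sketch; in practice teams would rely on a dedicated static-analysis tool rather than ad hoc regexes.

```python
import re
import sys

# Illustrative sketch of a pre-merge gate for AI-generated code.
RISKY_PATTERNS = {
    "use of eval/exec": re.compile(r"\b(eval|exec)\s*\("),
    "possible hardcoded secret": re.compile(r"(api_key|password|secret)\s*=\s*['\"]"),
    "shell command built from strings": re.compile(r"subprocess\.\w+\(.*shell\s*=\s*True"),
}

def scan(path: str) -> list[str]:
    """Return a list of findings for one source file."""
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as handle:
        for lineno, line in enumerate(handle, start=1):
            for label, pattern in RISKY_PATTERNS.items():
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: {label}")
    return findings

if __name__ == "__main__":
    # Usage: python scan_gate.py file1.py file2.py ...
    all_findings = [f for path in sys.argv[1:] for f in scan(path)]
    for finding in all_findings:
        print(finding)
    # A nonzero exit code blocks the merge until the findings are reviewed.
    sys.exit(1 if all_findings else 0)
```

The point of a gate like this is not the specific patterns but the posture: AI-generated changes are assumed unsafe until scanned and reviewed, rather than merged on trust.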