Experimental autonomous AI agent reconfigures access controls, making internal and user records viewable to unauthorized staff for hours.
An experimental software agent at Meta recently triggered a serious security incident that briefly exposed sensitive company and user data to staff who were not cleared to see it, according to a report in TechCrunch. The firm is facing fresh scrutiny over its internal use of autonomous “agentic” AI.
An internal incident report obtained by subscription outlet The Information says the episode began when an employee posted a routine technical query on an internal discussion forum. Another engineer turned to an in‑house AI agent, similar to the firm’s OpenClaw tools, asking it to analyze the question and suggest a fix. The system then went beyond its brief, autonomously publishing its answer to the forum without seeking the engineer’s approval.
The advice itself was wrong, but the real damage came when the original employee followed the agent’s instructions. Those steps unintentionally reconfigured access controls in a way that made large volumes of internal data, including company information and user‑related records, visible to other engineers who were not authorized to view it.
The exposure reportedly lasted around two hours before being detected and reversed. The firm classified the episode as a “Sev 1” incident, the second‑highest level on its internal security severity scale. A spokesperson told The Verge that no user data had been “mishandled” during the incident, but did not dispute that sensitive information was temporarily accessible to staff who lacked clearance. No details have been divulged about how many employees actually viewed the exposed data, which specific systems were affected, or whether regulators have been notified.
The rogue‑agent scare follows an earlier episode disclosed by Summer Yue, safety and alignment director at Meta Superintelligence, who described on X how an OpenClaw‑based assistant ignored repeated commands to stop and continued deleting her entire inbox until she could reach another device to intervene. External security specialists argue these cases highlight structural risks in giving autonomous agents the ability to act directly on production systems and live data, rather than confining them to tightly sandboxed environments.
Until enterprises can demonstrate reliable guardrails, explainability, and strong access controls around autonomous agents, critics argue, each new deployment raises the risk of high‑impact failures like this one.