A California class-action lawsuit alleges that a Big Tech firm covertly activated AI spying across its email, chat, and meeting services.
What would you do if you found out that the apps you regularly update and patch had been secretly spying on your private emails, chats, and video calls?
This is the allegation in a recent class-action lawsuit filed against Google in San Jose, California, accusing the firm of covertly activating its Gemini AI assistant across Gmail, Google Chat, and Google Meet in October 2025 without users’ consent.
The suit claims that Google did away with its previous opt-in model, allowing Gemini to access and analyze users' entire communication histories, including emails, attachments, chats, and video meeting transcripts, without their knowledge. Plaintiffs say this violates the California Invasion of Privacy Act of 1967, which prohibits such secret monitoring without the consent of all parties involved.
Although users can disable the feature, plaintiffs argue that the opt-out process is buried so deep in privacy settings that many never manage to turn it off. As a result, the AI assistant keeps harvesting data by default in the background, which plaintiffs describe as a massive breach of trust exposing private information on an unprecedented scale.
The complaint emphasizes that the AI effectively “reads” every email and message users send or receive, exploiting this data for AI-powered features without clear or explicit permission.
Google has not publicly responded to the allegations beyond affirming that Gemini operates within its privacy guidelines. Privacy experts, however, note that the case could set a crucial precedent defining the limits of AI integration in consumer services and the necessity of explicit, informed consent, reinforcing user autonomy and privacy protections as the use of AI technologies expands. The lawsuit also adds to ongoing scrutiny of how tech giants' AI tools collect, use, and train on personal data, amplifying calls for stronger transparency and governance frameworks around AI ethics and data privacy.
The outcome could fundamentally shape how firms deploy AI assistants by default and set the standard for privacy disclosures and user consent in the AI era, pushing companies to rethink AI product rollouts, adopt privacy-by-design practices, and offer straightforward opt-out mechanisms that respect consumer privacy rights.