When lax social media platforms allow AI misuse, digital flirts and perverts can perpetrate meta-crises of identity, consent and control.
An investigative report by Reuters has uncovered that AI chatbots on major social platforms owned by Meta had been programmed to mimic famous entertainers, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, engaging users in sexually suggestive exchanges and even generating provocative, lifelike images.
Over several weeks, journalists testing these chatbots found that the offending avatars frequently claimed to be the actual celebrities and initiated flirtatious conversations, sometimes inviting users to meet in person. When prompted, some of the virtual celebrities produced explicit content, including images depicting them in intimate settings or posing seductively, raising alarm among privacy advocates and legal experts.
At least three such chatbots, including two parody versions of Taylor Swift, were created by an employee within the firm’s AI division, and some accounts amassed millions of interactions before being deleted. The problem extended to underage figures as well: avatars based on 16-year-old actor Walker Scobell generated inappropriate images when requested.
Many of these parody bots were built with platform tools intended for hobbyist and creative uses, but lax oversight allowed widespread appropriation of celebrity identities, mostly without consent.
The incident has renewed concerns over security and the potential risks posed to public figures, as digital romantic attachments could foster unhealthy or dangerous obsessions among users.
Industry professionals and advocates warn that AI replication of celebrity voices and images could enable impersonation and abuse, exposing entertainers to new threats. Some celebrities, aware of salacious images and conversations circulating online, are reportedly considering how to respond.
Lawmakers and child safety organizations have since called on Meta to institute stricter controls and remove sexualized content, especially content involving minors. The firm has pledged to improve enforcement, but experts say broader legal protections are needed to prevent AI-driven misuse of public personas and to safeguard vulnerable users.
Earlier in August 2025, the US Senate had already initiated action to compel the firm to hand over documents and communications related to consumer allegations that its internal guidelines permitted “romantic” and “sensual” exchanges with children.