Internal warnings and whistleblower accounts are mounting at social media firms accused of engineering compulsive use and endangering youth mental health.
Several social media giants are facing escalating legal and political pressure over allegations that they deliberately engineered addictive products that put minors at risk, even as internal warnings and whistleblower accounts have mounted over several years.
The controversy is converging in a series of landmark US lawsuits and trials that could redefine platforms’ liability for youth mental health harms and online safety. In Los Angeles, a closely watched case brought by a young woman known as K.G.M. accuses Instagram, TikTok, Snapchat, and YouTube of using design choices modeled on slot machines and other behavioral techniques to keep children online for as long as possible, regardless of the impact on their well-being.
Her complaint says she began using social media at around age 10 and later developed compulsive use, depression, and suicidal thoughts, allegedly driven by infinite-scroll feeds, autoplaying videos, and constant notifications that made disengagement “nearly impossible.” TikTok has reached an agreement in principle to settle her claims, but Meta and YouTube are proceeding to trial and deny that their platforms are defective or intentionally harmful.
The case is one of several grouped in multidistrict litigation in California, where more than 1,800 children, parents, school districts, and state attorneys general argue that the companies pursued growth while ignoring evidence of serious harms to minors. Newly unsealed filings in that litigation allege that Meta knew its products exposed teenagers to eating disorder content, suicide and self-harm material, and frequent unwanted contact from adults, yet often failed to act or rejected proposed safety fixes that could reduce engagement.
According to the plaintiffs, internal data suggested that simple changes — such as defaulting teen accounts to more private settings — could have prevented millions of unwanted interactions every day, but leadership chose not to implement them.
Landmark battles scheduled for 2026
Other lawsuits and whistleblower claims describe similar patterns across platforms, alleging that recommendation systems and discovery features routinely push harmful body-image content, encourage risky social comparison, and facilitate contact between minors and strangers. Parents and school districts have asserted that those design choices have contributed to surges in anxiety, depression, eating disorders, and self-harm among young people.
The social media firms being sued counter that they have rolled out age restrictions, content moderation tools, and parental controls, and argue that federal law, notably Section 230 of the Communications Decency Act, shields them from many claims based on user-generated content. With opening arguments already underway in Los Angeles and additional trials scheduled this year, courts will now have to decide whether these alleged “addictive designs” cross the line from aggressive product optimization into negligent or deceptive conduct toward children.
The legal outcome could significantly expand how cybersecurity and online safety practitioners think about platforms’ duty of care, particularly around algorithmic design, data-driven engagement tactics, and default settings for minors.