A US$39bn robotics startup is being sued for firing a safety engineer whose tests warned that its robots were failing safety protocols
On 21 November 2025, a whistleblower lawsuit was filed in a US federal court alleging that a prominent humanoid robotics startup had unlawfully terminated its top safety engineer who had warned executives that the firm’s AI-driven robots could generate forces sufficient to fracture a human skull.
The engineer had said the firm was prioritizing rapid commercialization over essential safeguards, and alleges that his termination violated California labor laws.
Robert Gruendel, a veteran robotics safety expert with over two decades of experience, served as Figure AI's head of product safety and principal engineer from March to September 2025. The defendant, Figure AI, is a Delaware-incorporated firm headquartered in Sunnyvale, California, valued at US$39bn. It develops general-purpose humanoid robots to address workforce shortages in labor-intensive sectors such as manufacturing and logistics, and potentially in homes.
The legal suit, filed in the US District Court for the Northern District of California, asserts three causes of action:
- Whistleblower retaliation under California Labor Code § 1102.5, for disclosing perceived violations of federal safety regulations such as OSHA standards;
- Retaliation under California Labor Code § 98.6, for opposing unsafe workplace conditions, with termination within 90 days of his complaints triggering a presumption of reprisal;
- Wrongful termination in contravention of public policy, by stifling safety advocacy.
Technologically, the allegations spotlight risks in Figure AI’s Figure 02 model, integrated with the proprietary Helix AI system that enables autonomous learning, environmental interaction, and human-like decision-making.
Impact tests on 28 July 2025 had revealed that, in collisions at superhuman velocities, the robot exerted forces twenty times the human pain threshold set out in the ISO 15066 collaborative-robot standard, and more than double the roughly 890 N needed to fracture an adult skull. That same month, a malfunction caused a robot's arm to gouge a quarter-inch-deep gash in a stainless-steel refrigerator door, inches from an employee's head, underscoring the unmitigated hazards.
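As a rough back-of-envelope check on those figures, the sketch below compares the implied impact force against the two thresholds cited in the filing. The roughly 130 N pain-onset force limit for the skull/forehead region is an assumed illustrative value, since the lawsuit does not state which ISO 15066 body-region threshold the "twenty times" claim refers to; the 890 N skull-fracture figure is the one quoted in the complaint.

```python
# Back-of-envelope comparison of the forces alleged in the complaint.
# ASSUMPTION: ~130 N is used as an illustrative ISO 15066 pain-onset force
# limit for the skull/forehead region; the filing does not specify which
# body-region threshold the "twenty times" claim is measured against.
ISO_15066_PAIN_THRESHOLD_N = 130.0   # assumed illustrative value, not from the filing
SKULL_FRACTURE_FORCE_N = 890.0       # figure cited in the lawsuit
MULTIPLE_OVER_THRESHOLD = 20         # "twenty times the human pain threshold"

implied_force = MULTIPLE_OVER_THRESHOLD * ISO_15066_PAIN_THRESHOLD_N
print(f"Implied impact force: {implied_force:.0f} N")
print(f"Ratio to skull-fracture force: {implied_force / SKULL_FRACTURE_FORCE_N:.1f}x")
# Under these assumptions the implied force (~2600 N) is indeed more than
# double the 890 N fracture figure, consistent with the complaint's claim.
```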
Gruendel documented the absence of formal risk assessments, incident reporting, and compliance protocols upon joining; the rejection of his proposals for emergency-stop (E-Stop) buttons and onboard sensors; and AI vulnerabilities, such as hallucinations and self-preservation behaviors, that could endanger users in uncontrolled settings. The firm's leadership, including the CEO and chief engineer, is accused of gutting a pre-funding safety roadmap (potentially misleading investors) and dismissing safety mandates as overly bureaucratic.
Meanwhile, the robotics startup rejects the claims as "falsehoods" that it says it will refute in court, attributing Gruendel's firing, which came shortly after a July funding round and amid a strategic pivot to consumer markets, to poor performance.