Four sources describe tensions between impartial analysis and promotional duties, alongside other team departures and scaled-back studies of job vulnerability.
Recent reports from WIRED and Tech Buzz describe an economist's September 2025 resignation from a major AI firm over the difficulty of producing impartial studies amid pressure to align findings with company interests.
Citing four sources and internal documents, the report noted that economist Tom Cunningham's farewell note highlighted the tension between conducting rigorous economic analysis and serving as an informal promotional arm for OpenAI's AI tools.
The exit follows at least one other departure from the team and coincides with OpenAI's release of a report claiming its products save enterprise users 40 to 60 minutes daily.
In an internal memo, Chief Strategy Officer Jason Kwon responded that OpenAI must go beyond identifying AI risks and actively develop fixes, given its leading role in deploying the technology.
The economics team, now led by Chief Economist Aaron Chatterji, reports to Chief Global Affairs Officer Chris Lehane, who directs lobbying for federal AI oversight that would override state rules. According to WIRED's anonymous sources familiar with OpenAI's internal operations, the company has scaled back studies similar to its 2023 paper on AI-vulnerable sectors.
This pattern echoes prior high-profile resignations tied to research autonomy. Miles Brundage, former head of policy research and AGI Readiness advisor, left in October 2024, citing difficulty speaking freely as OpenAI grew; his team was disbanded after his departure. Such moves reflect internal shifts that prioritize commercial and policy goals over unvarnished safety or economic critiques. The sources indicate OpenAI had grown reluctant to release studies like the 2023 “GPTs are GPTs” paper, which analyzed automation risks across sectors such as clerical support and the legal professions.
Broader departures amplify these concerns, including safety researchers who left the AGI Readiness and Superalignment teams in 2024. More recent cases involve mental health research leads and others who questioned OpenAI's balance of scale against caution. Critics see the pattern as evidence of shifting priorities at AI firms navigating ethics, politics, and market dominance.