A major AI firm’s CEO has publicly cast doubt on the cautionary warnings of some of the field’s most celebrated pioneers.
Speaking on a recent podcast, Nvidia CEO Jensen Huang warned that apocalyptic rhetoric around AI is doing “a lot of damage” to the industry and to public debate, arguing that persistent doomsday narratives risk chilling both innovation and investment in safer, more capable systems.
In the podcast, Huang described 2025 as a “battle of narratives” in AI, with pessimistic voices dominating public discussion and crowding out more grounded conversations about how to deploy the technology responsibly at scale. He said that prominent technologists and academics have helped entrench an “end of the world” storyline that he considers closer to science fiction than to policy guidance, and that this framing is “not helpful to people”, “not helpful to the industry”, and not helpful to society or to governments trying to understand the technology.
Huang argued that when “90% of the messaging” focuses on existential risk, it can discourage the practical investments needed to make AI systems safer, more functional and more broadly useful in the real economy. Without naming specific companies, Huang also raised the specter of regulatory capture, criticizing firms that publicly urge governments to impose more AI rules. “No company should ask the government for more regulation,” he said, contending that such advocacy is “deeply conflicted” because corporate leaders are “obviously” acting in their own interest and, in some cases, may be seeking to lock in advantages over smaller rivals.
Shallow views versus Murphy’s law
The remarks underscore Huang’s longstanding disagreements with leading AI safety advocates over the technology’s economic and social impact. He has previously pushed back on warnings that advanced AI could eliminate a large share of entry-level white-collar roles, arguing instead that new tools will augment workers and create fresh categories of employment.
The intervention comes as Huang’s firm continues to underpin most large-scale AI training and inference infrastructure in America, giving the company outsized influence over how the sector evolves. Observers have likened the AI boom to the dot-com bubble, in which explosive growth masked unsustainable promises. Huang’s framing also conveniently shields his firm from scrutiny on issues such as escalating energy demands and the ethical implications of weaponized models.
Dismissing calls for regulation as “regulatory capture” also sits awkwardly with Huang’s own advocacy against US export controls, which critics have framed as a self-serving policy complaint rather than a principled stand, underscoring a pattern of rhetoric aligned with commercial self-preservation at a time when Nvidia’s stock is highly sensitive to policy shifts.
In stark contrast, the cautious voices Huang critiques draw on decades of pioneering work by AI luminaries such as Geoffrey Hinton, Yoshua Bengio and Stuart Russell; Hinton and Bengio are Turing Award winners whose foundational contributions to deep learning lend substantial weight to their warnings. These experts, who have signed open statements placing AI extinction risk alongside pandemics and nuclear war as a global priority, ground their positions in rigorous analysis of misalignment, of highly capable systems operating without aligned values, and of real-world scaling challenges, not in idle speculation.