AI revolution or hype? Experts split on future of artificial general intelligence

Leading figures in the artificial intelligence (AI) industry are amplifying claims that “strong” artificial intelligence is on the verge of surpassing human capabilities. However, many researchers argue that these assertions are more about marketing than scientific reality.
The notion of artificial general intelligence (AGI) emerging from today’s machine-learning models has fueled both utopian and dystopian visions of the future. While some predict AI-driven prosperity, others fear it could lead to human extinction.
“Systems that start to point to artificial general intelligence are coming into view,” OpenAI CEO Sam Altman wrote in a blog post last month. Anthropic CEO Dario Amodei has gone further, suggesting that AGI could arrive as soon as 2026.
Such bold forecasts have helped justify the massive financial investments pouring into AI infrastructure, from advanced computing hardware to energy-intensive data centers. However, not all experts are convinced that artificial general intelligence is imminent.
AI experts challenge industry hype
Meta’s chief AI scientist, Yann LeCun, remains skeptical. Speaking to Agence France-Presse (AFP), he asserted, “We are not going to get to human-level AI by just scaling up LLMs”—referring to the large language models that power systems like ChatGPT and Claude.
This perspective aligns with a recent survey conducted by the U.S.-based Association for the Advancement of Artificial Intelligence (AAAI). More than 75% of respondents stated that simply scaling up current AI models would not lead to artificial general intelligence.
Kristian Kersting, a prominent AI researcher at the Technical University of Darmstadt in Germany, believes corporate leaders use artificial general intelligence claims as a strategic tool.
“These companies have made enormous investments, and they need to justify them. They claim, ‘This technology is so powerful and dangerous that only we can control it—so trust us,’” he said.
‘Sorcerer’s Apprentice’ and AI’s unpredictability
While skepticism prevails, some experts acknowledge the potential risks of advanced AI. Geoffrey Hinton, who won the 2024 Nobel Prize in Physics for his work on neural networks, and Turing Award recipient Yoshua Bengio have both warned about AI’s unchecked power.
Kersting likens the situation to Goethe’s famous poem “The Sorcerer’s Apprentice,” where an inexperienced magician loses control over an enchanted broom. This cautionary tale resonates with concerns about AI systems developing beyond human control.
One widely discussed thought experiment, proposed by philosopher Nick Bostrom, is the “paperclip maximizer”: a hypothetical AI designed solely to produce paperclips that becomes so fixated on its goal that it converts all available matter, including humanity, into paperclips. The scenario illustrates the risks of AI systems whose objectives are not aligned with human values.
Despite acknowledging these concerns, Kersting maintains that “human intelligence, with its diversity and quality, is so extraordinary that it will take a long time—if ever—for AI to match it.”

Immediate AI risks more pressing than artificial general intelligence
While AGI remains a distant and uncertain prospect, many researchers argue that real-world AI issues demand urgent attention. Bias in AI systems, for instance, can lead to discrimination in areas like hiring, law enforcement, and financial services.
“We should be focusing more on the near-term harms AI is already causing,” Kersting emphasized.
AGI debate reflects differing perspectives
The divide between industry leaders and academic researchers may stem from their career paths, according to Seán Ó hÉigeartaigh, director of the AI: Futures and Responsibility program at the University of Cambridge.
“If you’re highly optimistic about AI’s potential, you’re more likely to join a company that’s investing heavily in its development,” he explained.
Even if artificial general intelligence takes longer to materialize than industry leaders claim, Ó hÉigeartaigh insists the topic deserves serious consideration. “If we were told aliens might arrive by 2030, or that another pandemic was imminent, we’d certainly prepare for it. AGI should be no different.”
Yet, he acknowledges the difficulty of conveying these ideas to policymakers and the public. “The moment you start talking about super-intelligent AI, people dismiss it as science fiction. That makes it even harder to have the necessary discussions.”
Future of AI: Bold claims or real breakthroughs?
As AI development accelerates, the tension between corporate optimism and academic caution is likely to persist. Whether AGI arrives within years or remains a distant dream, the debate will shape the future of technology—and humanity itself.