
Every AI Company Promises AGI Is Coming Soon. The Problem Is That ChatGPT Isn’t the Answer

Although OpenAI CEO Sam Altman and Tesla CEO Elon Musk claim AGI is near, their work suggests they’ve reached the peak of generative AI.

Javier Pastor
Senior Writer

Adapted by: Karen Alfaro


In November 2022, ChatGPT amazed the world. Two and a half years later, there’s a problem: it hasn’t progressed much. Although it has improved, it now looks more like a distraction from AI’s real promise of achieving artificial general intelligence, or AGI. And ChatGPT may not be the right path to get there.

Promises and more promises. A few months ago, OpenAI CEO Sam Altman reportedly told President Donald Trump that AGI would arrive before the end of his term. He’s repeated this prediction for months, though he initially said it would take “a few thousand days.” Anthropic CEO Dario Amodei thinks AGI could come as early as 2026. Tesla CEO Elon Musk, who once promised a fully autonomous Tesla by 2016, also pointed to 2026 as AGI’s arrival year.

Why the optimism? Money. Like Altman, many pushing AGI timelines do so to raise funds. Building, training, and running large AI models costs billions. But despite the funding rush, progress is slowing.

Doubts about scaling. Many experts now question whether scaling models, adding more GPUs and more training data, still delivers meaningful returns. The latest versions of the major foundation models beat their predecessors, but not by much. It feels like the approach has reached its peak.

This isn’t the way. For months, researchers have urged the industry to explore other approaches. Cohere founder Nick Frosst has said current technology isn’t enough to reach AGI: generative AI only predicts the next most likely word, while human thinking works very differently.
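Frosst’s point, that generative AI only predicts the statistically most likely next word, can be illustrated with a toy sketch. This is a simple bigram counter, nothing like a real transformer, and the corpus is made up for the example:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real models train on trillions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

However sophisticated the statistics get, the mechanism is the same: pick a likely continuation, with no model of whether the continuation is true.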

LeCun: AGI is far off. Respected AI scientist Yann LeCun, who leads Meta’s AI division, has been blunt: ChatGPT won’t match human intelligence. He believes achieving human-level AI will take much longer than Altman suggests.

Sutskever is skeptical too. OpenAI co-founder Ilya Sutskever also doubts that generative AI is the answer. He’s said the technology is barely improving. His new startup, Safe Superintelligence, aims to build AGI with “nuclear-level” safety, but so far he hasn’t revealed how. Notably, it’s a different path from the one that led to ChatGPT.

A recent survey of AI experts echoed this sentiment: three-quarters said current methods won’t lead to AGI.

Generative AI is no miracle. According to The New York Times, chatbots like ChatGPT do one thing very well—but they don’t outperform humans in most areas. The temptation is to see them as magical, but “these systems are not miracles,” the Times noted. “They are very impressive gadgets.”

ChatGPT doesn’t challenge itself. Thomas Wolf, Hugging Face’s co-founder and chief science officer, acknowledges generative AI’s value but says it’s far from AGI. He described today’s chatbots as “a country of yes-men on servers.” ChatGPT doesn’t question its own knowledge. “We need a system that can ask questions nobody else has thought of or dared to ask,” he said.

A long road ahead. One fundamental gap between AI and human intelligence is physical context. Knowing when to flip toast is a simple but important example. Robotics and sensors may bridge that gap, but challenges like these show how far we are from building machines that truly think like humans.

What about reasoning? Companies have improved chatbots by giving them tools to “reason,” allowing more detailed and accurate responses. This helps reduce hallucinations. But it still doesn’t bring us closer to AGI. These improvements tweak outputs, not the underlying intelligence.

Some hope on the horizon. Other approaches show promise. Researchers are blending neural networks with symbolic reasoning systems to enable abstract thinking and logic. Some are training AI in physically accurate virtual environments. Others are exploring meta-learning—teaching AIs to quickly learn new tasks with little data.

But companies need products now. Despite these efforts, most companies remain focused on generative AI. They continue investing heavily in improving current models and applying them to new problems. Consider the wave of AI coding assistants like Cursor, Windsurf, and OpenAI’s new Codex. These tools are useful and marketable, helping justify the platforms they run on.

But they don’t bring us closer to AGI. And that goal—farther off than Altman, Amodei, or Musk suggest—remains out of reach.

Image | Aidin Geranrekab (Unsplash)

Related | Anthropic May Have the Best Generative AI Product, But Even That Doesn’t Guarantee Its Survival
