It seems artificial general intelligence (AGI) is almost within reach. This isn’t just me saying it: Several leading voices in the technology field have been saying it for some time now.
- OpenAI CEO Sam Altman believes AGI will be here in a few thousand days, but, of course, he needs to create hype to raise more funding.
- The slogan of xAI, Tesla CEO Elon Musk’s startup—known for its unfulfilled promises—is that with its AI, people can “understand the universe.”
- Nvidia CEO Jensen Huang also believes AGI will be here in five years (and he’ll be selling GPUs in the meantime).
- DeepMind CEO Demis Hassabis seems to agree with Huang, although, admittedly, Google seems a bit more cautious about these claims.
But that’s all just promises, expectations, and hot air. The unbridled optimism in this industry has created a colossal gold rush, with spectacular investments in new startups and especially in data centers—hello, Stargate—that look more typical of a bubble. Can these expectations be met? Sure. But nothing guarantees: 1) when users will get AGI or, more importantly, 2) that they’ll get it at all.
And that’s a big problem because expectations for AI development have skyrocketed, and that’s dangerous. Is AI a promising advance? Is it changing the world? At the moment, not as much as the hype suggests.
Other technological revolutions in the past took time and generated mistrust and skepticism in their early days. In fact, there are famous cases of technology predictions that aged spectacularly badly.
- Former IBM CEO Thomas Watson said in 1943, “I think there is a world market for maybe five computers.”
- Microsoft co-founder Bill Gates allegedly said, “640K ought to be enough for anybody,” although he later denied it.
- His good friend, former Microsoft CEO Steve Ballmer—who has even more money than he does—laughed at the iPhone when it came out.
- Robert Metcalfe, co-inventor of the Ethernet standard, predicted in 1995 that the internet would “soon go spectacularly supernova and in 1996 catastrophically collapse.” He later admitted his mistake and literally ate his words.
That’s a lot of big mistakes by people who, in theory, knew a great deal about what they were talking about. And they all prove one thing: Predicting the future isn’t only nearly impossible but dangerous. Which suggests we may have to give AI developments a (big) benefit of the doubt.
We’ll Never Have an AI System Equal to Einstein or Newton
Users today expect too much from AI development. This is exactly what Thomas Wolf, co-founder and chief science officer of Hugging Face, argued in a short but brilliant essay on X. According to him, what AI companies promised—and keep promising—is very different from what users are actually getting.
They promised that AI systems would revolutionize the world of science and that there would be new drugs, materials, and discoveries. The reality is that while there’s some really promising news, there are no revolutions yet.
For Wolf, what users have is “a country of yes-men on servers”: AI systems that sound assertive and state their opinions firmly and confidently, but rarely challenge the user. More importantly, they don’t challenge what they themselves know.
As he explained, many assume that people like Newton or Einstein were simply outstanding students, and that genius is what you get by extrapolating from a top student. As if giving AI systems the abilities of the world’s best students were enough. It isn’t.
“To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask.”
It’s a powerful message and probably true. While Altman claimed that superintelligence could accelerate scientific discovery, and Anthropic CEO Dario Amodei claimed AI development would help formulate cures for most types of cancer, the reality is different.
And the reality, according to Wolf, is that AI systems don’t generate new knowledge “by connecting previously unrelated facts.” They simply fill in the gaps of what humans already know. That may be an overly harsh assessment, because AI systems do manage to produce new knowledge and new content by connecting the facts they’re trained on. We’ve seen this recently in microbiology, for example, and in all those text, image, and video works that make us wonder what creativity is and whether machines can become creative.
Wolf isn’t alone in this view. Former Google engineer François Chollet, who now heads the ARC Prize, agrees. According to him, AI systems can memorize reasoning patterns—the ones used by reasoning models such as OpenAI’s o1 or DeepSeek R1—but are unlikely to reason independently or adapt to genuinely new situations.
According to Wolf, today’s AI developments resemble outstanding, highly disciplined students—but ones that never question what they’ve been taught. They have no incentive to challenge their knowledge or propose ideas that contradict their training data. Instead, they merely answer questions that have already been asked. Wolf argues that people need AI systems capable of asking, “What if we’ve been wrong about this all along?” even when all existing research suggests otherwise.
The solution he proposes is to move away from current benchmarks. He talks about an “assessment crisis”: today’s tests focus on questions with clear, obvious, closed answers. Instead, AI systems should be valued for their ability to “take bold counterfactual approaches” and to ask “non-obvious questions” that lead them down “new research paths.”
“We don’t need an A+ student who can answer every question with general knowledge. We need a B student who sees and questions what everyone else missed.”
And he may be right, of course. There’s an ongoing debate about scaling—models aren’t getting much better despite using more resources and data than ever before—and it no longer seems to be the path to AGI.
It seems companies have realized this and are looking for other approaches. New reasoning models appear to be a more promising path, and they’re indeed finding bold solutions—we saw it recently with AI models that cheated to win at chess, for example. Ilya Sutskever, co-founder of OpenAI and now pursuing AGI at his own startup, has also made it clear that he’s following a different path from the one that led to ChatGPT.
Will he succeed where others have failed? Who knows. But for him and others, Wolf’s perspective is crucial. Perhaps what users truly need aren’t AI systems that say yes to everything but ones that challenge what people know—or think they know.
Image | Xataka On with Freepik