AI is now ubiquitous across the technology industry, and thousands of startups are receiving multimillion-dollar investments from big tech companies and investment funds to develop it. The problem: Some products marketed as cutting-edge AI were actually powered by armies of underpaid programmers posing as chatbots.
The Builder.ai scandal. According to CNBC, the London-based startup promised to revolutionize AI-powered app creation—but it was a massive scam. The company employed 700 programmers in India to simulate an AI assistant named Natasha.
Builder.ai raised $445 million from tech giants like Microsoft and funds from Qatar. The company claimed its no-code, AI-powered platform could build software using modular components like Lego bricks. But the reality was different. After the company declared bankruptcy and creditors intervened, investigators found that behind the tech façade stood only an army of human programmers: natural intelligence, not artificial.
How Builder.ai operated. According to Bernhard Engelbrecht, founder of Ebern Finance, the company presented itself as the ultimate solution for building apps without needing to know how to code. Engelbrecht said, “In reality, customer requests were sent to the Indian office, where 700 Indians wrote code instead of AI.”
The concept of AI-generated programming blocks is now widespread, but in 2018, it was relatively new—especially since the AI arms race fueled by ChatGPT hadn’t yet begun. The idea drew major investment and helped Builder.ai reach a $1.5 billion valuation.
Then came the collapse. The company defaulted on a $50 million loan from Viola Credit in 2023. The creditor seized $37 million from Builder.ai’s accounts, leaving it unable to pay employees or continue operations.
An investigation revealed that Builder.ai had inflated its sales by simulating business with the Indian firm VerSe Innovation. According to a Bloomberg report, this raised concerns about accounting manipulation and made the company appear more solvent than it was. On LinkedIn, Builder.ai said it was “working closely with the appointed administrators” and thanked staff and stakeholders. It admitted that previous mistakes had taken the company beyond recovery.
AI didn’t deliver what it promised. The main issue wasn’t that 700 humans stood behind the Natasha AI—it was that the resulting code didn’t work. Engelbrecht said the apps “constantly glitched, the code was unreadable, the functions did not work—in general, everything was like real artificial intelligence.”
Customers received applications that didn’t meet expectations, and sales declined steadily. Phil Brunkard of Info-Tech Research Group told Business Today that many tech firms expanded too quickly without solid finances or truly innovative products. The fall of Builder.ai now raises questions about investor due diligence and corporate transparency.
Chronicle of a scam foretold. Doubts about Builder.ai’s AI system weren’t new. As early as 2019, The Wall Street Journal reported that the startup offered “human-assisted AI” services. In reality, it was the reverse: humans assisted by limited AI tools.
Robert Holdheim, a former executive, sued the company for $5 million, alleging he was fired after raising concerns that Builder.ai’s technology “did not work as advertised and was nothing more than a smokescreen.”
CEO Sachin Dev Duggal told the Journal that “about 60% on average” of its reusable software is produced by machines and the rest is generated by humans. But mounting evidence showed the human input was far greater than the company claimed.
Image | Sunrise University (Unsplash)