One Key Fact Explains Why China Might Challenge America’s AI Dominance: DeepSeek Is 8.5 Times Cheaper to Run Than ChatGPT

  • DeepSeek surprised the world by claiming to be significantly cheaper than OpenAI’s models.

  • The Chinese company also says that maintaining its system is exceptionally cost-effective.


Alejandro Alcolea

Writer
  • Adapted by:

  • Alba Mora


2025 began with a major revolution in artificial intelligence. After months of releases by companies including Google, Microsoft, Apple, Meta, and OpenAI, a Chinese company unveiled DeepSeek, an AI model that fundamentally challenged the industry.

The conversation surrounding DeepSeek extended beyond its capabilities and performance. It also raised questions about economics and hardware. From the start, analysts wondered how China developed DeepSeek despite its hardware limitations. Due to the trade war with the U.S., export controls have restricted the Asian nation’s access to Nvidia’s most powerful graphics cards.

DeepSeek claimed that it used an infrastructure built on Nvidia’s H800 chips and an extensive training process that took 2.8 million GPU hours, at a surprisingly low cost of just $5.6 million. For comparison, OpenAI is reported to have invested around $100 million to train GPT-4. Additionally, there are significant ongoing costs associated with maintaining the AI models.
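The $5.6 million figure is consistent with the rental-rate arithmetic DeepSeek itself cites: roughly 2.8 million H800 GPU hours at about $2 per GPU hour. A minimal sketch of that calculation (the $2/hour rate is the approximate figure the company quotes, not an exact price):

```python
# Reported training figures for DeepSeek's model
gpu_hours = 2.8e6       # ~2.8 million H800 GPU hours
rate_per_hour = 2.0     # ~$2 per GPU hour (approximate rental rate)

training_cost = gpu_hours * rate_per_hour
print(f"${training_cost:,.0f}")  # ≈ $5,600,000, matching the reported $5.6M
```

The two reported numbers multiply out almost exactly to the claimed budget, which is why the rental rate and the hour count are usually cited together.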

In fact, according to Reuters, while ChatGPT incurs daily expenses of about $700,000, DeepSeek operates at a much lower cost of around $87,000 per day.

DeepSeek Claims a Cost-Profit Ratio of 545% per Day

According to Reuters, DeepSeek revealed some cost and revenue data regarding its V3 and R1 models on Saturday. The V3 model is a traditional conversational chatbot that’s well-suited for copywriting and content creation. In contrast, the R1 model is a reasoning-based system that excels at problem-solving through logical, step-by-step reasoning. As such, DeepSeek V3 is akin to GPT-4, while R1 resembles OpenAI’s o1.

The news agency also reports that DeepSeek’s theoretical cost-profit ratio can reach up to 545% per day. However, the Chinese company cautioned that actual revenue figures are significantly lower.

Notably, running ChatGPT costs OpenAI around $700,000 per day (as of 2023). This high expense is attributed to several factors, including the need to maintain Microsoft’s Azure server infrastructure and considerable energy costs. Other factors include salary payments and the necessary hardware power to process the queries it receives.

Chart: DeepSeek’s costs (in yellow) and potential income (in blue).

DeepSeek’s daily expenses amount to “only” $87,072, which seems quite low in comparison. The company recently said that renting the H800s costs less than $2 per GPU hour, while the estimated theoretical daily revenue is just over $560,000. Over a year, this could amount to more than $200 million.
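These figures fit together arithmetically: the 545% ratio is daily profit relative to daily cost. A quick sketch using the reported numbers (the $562,027 daily revenue figure is the precise value reported via Reuters behind the article’s “just over $560,000”):

```python
# Reported daily figures for serving DeepSeek's R1 model
daily_cost = 87_072       # USD per day
daily_revenue = 562_027   # theoretical USD per day, as reported via Reuters

# Cost-profit ratio: profit as a fraction of cost
profit_ratio = (daily_revenue - daily_cost) / daily_cost
print(f"{profit_ratio:.0%}")  # ≈ 545%, the figure DeepSeek cites

# Annualized theoretical revenue
annual_revenue = daily_revenue * 365
print(f"${annual_revenue:,.0f}")  # ≈ $205 million, i.e. "more than $200 million"
```

Note that this is the theoretical ceiling; as the company itself cautions, actual revenue is significantly lower because of off-peak discounts and free-tier usage.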

In the graph above, DeepSeek illustrates the costs of maintaining R1 and the potential revenue generated from tokens. The price of these tokens varies based on the time of day, being cheaper at night. Additionally, the company notes that the cost of serving DeepSeek V3 is “substantially lower.”

This situation raises additional questions. One big question is how the company manages to keep expenses so low, considering that training an AI typically incurs substantial costs. Putting aside OpenAI’s accusations, if DeepSeek hasn’t manipulated these figures, it suggests that you don’t need exorbitant graphics power to train AI.

The crucial factor is reinforcement learning, which DeepSeek has utilized to achieve more with less. It’s also important to note that while the company trained R1 on Nvidia’s H800 chips, it employs Huawei’s Ascend 910B chips during inference.

Huawei’s chips are cheaper and reportedly more efficient. This makes DeepSeek’s decision particularly noteworthy, especially in terms of system maintenance costs. Other AI companies might also conclude that using the latest-generation GPUs isn’t always necessary. Instead, they could reserve the most powerful GPUs for training and rely on more cost-effective, efficient GPUs for inference once the model is deployed.

Inference refers to the real-world application of AI after the training phase. Training can be compared to absorbing technical manuals during a five-year degree program, while inference resembles applying that knowledge in practice, reasoning from a foundational understanding without needing to relearn everything.

The controversy surrounding DeepSeek’s $5.6 million training spend will likely persist, especially when compared to OpenAI’s financial figures. However, DeepSeek is clearly adopting a different approach, serving as a potential model for future companies.

With China making significant investments in both AI and AI hardware, DeepSeek may represent an ideal “spearhead” model for this evolving industry.

Image | Solen Feyissa

Related | From Algorithmic Trader to AI Innovator: This Is the Story of Liang Wenfeng, the Founder of DeepSeek
