China’s DeepSeek Delays R2 AI Model After Huawei Chip Setback, Reverts to Nvidia Gear

Chinese AI startup DeepSeek has hit a major roadblock in its race to launch the R2 model, after repeated failures to train the system on Huawei’s Ascend AI processors. The company, which had been under pressure to showcase China’s ability to cut reliance on U.S. technology, has now reverted to Nvidia GPUs to get the project back on track.
According to people familiar with the matter, the Ascend chips fell short on multiple fronts. Performance instability, weak software support, and sluggish interconnect speeds undermined efforts to train the massive R2 model. Even with Huawei dispatching engineers to assist on-site, DeepSeek couldn't complete a successful training run.
The pivot back to Nvidia is a blow for China’s ambition to build a homegrown AI hardware ecosystem. While Huawei’s chips may still be used for less demanding inference workloads, Nvidia’s GPUs remain unmatched for large-scale training—despite U.S. export restrictions limiting supply of top-tier models.
The delays have already cost DeepSeek valuable time. The R2 model was originally slated for release in May, but with the shift back to Nvidia hardware, competitors such as Alibaba's Qwen3 have gained a head start. Industry analysts say the episode underscores the gap between China's domestic chips and Nvidia's, not just in raw power but in software maturity and developer tooling.
Still, some experts argue the setback doesn't mean Huawei is out of the race. They note that the company's hardware has improved significantly over the past few years, and future iterations could close the gap. For now, though, the reality is clear: when it comes to training frontier AI models, DeepSeek, and China's AI sector more broadly, remains tethered to Nvidia.