Ant Group, backed by Chinese business tycoon and Alibaba Group co-founder Jack Ma, has reportedly developed AI training techniques using domestically produced chips that cut costs by 20%, according to sources familiar with the matter.
Using hardware from Chinese giants like Alibaba and Huawei, Ant applied the Mixture of Experts (MoE) machine learning approach to achieve results comparable to those from Nvidia's H800 chips, a feat that demonstrates the growing efficacy of local semiconductor technology.
While Ant still relies heavily on Nvidia's GPUs for its AI development, it is now planning a significant shift toward alternative chips, including those from AMD and Chinese manufacturers. Amid U.S. President Donald Trump's aggressive stance, Chinese tech firms are gradually reducing their reliance on U.S.-made advanced semiconductors, especially after Washington tightened export controls on high-performance chips like the H800.
According to the report, Ant's MoE models have the potential to compete directly with those of U.S. tech giants like Google and OpenAI. MoE models are rapidly gaining traction; they split a large network into specialized sub-models, or "experts," and activate only the experts relevant to each input, which reduces the computation required per token.
Chinese AI startup DeepSeek, for instance, has already demonstrated the cost-effectiveness of this technique, encouraging other players such as Ant to adopt it.
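To make the routing idea concrete, here is a minimal sketch of top-k MoE routing in Python with NumPy. Everything in it (the MoELayer class, the layer sizes, the use of plain matrices as experts) is an illustrative assumption, not Ant's or DeepSeek's actual architecture; production MoE models use full feed-forward blocks as experts, with gates trained jointly with the rest of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """Minimal top-k Mixture-of-Experts layer (illustrative sketch only)."""
    def __init__(self, d_model, n_experts, top_k=2):
        self.top_k = top_k
        # One weight matrix per "expert"; real experts are full FFN blocks.
        self.experts = [rng.normal(0, 0.02, (d_model, d_model))
                        for _ in range(n_experts)]
        # A gating network scores each expert for each token.
        self.gate = rng.normal(0, 0.02, (d_model, n_experts))

    def forward(self, x):
        # x: (n_tokens, d_model)
        scores = softmax(x @ self.gate)  # (n_tokens, n_experts)
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            # Route each token to its top-k experts only; the rest stay
            # idle, which is where MoE saves computation.
            top = np.argsort(scores[t])[-self.top_k:]
            weights = scores[t, top] / scores[t, top].sum()
            for w, e in zip(weights, top):
                out[t] += w * (x[t] @ self.experts[e])
        return out

layer = MoELayer(d_model=8, n_experts=4, top_k=2)
tokens = rng.normal(size=(3, 8))    # 3 tokens, 8-dim embeddings
print(layer.forward(tokens).shape)  # (3, 8)
```

Because each token activates only top_k of the experts, the total parameter count can grow while the compute per token stays roughly flat, which is the property that lets MoE models be trained on cheaper hardware.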
Cost Efficiency Without Premium GPUs and Ant’s Benchmark Success
Ant’s cost-effective approach stands in stark contrast to Nvidia’s growth strategy. While Nvidia CEO Jensen Huang has consistently argued that demand for computation will only increase, Ant’s breakthrough tells a different story. Training a model on 1 trillion tokens traditionally costs around ¥6.35 million using high-performance GPUs.
Ant’s optimized technique, however, trims that to about ¥5.1 million by using lower-specification hardware, a commendable achievement that could democratize access to powerful AI systems for smaller players.
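As a quick check, those figures imply a saving of 1 − (5.1 / 6.35) ≈ 0.197, or roughly 20%, consistent with the cost reduction reported above.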
In its research paper, Ant revealed that its models can outperform Meta’s on several benchmarks. Both Ling-Plus (290 billion parameters) and Ling-Lite (16.8 billion parameters) achieved impressive results against DeepSeek’s equivalents on Chinese-language benchmarks.
Ling-Lite even surpassed Meta’s Llama models on English-language benchmarks, demonstrating Ant’s growing proficiency in multilingual AI development.
Source: https://www.bloomberg.com/news/articles/2025-03-24/jack-ma-backed-ant-touts-ai-breakthrough-built-on-chinese-chips