
In its latest announcement, the Chinese AI lab DeepSeek has launched an upgraded version of its open-source V3 large language model, ‘DeepSeek-V3-0324’.
The update brings “significant improvements” over previous versions, adding parameters and capabilities aimed at strengthening coding and mathematical problem-solving.
The new model, whose ‘0324’ suffix reflects its March 24, 2025 release, has outperformed models from industry leaders like OpenAI and Anthropic on various performance benchmarks.
DeepSeek-V3-0324’s Mathematical Prowess
According to DeepSeek’s data, the model scored 59.4 on the American Invitational Mathematics Examination (AIME) benchmark, a large jump from its predecessor’s 39.6. It also recorded a roughly 10-point gain on LiveCodeBench, reaching 49.2. Such gains position DeepSeek’s latest model as a formidable contender in the AI race.
DeepSeek-V3-0324 has 685 billion parameters and is released under the MIT license, which is widely popular among developers on GitHub. The model uses a “Mixture-of-Experts (MoE)” architecture, which improves computational efficiency while supporting a large parameter count: only a subset of the model’s parameters is activated for any given input. This approach allows DeepSeek to compete with top rival AI models at a fraction of the cost.
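The efficiency argument behind MoE can be illustrated with a toy sketch: a small gating network scores a pool of experts, and only the top-k experts run for each token, so per-token compute stays bounded even as total parameters grow. The code below is an assumption-laden illustration with made-up toy sizes (`D`, `H`, `NUM_EXPERTS`, `TOP_K`), not DeepSeek’s actual architecture or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 8, 16              # toy hidden size and expert feed-forward size
NUM_EXPERTS, TOP_K = 4, 2  # total experts vs. experts activated per token

# Each expert is a tiny two-layer feed-forward network (hypothetical sizes).
experts = [
    (rng.standard_normal((D, H)) * 0.1, rng.standard_normal((H, D)) * 0.1)
    for _ in range(NUM_EXPERTS)
]
gate_w = rng.standard_normal((D, NUM_EXPERTS)) * 0.1  # router weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector x (shape [D]) through its top-k experts only."""
    logits = x @ gate_w
    top = np.argsort(logits)[-TOP_K:]        # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over selected experts
    out = np.zeros(D)
    for w, idx in zip(weights, top):
        w1, w2 = experts[idx]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)  # ReLU feed-forward expert
    return out

token = rng.standard_normal(D)
y = moe_forward(token)
print(y.shape)
```

Here only 2 of the 4 experts run per token, so compute scales with `TOP_K` rather than `NUM_EXPERTS`; a production MoE model applies the same idea at vastly larger scale.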
AI researcher Kuittinen Petri from Häme University of Applied Sciences tested the model by asking it to generate a responsive front page for an AI company. In just 958 lines of code, DeepSeek-V3-0324 produced a flawless, mobile-friendly website. He said, “Anthropic and OpenAI are in trouble. DeepSeek is achieving this with just 2% of the resources.”
Mathematical performance has also seen a boost. Jasper Zhang, a gold medalist from the International Math Olympiad and a UC Berkeley graduate, tested the model with an AIME 2025 problem. “It solved it smoothly, with no errors,” Zhang posted on X.
Source: https://www.forbes.com/sites/tylerroush/2025/03/25/deepseek-launches-ai-model-upgrade-amid-openai-rivalry-heres-what-to-know/