On May 28, 2025, Chinese AI startup DeepSeek quietly pushed an update to its R1 reasoning model to Hugging Face, named R1-0528. There was no media blitz or grand unveiling—just a subtle model refresh with notable performance improvements. This restrained release strategy reflects DeepSeek's confidence in its product and signals an intent to let results speak louder than hype.
Performance-wise, R1-0528 landed just beneath OpenAI's o4-mini and o3 models in code generation, based on rankings from LiveCodeBench, a benchmark designed by researchers at UC Berkeley, MIT, and Cornell. It still managed to outpace major players such as xAI's Grok 3 mini and Alibaba's Qwen 3, keeping DeepSeek in the upper tier of global reasoning-model performance.
The original R1 release in January 2025 surprised many by achieving results on par with top Western models at a fraction of the cost. That sudden emergence forced established AI companies to adjust pricing strategies or release lightweight versions of their own models. With U.S. chip export restrictions still in place, DeepSeek's resilience and continued iteration strongly indicate China's determination to remain competitive in AI.
Although R1-0528 isn't a revolutionary leap, it strengthens the foundation laid by its predecessor. Speculation continues around the still-unseen R2 model, which many expected to launch in May. Whether delayed or strategically withheld, its eventual debut will likely generate far more noise. Until then, R1-0528 keeps DeepSeek in the spotlight, even without fanfare.

#AI #DeepSeek #ChinaAI #MachineLearning #ArtificialIntelligence #HuggingFace #TechNews #AITech #OpenSourceAI #DeepLearning