DeepSeek_3.2_AI_Half_Cost_Breakthrough

About this audio

This audio examines the architecture, performance, and impact of DeepSeek 3.2, a new open-source large language model that aims to redefine efficient AI development. The model achieves benchmark performance comparable to frontier proprietary systems such as GPT-5 and Claude 4.5 Sonnet while operating at significantly lower computational cost, primarily through the introduction of DeepSeek Sparse Attention. This novel attention mechanism dramatically reduces resource usage by retaining only the roughly 2,000 most relevant tokens, regardless of total input length. DeepSeek 3.2 also introduces sophisticated training innovations, including an unprecedented share of its compute budget devoted to reinforcement learning (RL), alongside techniques such as mixed RL training and keep-routing operations that maintain stability in its mixture-of-experts (MoE) architecture. The release is positioned as evidence that the AI industry is shifting from an "age of scaling" to an "age of research," prioritizing architectural efficiency over raw compute to reach state-of-the-art results. Known limitations, such as verbose output and a narrower breadth of world knowledge than more extensively trained closed-source competitors, are also acknowledged.
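
The top-k selection at the heart of DeepSeek Sparse Attention can be pictured with a short sketch. The version below is a minimal NumPy illustration, not the released implementation: it assumes each token already carries a relevance score produced by some lightweight scorer, it handles a single query vector, and the names `sparse_attention` and `relevance` are ours.

```python
import numpy as np

def sparse_attention(q, K, V, relevance, k=2048):
    """Toy top-k sparse attention for a single query vector q.

    Instead of attending to all seq_len tokens, keep only the k
    tokens with the highest precomputed relevance scores, so the
    attention cost stays bounded as the input grows.
    """
    seq_len = K.shape[0]
    k = min(k, seq_len)
    # Indices of the k highest-scoring tokens (unordered is fine).
    top_idx = np.argpartition(relevance, -k)[-k:]
    K_sel, V_sel = K[top_idx], V[top_idx]
    # Standard scaled dot-product attention over the kept tokens only.
    logits = K_sel @ q / np.sqrt(q.shape[0])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ V_sel

# Example: a 100k-token context, yet attention only touches 2,048 tokens.
rng = np.random.default_rng(0)
L, d = 100_000, 64
q, K, V = rng.normal(size=d), rng.normal(size=(L, d)), rng.normal(size=(L, d))
relevance = rng.normal(size=L)  # stand-in for a learned indexer's scores
out = sparse_attention(q, K, V, relevance)
print(out.shape)  # (64,)
```

The point of the design is visible in the shapes: once k is fixed, the softmax and value mixing no longer grow with the input length, which is where the cost savings over dense attention come from.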

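For readers unfamiliar with MoE layers, the sketch below shows the generic top-k expert routing that stability techniques like the keep-routing operations mentioned above would act on. It is a textbook illustration under our own naming (`moe_layer`, `gate_w`), not DeepSeek's router, and it does not model keep routing itself.

```python
import numpy as np

def moe_layer(x, gate_w, experts, top_k=2):
    """Toy top-k mixture-of-experts routing for one token vector x.

    The router scores every expert, only the top_k experts run,
    and their outputs are mixed with softmax-normalized gates.
    """
    scores = gate_w @ x  # one routing logit per expert
    top = np.argpartition(scores, -top_k)[-top_k:]
    gates = np.exp(scores[top] - scores[top].max())
    gates /= gates.sum()
    # Weighted sum of the selected experts' outputs.
    return sum(g * experts[i](x) for g, i in zip(gates, top))

# Example: 8 tiny linear "experts" over a 16-dim token vector.
rng = np.random.default_rng(1)
d, n_experts = 16, 8
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: W @ x for W in expert_ws]
gate_w = rng.normal(size=(n_experts, d))
y = moe_layer(rng.normal(size=d), gate_w, experts)
print(y.shape)  # (16,)
```

Because these routing decisions are discrete and can shift abruptly during training, RL on an MoE model needs some mechanism to keep them stable, which is the role the episode attributes to keep routing.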