DeepSeek-R1-Lite-Preview is now live: unleashing supercharged reasoning power!
o1-preview-level performance on AIME & MATH benchmarks.
Transparent thought process in real time.
Open-source models & API coming soon!
Introducing DeepSeek-V3.1: our first step toward the agent era!
Hybrid inference: Think & Non-Think - one model, two modes
Faster thinking: DeepSeek-V3.1-Think reaches answers in less time vs. DeepSeek-R1-0528
Stronger agent skills: post-training boosts tool use and multi-step agent tasks
Try it now - toggle Think/Non-Think via the "DeepThink" button: chat.deepseek.com
1/5
API Update
deepseek-chat → non-thinking mode
deepseek-reasoner → thinking mode
128K context for both
Anthropic API format supported: api-docs.deepseek.com/guides/anthrop…
Strict Function Calling supported in Beta API: api-docs.deepseek.com/guides/functio…
More API resources, smoother API experience (example call below)
2/5
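For concreteness, here is a minimal sketch of calling the two API modes above. The model names and 128K context come from the thread itself; the OpenAI-compatible base URL, the placeholder key, and the `reasoning_content` field are assumptions drawn from DeepSeek's public API docs (api-docs.deepseek.com) and should be verified there before use.

```python
# Minimal sketch, not an official snippet: model names are from the announcement;
# the base URL and the reasoning_content field are assumptions based on
# DeepSeek's public API docs (api-docs.deepseek.com).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",       # placeholder
    base_url="https://api.deepseek.com",   # OpenAI-compatible endpoint (assumed)
)

# Non-thinking mode: direct answers.
chat = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Summarize the CAP theorem in one sentence."}],
)
print(chat.choices[0].message.content)

# Thinking mode: the reasoning trace is returned separately from the final answer.
reasoned = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Is 2027 a prime number?"}],
)
print(reasoned.choices[0].message.reasoning_content)  # chain of thought (assumed field)
print(reasoned.choices[0].message.content)            # final answer
```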
Tools & Agents Upgrades
Better results on SWE / Terminal-Bench
Stronger multi-step reasoning for complex search tasks
Big gains in thinking efficiency
Distilled from DeepSeek-R1, 6 small models fully open-sourced (loading sketch below)
32B & 70B models on par with OpenAI-o1-mini
Empowering the open-source community
Pushing the boundaries of **open AI**!
2/n
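A minimal sketch of loading one of the distilled checkpoints with Hugging Face transformers. The repository name below is an assumption (check the official deepseek-ai model cards for the exact identifier); note that a 32B model needs substantial GPU memory or quantization to run.

```python
# Minimal sketch, assuming the distilled weights are published under the
# deepseek-ai organization on Hugging Face; the repo id below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # requires `accelerate`; shards across available GPUs
)

prompt = "Prove that the square root of 2 is irrational."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```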
License Update!
DeepSeek-R1 is now MIT licensed for clear open access
Open for the community to leverage model weights & outputs
API outputs can now be used for fine-tuning & distillation
Exciting news! We've officially launched DeepSeek-V2.5, a powerful combination of DeepSeek-V2-0628 and DeepSeek-Coder-V2-0724! With enhanced writing, instruction following, and human preference alignment, it's now available on Web and API. Enjoy seamless Function Calling, FIM, and JSON Output all in one!
Note: this version includes significant updates, so if performance drops in certain cases, we recommend adjusting the system prompt and temperature settings for best results.
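As an illustration of the all-in-one features, here is a minimal sketch requesting JSON Output with an explicit temperature, per the note above. The response_format parameter and base URL mirror DeepSeek's OpenAI-compatible API format and should be treated as assumptions to check against the docs; the temperature value is illustrative, not an official recommendation.

```python
# Minimal sketch: JSON Output mode with an explicit temperature.
# Parameter names follow the OpenAI-compatible format DeepSeek documents;
# the temperature value below is illustrative only.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "Reply with a JSON object with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
    response_format={"type": "json_object"},  # JSON Output mode
    temperature=1.0,                          # tune per the note above if quality regresses
)
print(resp.choices[0].message.content)
```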
DeepSeek-V2.5 outperforms both DeepSeek-V2-0628 and DeepSeek-Coder-V2-0724 on most benchmarks.
In our internal Chinese evaluations, DeepSeek-V2.5 shows a significant improvement in win rates against GPT-4o mini and ChatGPT-4o-latest (judged by GPT-4o) compared to DeepSeek-V2-0628.