
DeepSeek R1 Distill Qwen 14B

deepseek/deepseek-r1-distill-qwen-14b
DeepSeek R1 Distill Qwen 14B is a distilled large language model based on Qwen 2.5 14B, fine-tuned on outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across a range of benchmarks, setting new state-of-the-art results for dense models. Benchmark results include:

AIME 2024 pass@1: 69.7
MATH-500 pass@1: 93.9
CodeForces rating: 1481

By distilling DeepSeek R1's reasoning into a smaller dense model, it achieves performance competitive with larger frontier models.
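Like other R1-family models, the distilled model typically emits its chain of thought in a `<think>...</think>` block before the final answer. As a minimal sketch (the delimiter convention is an assumption here, and the sample completion is illustrative, not real model output), the reasoning can be separated from the answer like this:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning, answer).

    Assumes the reasoning is wrapped in a <think>...</think> block,
    as R1-family models typically emit; returns empty reasoning if absent.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Illustrative completion, not captured from the model
completion = "<think>Compare the pass@1 scores first.</think>The distilled model scores higher."
reasoning, answer = split_reasoning(completion)
```

If no `<think>` block is present, the whole completion is treated as the answer, so the helper degrades gracefully on non-reasoning outputs.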

Features

On-demand Deployments


On-demand deployments let you run deepseek/deepseek-r1-distill-qwen-14b on dedicated GPUs with a high-performance serving stack, high reliability, and no rate limits.
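As a sketch of calling such a deployment, assuming it exposes an OpenAI-compatible chat-completions endpoint (the base URL and environment-variable names below are placeholders, not confirmed by this page), a request can be assembled like this:

```python
import json
import os

MODEL_ID = "deepseek/deepseek-r1-distill-qwen-14b"

def build_chat_request(prompt: str, max_tokens: int = 512) -> tuple[str, dict, bytes]:
    """Build (url, headers, body) for an OpenAI-compatible chat completion.

    DEPLOYMENT_BASE_URL and API_KEY are hypothetical placeholders;
    substitute the values for your own on-demand deployment.
    """
    base_url = os.environ.get("DEPLOYMENT_BASE_URL", "https://example.invalid/v1")
    url = base_url + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {os.environ.get('API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return url, headers, json.dumps(payload).encode()

url, headers, body = build_chat_request("Solve: 2x + 3 = 11")
```

The tuple can then be sent with any HTTP client; keeping request construction separate from transport makes the payload easy to inspect or log before dispatch.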

Info

Provider
DeepSeek
Quantization
bf16

Supported Functionality

Context Length
32,768 tokens
Max Output
16,384 tokens
Serverless
Not supported
Input Capabilities
text
Output Capabilities
text
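Given the 32,768-token context window and the 16,384-token output cap, a request's completion budget has to respect both limits at once. A minimal sketch of clamping it (the helper name is ours, not part of any SDK):

```python
CONTEXT_LENGTH = 32768  # total budget: prompt tokens + completion tokens
MAX_OUTPUT = 16384      # hard cap on generated tokens per request

def clamp_max_tokens(prompt_tokens: int, requested: int) -> int:
    """Clamp a requested completion budget to the model's limits.

    The completion may not exceed MAX_OUTPUT, and the prompt plus
    completion together may not exceed CONTEXT_LENGTH.
    """
    if prompt_tokens >= CONTEXT_LENGTH:
        raise ValueError("prompt alone exceeds the context window")
    remaining = CONTEXT_LENGTH - prompt_tokens
    return min(requested, MAX_OUTPUT, remaining)
```

For short prompts the binding constraint is the 16,384-token output cap; for long prompts it becomes the remaining context window.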