Qwen3 8B

qwen/qwen3-8b-fp8
Qwen3 8B integrates reasoning and non-reasoning modes in a single model, allowing seamless switching between the two during a conversation. Its reasoning capability reaches state-of-the-art (SOTA) performance among models of the same scale, and its general capabilities significantly outperform those of Qwen2.5-7B.
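As a sketch of how the two modes can be selected, the snippet below uses the Hugging Face transformers chat template for Qwen3, which accepts an enable_thinking flag. The model repo, prompt, and printed output are illustrative; the exact switch exposed by a given serving stack may differ.

```python
from transformers import AutoTokenizer

# Illustrative sketch: Qwen3 chat templates accept an `enable_thinking` flag
# that toggles the reasoning ("thinking") mode. Model repo and prompt are
# placeholders for illustration.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

messages = [{"role": "user", "content": "Explain FP8 quantization in one sentence."}]

# Reasoning mode: the template adds the thinking scaffold before the answer.
with_thinking = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)

# Non-reasoning mode: plain chat formatting, no thinking block.
without_thinking = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)

print(with_thinking)
print(without_thinking)
```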

Features

On-demand Deployments

On-demand deployments let you run qwen/qwen3-8b-fp8 on dedicated GPUs with a high-performance serving stack, high reliability, and no rate limits.
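As a minimal sketch of calling such a deployment, the example below assumes it exposes an OpenAI-compatible chat completions endpoint; the base URL, API key environment variable, and sampling settings are placeholders, not values defined by this page.

```python
import os
from openai import OpenAI

# Minimal sketch, assuming the on-demand deployment exposes an
# OpenAI-compatible API. Base URL and API key variable are placeholders.
client = OpenAI(
    base_url="https://your-deployment.example.com/v1",  # placeholder endpoint
    api_key=os.environ["DEPLOYMENT_API_KEY"],           # placeholder key
)

response = client.chat.completions.create(
    model="qwen/qwen3-8b-fp8",
    messages=[{"role": "user", "content": "Summarize what FP8 quantization trades off."}],
    max_tokens=1024,   # stays well under the model's 20,000-token output cap
    temperature=0.6,
)

print(response.choices[0].message.content)
```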

Info

Provider: Qwen
Quantization: fp8

Supported Functionality

Context Length: 128,000 tokens
Max Output: 20,000 tokens
Serverless: Not supported
Input Capabilities: text
Output Capabilities: text
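To make these limits concrete, here is a small sketch that budgets a request against them: it counts prompt tokens with the public Qwen3 tokenizer and caps the completion at whatever fits inside the 128,000-token context window and the 20,000-token output limit. The helper itself is illustrative, not part of this page.

```python
from transformers import AutoTokenizer

CONTEXT_LENGTH = 128_000   # total tokens the model can attend to
MAX_OUTPUT = 20_000        # cap on generated tokens per request

# Illustrative helper (not an official API): compute a safe max_tokens value
# for a given prompt, using the public Qwen3 tokenizer to count prompt tokens.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

def safe_max_tokens(prompt: str) -> int:
    prompt_tokens = len(tokenizer.encode(prompt))
    remaining_context = CONTEXT_LENGTH - prompt_tokens
    if remaining_context <= 0:
        raise ValueError(f"Prompt uses {prompt_tokens} tokens, exceeding the context window.")
    return min(MAX_OUTPUT, remaining_context)

print(safe_max_tokens("Explain the difference between FP8 and BF16 inference."))
```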