
Qwen3 235B A22B Thinking 2507

qwen/qwen3-235b-a22b-thinking-2507
Qwen3-235B-A22B-Thinking-2507 is the newest thinking-enabled model in the Qwen3 series, with substantially improved reasoning capabilities. It delivers significantly stronger performance on logical reasoning, mathematics, scientific analysis, coding, and academic benchmarks, matching or surpassing human-expert-level performance and achieving state-of-the-art results among open-source thinking models. Beyond reasoning, the model offers markedly better general capabilities, including more precise instruction following, more capable tool usage, more natural text generation, and closer alignment with human preferences. It also supports enhanced 256K long-context understanding, allowing it to maintain coherence and depth across extended documents and complex discussions.

Features

Serverless API

Docs

qwen/qwen3-235b-a22b-thinking-2507 is available via Novita's serverless API, where you pay per token. There are several ways to call the API, including OpenAI-compatible endpoints, as shown in the sketch below.
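
For callers that don't use the OpenAI SDK, the same serverless endpoint can be reached over plain HTTP. A minimal sketch, assuming the endpoint path follows the usual OpenAI convention of appending /chat/completions to the base URL used in the SDK example further down this page; check Novita's API reference to confirm the exact path:

import requests

# Assumption: OpenAI-compatible base URL plus the standard /chat/completions path.
url = "https://api.novita.ai/openai/chat/completions"

headers = {"Authorization": "Bearer <Your API Key>"}

payload = {
    "model": "qwen/qwen3-235b-a22b-thinking-2507",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"},
    ],
    "max_tokens": 32768,
    "temperature": 0.7,
}

response = requests.post(url, headers=headers, json=payload, timeout=600)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])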

On-demand Deployments

Docs

On-demand deployments let you run qwen/qwen3-235b-a22b-thinking-2507 on dedicated GPUs with a high-performance serving stack, high reliability, and no rate limits.

Available Serverless

Run queries immediately, pay only for usage

$0.3/$3
Per 1M Tokens (input/output)
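
At these rates, request cost scales linearly with token counts. A quick back-of-the-envelope sketch in Python, with the prices hard-coded from the figures above and made-up example token counts:

# Serverless pricing from this page: $0.3 per 1M input tokens, $3 per 1M output tokens.
INPUT_PRICE_PER_M = 0.3
OUTPUT_PRICE_PER_M = 3.0

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt with a 10,000-token reasoning-heavy reply
# costs about $0.0006 + $0.03 = $0.0306.
print(f"${estimate_cost(2_000, 10_000):.4f}")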

Use the following code examples to integrate with our API:

from openai import OpenAI

# Point the OpenAI client at Novita's OpenAI-compatible endpoint.
client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b-thinking-2507",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=32768,   # the model's maximum output length
    temperature=0.7
)

print(response.choices[0].message.content)
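
Because the thinking variant can emit a long chain of reasoning before its final answer, streaming the response is often more practical than waiting for the full completion. A minimal sketch, assuming the endpoint supports the standard OpenAI streaming protocol (stream=True):

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

# Stream tokens as they are generated instead of waiting for the whole reply.
stream = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b-thinking-2507",
    messages=[
        {"role": "user", "content": "Walk through a proof that sqrt(2) is irrational."}
    ],
    max_tokens=32768,
    temperature=0.7,
    stream=True
)

for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
print()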

Info

Provider
Qwen
Quantization
fp8

Supported Functionality

Context Length
131072 tokens
Max Output
32768 tokens
Function Calling
Supported (example below)
Structured Output
Supported
Reasoning
Supported
Input Capabilities
text
Output Capabilities
text
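
Since function calling is listed as supported, tool definitions can be passed in the standard OpenAI format. A minimal sketch, assuming the endpoint accepts the usual tools / tool_calls fields; the get_weather tool here is a made-up placeholder, not part of Novita's API:

import json
from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

# Hypothetical tool definition in the standard OpenAI JSON-schema format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }
    }
}]

response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b-thinking-2507",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools
)

# If the model decided to call the tool, the arguments arrive as a JSON string.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)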