Qwen3.5-122B-A10B

qwen/qwen3.5-122b-a10b
Qwen3.5-122B-A10B is a native vision-language model built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts (MoE) design, yielding higher inference efficiency. In overall performance it is second only to Qwen3.5-397B-A17B: its text capabilities significantly outperform Qwen3-235B-2507, and its visual capabilities surpass Qwen3-VL-235B.

Features

Serverless API

Docs

qwen/qwen3.5-122b-a10b is available via Novita's serverless API, where you pay per token. The API can be called in several ways, including through OpenAI-compatible endpoints.

Available Serverless

Run queries immediately, pay only for usage

Input: $0.40 / M tokens
Output: $3.20 / M tokens
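Per-token pricing makes request costs easy to estimate before you send anything. The sketch below is illustrative only (the `estimate_cost` helper is not part of any SDK) and hard-codes the serverless rates listed above.

```python
# Estimate the USD cost of one request from token counts, using the
# serverless rates above: $0.40 / M input tokens, $3.20 / M output tokens.
INPUT_PRICE_PER_M = 0.40
OUTPUT_PRICE_PER_M = 3.20

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 10k-token prompt with a 2k-token completion.
print(f"${estimate_cost(10_000, 2_000):.4f}")  # $0.0104
```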

Use the following code examples to integrate with our API:

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

response = client.chat.completions.create(
    model="qwen/qwen3.5-122b-a10b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=65536,
    temperature=0.7
)

print(response.choices[0].message.content)
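The request above sets max_tokens to the model's 65,536-token output ceiling, while the total context length is 262,144 tokens. As a rough sanity check (a sketch only: exact token counts depend on the tokenizer, and whether the completion budget counts against the context window is an assumption here), you can verify a prompt plus its requested completion fit before sending:

```python
# Limits from the model card below; treating input + requested output as
# sharing the 262,144-token context window is an assumption of this sketch.
CONTEXT_WINDOW = 262_144
MAX_OUTPUT = 65_536

def fits_in_context(input_tokens: int, max_tokens: int = MAX_OUTPUT) -> bool:
    """True if the prompt plus the requested completion fit the window."""
    return max_tokens <= MAX_OUTPUT and input_tokens + max_tokens <= CONTEXT_WINDOW

print(fits_in_context(100_000))  # True  (100k + 64k well under 262k)
print(fits_in_context(200_000))  # False (200k + 64k exceeds 262k)
```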

Info

Provider: Qwen
Quantization: bf16

Supported Functionality

Context Length: 262,144 tokens
Max Output: 65,536 tokens
Serverless: Supported
Function Calling: Supported
Structured Output: Supported
Reasoning: Supported
Input Capabilities: text, image, video
Output Capabilities: text
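Since function calling is listed as supported through the OpenAI-compatible endpoint, a request can carry a standard `tools` definition. The sketch below only builds the payload without sending it; the `get_weather` tool and its schema are hypothetical, not part of this model card.

```python
# Hypothetical tool definition in the OpenAI-compatible "tools" schema.
# The get_weather function and its parameters are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"],
        },
    },
}]

# Request payload, built but not sent here; pass these as keyword
# arguments to client.chat.completions.create(...) as in the example above.
request = {
    "model": "qwen/qwen3.5-122b-a10b",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}

print(request["tools"][0]["function"]["name"])  # get_weather
```

When the model chooses to call the tool, the response's `tool_calls` entries carry the function name and JSON-encoded arguments for your code to execute.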