
Qwen3 Coder 30b A3B Instruct

qwen/qwen3-coder-30b-a3b-instruct
Qwen3-Coder-30B-A3B-Instruct is a 30.5B-parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use. Built on the Qwen3 architecture, it supports a native context length of 256K tokens (extendable to 1M with YaRN) and performs strongly on tasks involving function calls, browser use, and structured code completion. The model is optimized for instruction following without a "thinking mode" and integrates well with OpenAI-compatible tool-use formats.

Features

Serverless API

Docs

qwen/qwen3-coder-30b-a3b-instruct is available via Novita's serverless API, where you pay per token. There are several ways to call the API, including OpenAI-compatible endpoints.

On-demand Deployments

Docs

On-demand deployments let you run qwen/qwen3-coder-30b-a3b-instruct on dedicated GPUs with a high-performance serving stack, high reliability, and no rate limits.

Available Serverless

Run queries immediately, pay only for usage

Input: $0.07 / M tokens
Output: $0.27 / M tokens
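At these rates, the cost of a request can be estimated directly from its token counts. A quick sketch, with the per-token prices derived from the table above:

```python
# Serverless rates from the pricing table above, converted to USD per token.
INPUT_PRICE = 0.07 / 1_000_000   # $0.07 per million input tokens
OUTPUT_PRICE = 0.27 / 1_000_000  # $0.27 per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the serverless rates above."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a 10K-token prompt producing a 2K-token completion:
cost = estimate_cost(10_000, 2_000)
```

For example, one million input tokens plus one million output tokens comes to $0.34.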

Use the following code examples to integrate with our API:

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

response = client.chat.completions.create(
    model="qwen/qwen3-coder-30b-a3b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=32768,
    temperature=0.7
)

print(response.choices[0].message.content)
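Since the model supports OpenAI-compatible function calling, the same client can pass a `tools` list in the standard OpenAI tool schema. A minimal sketch of defining a tool and dispatching a tool call the model might issue; the `get_weather` tool, its parameters, and the dispatch logic are illustrative assumptions, not part of Novita's API:

```python
import json

# OpenAI-compatible tool schema; `get_weather` is a hypothetical example tool.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

def dispatch_tool_call(name: str, arguments: str) -> str:
    """Route a model-issued tool call to a local handler (illustrative stub)."""
    if name == "get_weather":
        args = json.loads(arguments)  # the model returns arguments as a JSON string
        return json.dumps({"city": args["city"], "forecast": "sunny"})
    raise ValueError(f"unknown tool: {name}")

# Pass `tools=tools` to client.chat.completions.create(...); when the response
# message contains tool_calls, run each through dispatch_tool_call and send the
# result back to the model as a message with role "tool".
```

The request/response loop (send tools, execute tool_calls, return results) follows the standard OpenAI chat-completions flow.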

Info

Provider: Qwen
Quantization: fp8

Supported Functionality

Context Length: 262,144
Max Output: 32,768
Structured Output: Supported
Function Calling: Supported
Input Capabilities: text
Output Capabilities: text
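The structured-output support listed above corresponds to the OpenAI-compatible `response_format` parameter. A hedged sketch of a JSON-mode request; whether this endpoint accepts `"json_object"`, `"json_schema"`, or both is an assumption to verify against Novita's docs:

```python
# Build the keyword arguments for an OpenAI-compatible JSON-mode request.
# The "json_object" response_format type is an assumption; check the provider's
# docs for the structured-output variants this endpoint actually accepts.
request = {
    "model": "qwen/qwen3-coder-30b-a3b-instruct",
    "messages": [
        {"role": "system", "content": "Reply with a JSON object only."},
        {"role": "user", "content": "List three Python web frameworks."},
    ],
    "response_format": {"type": "json_object"},
    "max_tokens": 1024,
}

# Unpack into the client shown earlier:
# response = client.chat.completions.create(**request)
```

With JSON mode it is good practice to also instruct the model in the prompt to emit JSON, as the system message above does.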