
GLM-4.7-Flash

zai-org/glm-4.7-flash
GLM-4.7-Flash is a state-of-the-art model in the 30B class that balances high performance with efficiency. Tailored for agentic coding, it strengthens coding proficiency, long-horizon planning, and tool use, achieving top-tier results on public benchmarks among similarly sized open-source models. It excels at complex agent tasks with strong instruction following for tool calls, and improves both the frontend aesthetics of Artifacts output and the completion efficiency of long-range agentic coding workflows.

Features

Serverless API

Docs

zai-org/glm-4.7-flash is available via Novita's serverless API, where you pay per token. There are several ways to call the API, including OpenAI-compatible endpoints.

Available Serverless

Run queries immediately, pay only for usage

Input: $0.07 / M Tokens
Cache Read: $0.01 / M Tokens
Output: $0.40 / M Tokens
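
The per-token rates above make request costs easy to estimate. Below is a minimal sketch of that arithmetic; the helper function and the example token counts are illustrative, not part of any Novita SDK.

```python
# Serverless rates from the pricing table, converted to $ per token.
INPUT_RATE = 0.07 / 1_000_000       # fresh input tokens
CACHE_READ_RATE = 0.01 / 1_000_000  # input tokens served from cache
OUTPUT_RATE = 0.40 / 1_000_000      # generated output tokens

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the USD cost of one request (illustrative helper)."""
    fresh_tokens = input_tokens - cached_tokens
    return (fresh_tokens * INPUT_RATE
            + cached_tokens * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE)

# Example: a 100k-token prompt, half of it cache hits, with a 2k-token answer.
print(f"${estimate_cost(100_000, 2_000, cached_tokens=50_000):.4f}")  # → $0.0048
```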

Use the following code examples to integrate with our API:

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

response = client.chat.completions.create(
    model="zai-org/glm-4.7-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=128000,
    temperature=0.7
)

print(response.choices[0].message.content)

Info

Provider: Zai-org
Quantization: bf16

Supported Functionality

Context Length: 200,000 tokens
Max Output: 128,000 tokens
Serverless: Supported
Function Calling: Supported
Structured Output: Supported
Reasoning: Supported
Input Capabilities: text
Output Capabilities: text
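
Since function calling is listed as supported, a tool definition can be passed through the OpenAI-compatible endpoint in the standard chat-completions shape. The sketch below builds such a request body; the `get_weather` tool and its parameters are hypothetical examples, not part of the GLM-4.7-Flash documentation.

```python
import json

# Hypothetical tool schema in the OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not a real API
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Request body mirroring the chat-completions call shown above.
payload = {
    "model": "zai-org/glm-4.7-flash",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",
}
print(json.dumps(payload, indent=2))

# With the OpenAI client configured as in the earlier example, this would be:
#   response = client.chat.completions.create(**payload)
#   tool_calls = response.choices[0].message.tool_calls
```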
