
ERNIE-4.5-VL-28B-A3B-Thinking

baidu/ernie-4.5-vl-28b-a3b-thinking
Built upon the powerful ERNIE-4.5-VL-28B-A3B architecture, the newly upgraded ERNIE-4.5-VL-28B-A3B-Thinking achieves a remarkable leap forward in multimodal reasoning capabilities. 🧠✨ Through an extensive mid-training phase, the model absorbed a vast and highly diverse corpus of premium visual-language reasoning data. This massive-scale training process dramatically boosted the model’s representation power while deepening the semantic alignment between visual and language modalities—unlocking unprecedented capabilities in nuanced visual-textual reasoning. 📊 The model leverages cutting-edge multimodal reinforcement learning techniques on verifiable tasks, integrating GSPO and IcePop strategies to stabilize MoE training combined with dynamic difficulty sampling for exceptional learning efficiency. ⚡ Responding to strong community demand, we’ve significantly strengthened the model’s grounding performance with improved instruction-following capabilities, making visual grounding functions more accessible than eve

Features

Serverless API

Docs

baidu/ernie-4.5-vl-28b-a3b-thinking is available via Novita's serverless API, where you pay per token. The API can be called in several ways, including through OpenAI-compatible endpoints.

Available Serverless

Run queries immediately, pay only for usage

Input: $0.39 / M Tokens
Output: $0.39 / M Tokens

Use the following code examples to integrate with our API:

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

response = client.chat.completions.create(
    model="baidu/ernie-4.5-vl-28b-a3b-thinking",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=65536,
    temperature=0.7
)

print(response.choices[0].message.content)
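Since the model accepts image input, image prompts can be passed using the OpenAI-compatible "content parts" message format. The sketch below is illustrative: the build_image_message helper and the placeholder image URL are our own, and support for image_url parts on this endpoint should be confirmed in Novita's docs.

```python
import os

# Hypothetical helper: packages a text prompt plus an image URL into the
# OpenAI-compatible "content parts" message format used by vision models.
def build_image_message(prompt: str, image_url: str) -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

message = build_image_message(
    "Describe this chart and summarize its key trend.",
    "https://example.com/chart.png",  # placeholder URL, not a real asset
)

# Only send the request when an API key is configured
# (requires the `openai` package installed).
if os.environ.get("NOVITA_API_KEY"):
    from openai import OpenAI
    client = OpenAI(
        api_key=os.environ["NOVITA_API_KEY"],
        base_url="https://api.novita.ai/openai",
    )
    response = client.chat.completions.create(
        model="baidu/ernie-4.5-vl-28b-a3b-thinking",
        messages=[message],
        max_tokens=4096,
    )
    print(response.choices[0].message.content)
```

The same content-parts structure extends to multiple images per message by appending additional image_url entries to the content list.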

Info

Provider
BAIDU
Quantization
fp16

Supported Functionality

Context Length
131,072 tokens
Max Output
65,536 tokens
Serverless
Supported
Function Calling
Supported
Reasoning
Supported
Structured Output
Supported
Input Capabilities
text, image, video
Output Capabilities
text
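Because function calling is supported, tools can be declared with the OpenAI-compatible tools parameter. This is a minimal sketch, assuming the endpoint returns tool_calls in the standard OpenAI shape; the get_weather tool is a made-up example, not an API the service provides.

```python
import json
import os

# Hypothetical tool schema in the OpenAI-compatible "tools" format:
# a single function the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Only send the request when an API key is configured
# (requires the `openai` package installed).
if os.environ.get("NOVITA_API_KEY"):
    from openai import OpenAI
    client = OpenAI(
        api_key=os.environ["NOVITA_API_KEY"],
        base_url="https://api.novita.ai/openai",
    )
    response = client.chat.completions.create(
        model="baidu/ernie-4.5-vl-28b-a3b-thinking",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    # If the model decided to call the tool, its name and JSON arguments
    # arrive in message.tool_calls rather than message.content.
    call = response.choices[0].message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```

After executing the tool locally, the result is sent back in a follow-up message with role "tool" so the model can produce its final answer.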