
MiniMax M2.7

minimax/minimax-m2.7
MiniMax M2.7 is a versatile open-source large language model that blends hardcore engineering productivity with high-EQ, human-like interaction. In real-world software engineering, M2.7 can independently drive end-to-end project delivery while handling advanced tasks such as log analysis, bug troubleshooting, code security, and machine learning. In the professional workspace, it posts the highest GDPval-AA score among open-source models (1495 Elo) and delivers high-fidelity complex editing and multi-turn revisions across the Office suite (Excel, PowerPoint, Word). Built for complex environment interactions, M2.7 maintains a 97% skill-following rate even with long-context tool calls (>2000 tokens). Beyond raw productivity, M2.7 breaks the "cold tool" stereotype of traditional models, pairing exceptional identity retention with high emotional intelligence (EQ).

Features

Serverless API

Docs

minimax/minimax-m2.7 is available via Novita's serverless API, where you pay per token. The API can be called in several ways, including through OpenAI-compatible endpoints.

Available Serverless

Run queries immediately, pay only for usage

Input: $0.3 / M tokens
Cache Read: $0.06 / M tokens
Output: $1.2 / M tokens
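With the per-token rates above, the cost of a single request is simple arithmetic. The sketch below is illustrative (the `estimate_cost` helper is not part of the API) and hard-codes the rates shown on this page; adjust if pricing changes.

```python
# Serverless rates from this page, expressed in dollars per token.
INPUT_RATE = 0.3 / 1_000_000
CACHE_READ_RATE = 0.06 / 1_000_000
OUTPUT_RATE = 1.2 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the USD cost of one request; cached input tokens bill at the cache-read rate."""
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_RATE
            + cached_tokens * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE)

# e.g. 10k input tokens (2k served from cache) and 1k output tokens:
print(f"${estimate_cost(10_000, 1_000, cached_tokens=2_000):.5f}")  # → $0.00372
```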

Use the following code examples to integrate with our API:

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

response = client.chat.completions.create(
    model="minimax/minimax-m2.7",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=131072,
    temperature=0.7
)

print(response.choices[0].message.content)
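Function calling is listed as supported below. The sketch that follows builds a tool definition in the standard OpenAI tools schema; the `get_weather` tool, its parameters, and the `NOVITA_API_KEY` environment variable are hypothetical, and the request is only issued when a key is present.

```python
import os

# Hypothetical tool definition in the standard OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

api_key = os.environ.get("NOVITA_API_KEY")  # set this to actually send the request
if api_key:
    from openai import OpenAI
    client = OpenAI(api_key=api_key, base_url="https://api.novita.ai/openai")
    response = client.chat.completions.create(
        model="minimax/minimax-m2.7",
        messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
        tools=tools,
    )
    # When the model chooses to call the tool, the name and JSON-encoded
    # arguments arrive on message.tool_calls rather than message.content.
    for call in response.choices[0].message.tool_calls or []:
        print(call.function.name, call.function.arguments)
```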

Info

Provider
MiniMax
Quantization
fp8

Supported Functionality

Context Length
204,800 tokens
Max Output
131,072 tokens
Serverless
Supported
Function Calling
Supported
Structured Output
Supported
Reasoning
Supported
Anthropic API
Supported
Input Capabilities
text
Output Capabilities
text
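Structured output is also listed as supported. Assuming Novita's OpenAI-compatible endpoint accepts the standard `response_format` parameter in its `json_schema` form, a minimal sketch might look like the following; the `city_info` schema and the `NOVITA_API_KEY` environment variable are illustrative, and the call only runs when a key is set.

```python
import json
import os

# Illustrative JSON schema to constrain the model's output.
schema = {
    "name": "city_info",
    "schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "population": {"type": "integer"},
        },
        "required": ["city", "population"],
    },
}

api_key = os.environ.get("NOVITA_API_KEY")  # set this to actually send the request
if api_key:
    from openai import OpenAI
    client = OpenAI(api_key=api_key, base_url="https://api.novita.ai/openai")
    response = client.chat.completions.create(
        model="minimax/minimax-m2.7",
        messages=[{"role": "user", "content": "Give me basic facts about Tokyo."}],
        response_format={"type": "json_schema", "json_schema": schema},
    )
    # With a schema-constrained response, the content should parse as JSON.
    data = json.loads(response.choices[0].message.content)
    print(data["city"], data["population"])
```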