
DeepSeek Prover V2 671B

deepseek/deepseek-prover-v2-671b
DeepSeek-Prover-V2-671B is DeepSeek's open-source model specializing in mathematical theorem proving. It uses a Mixture-of-Experts (MoE) architecture and is trained for formal reasoning with the Lean 4 framework. With 671 billion parameters, it combines reinforcement learning with large-scale synthetic data to significantly improve automated theorem-proving capabilities.
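
To illustrate the task domain: the model generates formal proofs in Lean 4. The snippet below is a minimal, hypothetical Lean 4 theorem and proof of the kind such a prover targets; it is not taken from DeepSeek's materials.

-- Illustrative Lean 4 goal of the kind an automated prover completes.
-- The statement and proof are a hypothetical example, not DeepSeek output.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b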

Features

Serverless API

Docs

deepseek/deepseek-prover-v2-671b is available via Novita's serverless API, where you pay per token. The API can be called in several ways, including through OpenAI-compatible endpoints.

On-demand Deployments

Docs

On-demand deployments let you run deepseek/deepseek-prover-v2-671b on dedicated GPUs with a high-performance serving stack, high reliability, and no rate limits.

Available Serverless

Run queries immediately, pay only for usage

Input: $0.7 / M tokens
Output: $2.5 / M tokens
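
As a rough illustration of the pay-per-token pricing above, the Python sketch below estimates the cost of a single request; the token counts are hypothetical.

# Rough cost estimate at the listed serverless rates.
# Token counts are hypothetical; actual usage is reported per request by the API.
input_tokens = 2_000
output_tokens = 8_000
cost = input_tokens / 1_000_000 * 0.7 + output_tokens / 1_000_000 * 2.5
print(f"Estimated cost: ${cost:.4f}")  # -> Estimated cost: $0.0214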

Use the following code examples to integrate with our API:

from openai import OpenAI

# Point the OpenAI SDK client at Novita's OpenAI-compatible endpoint.
client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

# Request a chat completion from the model.
response = client.chat.completions.create(
    model="deepseek/deepseek-prover-v2-671b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=160000,
    temperature=0.7
)

print(response.choices[0].message.content)
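
Because the model specializes in formal theorem proving, a typical request replaces the generic chat prompt with a Lean 4 goal to complete. The sketch below continues from the client created above; the theorem statement and decoding parameters are illustrative assumptions, not values from DeepSeek's documentation.

# Illustrative proof request, reusing the `client` defined in the example above.
# The Lean 4 statement is a hypothetical example.
proof_request = client.chat.completions.create(
    model="deepseek/deepseek-prover-v2-671b",
    messages=[
        {"role": "user", "content": (
            "Complete the following Lean 4 proof:\n"
            "theorem add_comm_example (a b : Nat) : a + b = b + a := by\n"
            "  sorry"
        )}
    ],
    max_tokens=4096,
    temperature=0.0
)
print(proof_request.choices[0].message.content)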

Info

Provider: DeepSeek
Quantization: fp8

Supported Functionality

Context Length: 160,000 tokens
Max Output: 160,000 tokens
Input Capabilities: text
Output Capabilities: text