
Mistral Nemo

mistralai/mistral-nemo
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.

Features

Serverless API

Docs

mistralai/mistral-nemo is available via Novita's serverless API, where you pay per token. There are several ways to call the API, including OpenAI-compatible endpoints.

On-demand Deployments

Docs

On-demand deployments allow you to run mistralai/mistral-nemo on dedicated GPUs with a high-performance serving stack, high reliability, and no rate limits.

Available Serverless

Run queries immediately, pay only for usage

Input: $0.04 / M tokens
Output: $0.17 / M tokens

Use the following code examples to integrate with our API:

from openai import OpenAI

# Point the standard OpenAI client at Novita's OpenAI-compatible endpoint.
client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

response = client.chat.completions.create(
    model="mistralai/mistral-nemo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=16000,
    temperature=0.7
)

print(response.choices[0].message.content)
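The model also supports function calling (tool use). Below is a minimal sketch of the request body for a tool-enabled chat completion in the OpenAI function-calling format; the `get_weather` tool and its schema are illustrative assumptions, not part of Novita's API:

```python
import json

# Hypothetical tool definition in the OpenAI function-calling format.
# The get_weather function and its parameters are illustrative only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

# Request body for the chat completions endpoint: pass `tools`
# alongside the usual fields (shown here as a plain dict for clarity;
# with the OpenAI client you would pass tools=tools to
# client.chat.completions.create).
payload = {
    "model": "mistralai/mistral-nemo",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": tools,
}

print(json.dumps(payload, indent=2))
```

When the model decides to call the tool, the response contains a `tool_calls` entry with the function name and JSON arguments instead of plain text content.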

Info

Provider: Mistral
Quantization: fp8

Supported Functionality

Context Length: 60288
Max Output: 16000
Structured Output: Supported
Input Capabilities: text
Output Capabilities: text
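Since structured output is supported, you can ask the model to respond in JSON. A minimal sketch of such a request body, assuming an OpenAI-style `response_format` parameter (the exact structured-output fields should be confirmed against Novita's docs):

```python
import json

# Hedged sketch: request body asking for JSON output via an
# OpenAI-style response_format field (assumed parameter shape).
payload = {
    "model": "mistralai/mistral-nemo",
    "messages": [
        {"role": "user", "content": "List three primary colors as JSON."}
    ],
    "response_format": {"type": "json_object"},  # assumed parameter name
}

print(json.dumps(payload, indent=2))
```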