Llama 3.3 70B Instruct

meta-llama/llama-3.3-70b-instruct
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model with 70B parameters (text in, text out). The Llama 3.3 instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many available open-source and closed chat models on common industry benchmarks. Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Features

Serverless API

Docs

meta-llama/llama-3.3-70b-instruct is available via Novita's serverless API, where you pay per token. There are several ways to call the API, including OpenAI-compatible endpoints.

On-demand Deployments

Docs

On-demand deployments let you run meta-llama/llama-3.3-70b-instruct on dedicated GPUs with a high-performance serving stack, high reliability, and no rate limits.

Available Serverless

Run queries immediately, pay only for usage

Input: $0.13 / M tokens
Output: $0.39 / M tokens
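With per-token pricing, the cost of a request is straightforward arithmetic: prompt tokens and completion tokens are billed at their respective per-million rates. A minimal sketch (the helper name and usage numbers are illustrative, not part of any API):

```python
# Serverless per-million-token rates from the pricing table above.
INPUT_PRICE_PER_M = 0.13   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.39  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the serverless rates."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    )

# Example: a 2,000-token prompt with a 500-token completion.
cost = estimate_cost(2_000, 500)
print(f"${cost:.6f}")
```

Actual billed usage comes back in the `usage` field of each API response, so the same arithmetic can be applied to `usage.prompt_tokens` and `usage.completion_tokens`.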

Use the following code examples to integrate with our API:

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=120000,
    temperature=0.7
)

print(response.choices[0].message.content)

Info

Provider
Llama
Quantization
bf16

Supported Functionality

Context Length
131072
Max Output
120000
Function Calling
Supported
Input Capabilities
text
Output Capabilities
text
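Since function calling is listed as supported, tools can be passed to the chat completions call in the standard OpenAI tool-schema format. A minimal sketch of defining a tool and dispatching the model's tool call locally; the `get_weather` function, its schema, and the returned data are hypothetical examples, not part of the Novita API:

```python
import json

# Hypothetical local function the model may choose to call.
def get_weather(city: str) -> str:
    """Return a stubbed weather report as a JSON string."""
    return json.dumps({"city": city, "temp_c": 21})

# Tool schema in the OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def run_tool_call(tool_call: dict) -> str:
    """Dispatch a tool call (as returned by the model) to the local function."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    if name == "get_weather":
        return get_weather(**args)
    raise ValueError(f"unknown tool: {name}")
```

In practice you would pass `tools=tools` to `client.chat.completions.create(...)` alongside the model and messages shown in the earlier example, check `response.choices[0].message.tool_calls`, run each call through a dispatcher like `run_tool_call`, and send the results back as `role: "tool"` messages.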