
L3 70B Euryale V2.1

sao10k/l3-70b-euryale-v2.1
L3 70B Euryale V2.1 is an uncensored Llama 3 model built for creative work, excelling at both roleplay and story writing. It runs without content restrictions during roleplay and stands out for its creativity, producing a wide range of original ideas and plots. Think of it as a bigger-brained successor to Stheno: an ideal choice for writers looking for an unrestricted outlet for their imagination.

Features

Serverless API

Docs

sao10k/l3-70b-euryale-v2.1 is available via Novita's serverless API, where you pay per token. The API can be called in several ways, including through OpenAI-compatible endpoints.

On-demand Deployments

Docs

On-demand deployments let you run sao10k/l3-70b-euryale-v2.1 on dedicated GPUs with a high-performance serving stack, high reliability, and no rate limits.
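As a minimal sketch only: assuming a dedicated deployment exposes its own OpenAI-compatible base URL, usage mirrors the serverless example further down, with the placeholder URL here swapped for the endpoint shown in your deployment's dashboard.

from openai import OpenAI

# Placeholder base URL -- replace it with the endpoint issued for your
# dedicated deployment (see the on-demand deployments docs).
client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://<your-dedicated-endpoint>/openai"
)

response = client.chat.completions.create(
    model="sao10k/l3-70b-euryale-v2.1",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=256
)
print(response.choices[0].message.content)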

Available Serverless

Run queries immediately, pay only for usage

Input: $1.48 / M tokens
Output: $1.48 / M tokens
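As a rough illustration of the per-token pricing above, the snippet below estimates the cost of a single request; the token counts in the example are made up.

# Both input and output are billed at $1.48 per million tokens.
PRICE_PER_M_TOKENS = 1.48

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for one request at the listed serverless rates."""
    return (input_tokens + output_tokens) / 1_000_000 * PRICE_PER_M_TOKENS

# Example: a 1,500-token prompt with a 500-token completion.
print(f"${estimate_cost(1_500, 500):.6f}")  # about $0.00296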

Use the following code examples to integrate with our API:

from openai import OpenAI

# Point the OpenAI client at Novita's OpenAI-compatible endpoint.
client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

# Standard chat completion request against the Euryale model.
response = client.chat.completions.create(
    model="sao10k/l3-70b-euryale-v2.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    max_tokens=8192,
    temperature=0.7
)

print(response.choices[0].message.content)
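For interactive use, the same request can be streamed token by token. This sketch assumes the endpoint honors the OpenAI client's standard stream=True option (a reasonable assumption for an OpenAI-compatible API, but worth confirming in the docs).

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

# Stream the completion chunk by chunk instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="sao10k/l3-70b-euryale-v2.1",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    max_tokens=512,
    temperature=0.7,
    stream=True
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()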

Info

Provider: Sao10K
Quantization: bf16

Supported Functionality

Context Length: 8192 tokens
Max Output: 8192 tokens
Function Calling: Supported (see the example sketch below)
Input Capabilities: text
Output Capabilities: text
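Because function calling is listed as supported, requests through the OpenAI-compatible endpoint should accept the standard tools parameter. The sketch below is illustrative: the get_weather tool and its schema are hypothetical, not part of Novita's API.

from openai import OpenAI

client = OpenAI(
    api_key="<Your API Key>",
    base_url="https://api.novita.ai/openai"
)

# Illustrative tool definition -- get_weather is a hypothetical function.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }
    }
}]

response = client.chat.completions.create(
    model="sao10k/l3-70b-euryale-v2.1",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools
)

# If the model chose to call the tool, its arguments arrive as a JSON string.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)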