Context length: 131072 tokens
$0.390 / 1M input tokens
$0.390 / 1M output tokens
README

Model Description

Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. The model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.

This Hermes model uses the exact same dataset as the Hermes model trained on Llama-1. This ensures consistency between the old Hermes and the new one, for anyone who wants a model that behaves like the original Hermes, just more capable.

This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. Fine-tuning was performed with a 4096-token sequence length on an 8x A100 80GB DGX machine.

Model Training

The model was trained almost entirely on synthetic GPT-4 outputs. Curating high-quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style.

This includes data from diverse sources such as GPTeacher, the General, Roleplay v1&2, and Code Instruct datasets, Nous Instruct & PDACTL (unpublished), and several others.

Benchmark Results

AGI-Eval

| Task | Version | Metric | Value | Stderr |
|------|---------|--------|-------|--------|
| agieval_aqua_rat | 0 | acc | 0.2362 | ± 0.0267 |
| | | acc_norm | 0.2480 | ± 0.0272 |
| agieval_logiqa_en | 0 | acc | 0.3425 | ± 0.0186 |
| | | acc_norm | 0.3472 | ± 0.0187 |
| agieval_lsat_ar | 0 | acc | 0.2522 | ± 0.0287 |
| | | acc_norm | 0.2087 | ± 0.0269 |
| agieval_lsat_lr | 0 | acc | 0.3510 | ± 0.0212 |
| | | acc_norm | 0.3627 | ± 0.0213 |
| agieval_lsat_rc | 0 | acc | 0.4647 | ± 0.0305 |
| | | acc_norm | 0.4424 | ± 0.0303 |
| agieval_sat_en | 0 | acc | 0.6602 | ± 0.0331 |
| | | acc_norm | 0.6165 | ± 0.0340 |
| agieval_sat_en_without_passage | 0 | acc | 0.4320 | ± 0.0346 |
| | | acc_norm | 0.4272 | ± 0.0345 |
| agieval_sat_math | 0 | acc | 0.2909 | ± 0.0307 |
| | | acc_norm | 0.2727 | ± 0.0301 |

GPT-4All Benchmark Set

| Task | Version | Metric | Value | Stderr |
|------|---------|--------|-------|--------|
| arc_challenge | 0 | acc | 0.5102 | ± 0.0146 |
| | | acc_norm | 0.5213 | ± 0.0146 |
| arc_easy | 0 | acc | 0.7959 | ± 0.0083 |
| | | acc_norm | 0.7567 | ± 0.0088 |
| boolq | 1 | acc | 0.8394 | ± 0.0064 |
| hellaswag | 0 | acc | 0.6164 | ± 0.0049 |
| | | acc_norm | 0.8009 | ± 0.0040 |
| openbookqa | 0 | acc | 0.3580 | ± 0.0215 |
| | | acc_norm | 0.4620 | ± 0.0223 |
| piqa | 0 | acc | 0.7992 | ± 0.0093 |
| | | acc_norm | 0.8069 | ± 0.0092 |
| winogrande | 0 | acc | 0.7127 | ± 0.0127 |

BigBench Reasoning Test

| Task | Version | Metric | Value | Stderr |
|------|---------|--------|-------|--------|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 0.5526 | ± 0.0362 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 0.7344 | ± 0.0230 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 0.2636 | ± 0.0275 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 0.0195 | ± 0.0073 |
| | | exact_str_match | 0.0000 | ± 0.0000 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 0.2760 | ± 0.0200 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 0.2100 | ± 0.0154 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 0.4400 | ± 0.0287 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 0.2440 | ± 0.0192 |
| bigbench_navigate | 0 | multiple_choice_grade | 0.4950 | ± 0.0158 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 0.5570 | ± 0.0111 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 0.3728 | ± 0.0229 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 0.1854 | ± 0.0123 |
| bigbench_snarks | 0 | multiple_choice_grade | 0.6298 | ± 0.0360 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 0.6156 | ± 0.0155 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 0.3140 | ± 0.0147 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 0.2032 | ± 0.0114 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 0.1406 | ± 0.0083 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 0.4400 | ± 0.0287 |

These are the best benchmark results Hermes has achieved on every metric, with the following average scores:

  • GPT4All benchmark average: 70.0, up from 68.8 on Hermes-Llama1
  • BigBench: 0.3657, up from 0.328 on Hermes-Llama1
  • AGIEval: 0.372, up from 0.354 on Hermes-Llama1

On GPT4All's benchmarking list, these results currently place the model at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, supplanting Hermes 1 at the top position.
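The 70.0 GPT4All average can be checked against the table above. A sketch of the arithmetic follows; the metric selection (acc_norm where reported, plain acc for boolq and winogrande) is an assumption based on the common GPT4All benchmarking convention, not something the page states explicitly:

```python
# Per-task scores taken from the GPT-4All table above:
# acc_norm where reported; boolq and winogrande only report acc.
scores = {
    "arc_challenge": 0.5213,
    "arc_easy": 0.7567,
    "boolq": 0.8394,
    "hellaswag": 0.8009,
    "openbookqa": 0.4620,
    "piqa": 0.8069,
    "winogrande": 0.7127,
}

average = sum(scores.values()) / len(scores) * 100
print(round(average, 1))  # → 70.0
```

The result matches the 70.0 average quoted above, which supports this reading of the table.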

How to use

You can access our nousresearch/nous-hermes-llama2-13b model in three ways: HTTP/cURL, Python, or JavaScript.

HTTP/cURL

We provide compatibility with the OpenAI API standard.

The API Base URL

https://api.novita.ai/v3/openai

Example of Using Chat Completions API

Generate a response using a list of messages from a conversation

# Get the Novita AI API Key by referring to: https://novita.ai/docs/get-started/quickstart.html#_2-manage-api-key
export API_KEY="{YOUR Novita AI API Key}"

curl "https://api.novita.ai/v3/openai/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{
    "model": "nousresearch/nous-hermes-llama2-13b",
    "messages": [
      {
        "role": "system",
        "content": "Act like you are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hi there!"
      }
    ],
    "max_tokens": 512
  }'

The response may look like this

{
  "id": "chat-5f461a9a23a44ef29dbd3124b891afc0",
  "object": "chat.completion",
  "created": 1731584707,
  "model": "nousresearch/nous-hermes-llama2-13b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! It's nice to meet you. How can I assist you today? Do you have any questions or topics you'd like to discuss? I'm here to help with anything you need."
      },
      "finish_reason": "stop",
      "content_filter_results": {
        "hate": { "filtered": false },
        "self_harm": { "filtered": false },
        "sexual": { "filtered": false },
        "violence": { "filtered": false },
        "jailbreak": { "filtered": false, "detected": false },
        "profanity": { "filtered": false, "detected": false }
      }
    }
  ],
  "usage": {
    "prompt_tokens": 46,
    "completion_tokens": 40,
    "total_tokens": 86,
    "prompt_tokens_details": null,
    "completion_tokens_details": null
  },
  "system_fingerprint": ""
}
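The assistant's reply lives at choices[0].message.content in that JSON. A minimal sketch of pulling it out in Python (the response body below is an abbreviated stand-in for the full example above):

```python
import json

# Abbreviated stand-in for the chat.completion response shown above.
response_body = """
{
  "id": "chat-5f461a9a23a44ef29dbd3124b891afc0",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! It's nice to meet you."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 46, "completion_tokens": 40, "total_tokens": 86}
}
"""

data = json.loads(response_body)
reply = data["choices"][0]["message"]["content"]
print(reply)  # → Hello! It's nice to meet you.
```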

If you want to receive the response via streaming, simply pass "stream": true in the request body, as in the following example.

# Get the Novita AI API Key by referring to: https://novita.ai/docs/get-started/quickstart.html#_2-manage-api-key
export API_KEY="{YOUR Novita AI API Key}"

curl "https://api.novita.ai/v3/openai/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{
    "model": "nousresearch/nous-hermes-llama2-13b",
    "messages": [
      {
        "role": "system",
        "content": "Act like you are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hi there!"
      }
    ],
    "max_tokens": 512,
    "stream": true
  }'

The response may look like this

data: {"id":"chat-d821b951d6ff43ab838d18137aef7d0a","object":"chat.completion.chunk","created":1731586102,"model":"nousresearch/nous-hermes-llama2-13b","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null,"content_filter_results":{"hate":{"filtered":false},"self_harm":{"filtered":false},"sexual":{"filtered":false},"violence":{"filtered":false},"jailbreak":{"filtered":false,"detected":false},"profanity":{"filtered":false,"detected":false}}}],"system_fingerprint":""}

...

data: {"id":"chat-d821b951d6ff43ab838d18137aef7d0a","object":"chat.completion.chunk","created":1731586102,"model":"nousresearch/nous-hermes-llama2-13b","choices":[{"index":0,"delta":{"content":"n, ne"},"finish_reason":null,"content_filter_results":{"hate":{"filtered":false},"self_harm":{"filtered":false},"sexual":{"filtered":false},"violence":{"filtered":false},"jailbreak":{"filtered":false,"detected":false},"profanity":{"filtered":false,"detected":false}}}],"system_fingerprint":""}

data: {"id":"chat-d821b951d6ff43ab838d18137aef7d0a","object":"chat.completion.chunk","created":1731586102,"model":"nousresearch/nous-hermes-llama2-13b","choices":[{"index":0,"delta":{"content":"ed"},"finish_reason":null,"content_filter_results":{"hate":{"filtered":false},"self_harm":{"filtered":false},"sexual":{"filtered":false},"violence":{"filtered":false},"jailbreak":{"filtered":false,"detected":false},"profanity":{"filtered":false,"detected":false}}}],"system_fingerprint":""}

data: {"id":"chat-d821b951d6ff43ab838d18137aef7d0a","object":"chat.completion.chunk","created":1731586102,"model":"nousresearch/nous-hermes-llama2-13b","choices":[{"index":0,"delta":{"content":" assi"},"finish_reason":null,"content_filter_results":{"hate":{"filtered":false},"self_harm":{"filtered":false},"sexual":{"filtered":false},"violence":{"filtered":false},"jailbreak":{"filtered":false,"detected":false},"profanity":{"filtered":false,"detected":false}}}],"system_fingerprint":""}

data: {"id":"chat-d821b951d6ff43ab838d18137aef7d0a","object":"chat.completion.chunk","created":1731586102,"model":"nousresearch/nous-hermes-llama2-13b","choices":[{"index":0,"delta":{"content":"s"},"finish_reason":null,"content_filter_results":{"hate":{"filtered":false},"self_harm":{"filtered":false},"sexual":{"filtered":false},"violence":{"filtered":false},"jailbreak":{"filtered":false,"detected":false},"profanity":{"filtered":false,"detected":false}}}],"system_fingerprint":""}

data: {"id":"chat-d821b951d6ff43ab838d18137aef7d0a","object":"chat.completion.chunk","created":1731586102,"model":"nousresearch/nous-hermes-llama2-13b","choices":[{"index":0,"delta":{"content":"tan"},"finish_reason":null,"content_filter_results":{"hate":{"filtered":false},"self_harm":{"filtered":false},"sexual":{"filtered":false},"violence":{"filtered":false},"jailbreak":{"filtered":false,"detected":false},"profanity":{"filtered":false,"detected":false}}}],"system_fingerprint":""}

data: {"id":"chat-d821b951d6ff43ab838d18137aef7d0a","object":"chat.completion.chunk","created":1731586102,"model":"nousresearch/nous-hermes-llama2-13b","choices":[{"index":0,"delta":{"content":"ce wi"},"finish_reason":null,"content_filter_results":{"hate":{"filtered":false},"self_harm":{"filtered":false},"sexual":{"filtered":false},"violence":{"filtered":false},"jailbreak":{"filtered":false,"detected":false},"profanity":{"filtered":false,"detected":false}}}],"system_fingerprint":""}

...

data: {"id":"chat-d821b951d6ff43ab838d18137aef7d0a","object":"chat.completion.chunk","created":1731586102,"model":"nousresearch/nous-hermes-llama2-13b","choices":[{"index":0,"delta":{"content":" "},"finish_reason":null,"content_filter_results":{"hate":{"filtered":false},"self_harm":{"filtered":false},"sexual":{"filtered":false},"violence":{"filtered":false},"jailbreak":{"filtered":false,"detected":false},"profanity":{"filtered":false,"detected":false}}}],"system_fingerprint":""}

data: {"id":"chat-d821b951d6ff43ab838d18137aef7d0a","object":"chat.completion.chunk","created":1731586102,"model":"nousresearch/nous-hermes-llama2-13b","choices":[{"index":0,"delta":{"content":"just want to chat?"},"finish_reason":"stop","content_filter_results":{"hate":{"filtered":false},"self_harm":{"filtered":false},"sexual":{"filtered":false},"violence":{"filtered":false},"jailbreak":{"filtered":false,"detected":false},"profanity":{"filtered":false,"detected":false}}}],"system_fingerprint":""}

data: [DONE]
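Each streamed line is a server-sent event: everything after the data: prefix is a JSON chunk, and the stream terminates with data: [DONE]. A minimal sketch of reassembling the full reply from raw SSE lines; collect_sse_reply is a hypothetical helper, and the sample lines are stand-ins for a real stream:

```python
import json

def collect_sse_reply(lines):
    """Reassemble the assistant reply from `data:` SSE lines."""
    parts = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))  # first chunk has only a role
    return "".join(parts)

# Stand-in sample lines mimicking the stream shown above.
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hi the"}}]}',
    'data: {"choices": [{"delta": {"content": "re!"}}]}',
    "data: [DONE]",
]
print(collect_sse_reply(sample))  # → Hi there!
```

In practice the OpenAI client libraries shown below do this parsing for you; this is only useful if you consume the raw HTTP stream yourself.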

Model Parameters

Feel free to check out our documentation for more details.

Python

First, install the official OpenAI Python client

pip install 'openai>=1.0.0'

and then you can run inference with us.

Example of Using Chat Completions API

Generate a response using a list of messages from a conversation

from openai import OpenAI

client = OpenAI(
    base_url="https://api.novita.ai/v3/openai",
    # Get the Novita AI API Key by referring to: https://novita.ai/docs/get-started/quickstart.html#_2-manage-api-key.
    api_key="<YOUR Novita AI API Key>",
)

model = "nousresearch/nous-hermes-llama2-13b"
stream = True  # or False
max_tokens = 512

chat_completion_res = client.chat.completions.create(
    model=model,
    messages=[
        {
            "role": "system",
            "content": "Act like you are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "Hi there!",
        },
    ],
    stream=stream,
    max_tokens=max_tokens,
)

if stream:
    for chunk in chat_completion_res:
        print(chunk.choices[0].delta.content or "")
else:
    print(chat_completion_res.choices[0].message.content)

If you set stream = True, the printed output may look like this

It'
s
 ni
ce to
meet you.
Is
 the
re so
meth
ing I
 can h
e
lp
you wi
th t
oday,
 or
 woul
d
 you like to chat?
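Each streamed chunk carries only a fragment of the reply, so if you want the full message as one string, accumulate the fragments instead of printing them. A minimal sketch; collect_text is a hypothetical helper and the fragments are illustrative, not taken from a live stream:

```python
def collect_text(fragments):
    """Join streamed delta fragments into one string, skipping empty chunks."""
    return "".join(f for f in fragments if f)

# Fragments like the ones printed above reassemble into the full reply:
fragments = ["It'", "s", " ni", "ce to ", "meet you."]
print(collect_text(fragments))  # → It's nice to meet you.
```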

If you don't want to receive the response via streaming, simply set stream = False. The output will look like this

How can I assist you today? Do you have any questions or topics you'd like to discuss?

Model Parameters

Feel free to check out our documentation for more details.

JavaScript

First, install the official OpenAI JavaScript client

npm install openai

and then you can run inference with us in the browser or in Node.js.

Example of Using Chat Completions API

Generate a response using a list of messages from a conversation

import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "https://api.novita.ai/v3/openai",
  apiKey: "<YOUR Novita AI API Key>",
});
const stream = true; // or false

async function run() {
  const completion = await openai.chat.completions.create({
    messages: [
      {
        role: "system",
        content: "Act like you are a helpful assistant.",
      },
      {
        role: "user",
        content: "Hi there!",
      },
    ],
    model: "nousresearch/nous-hermes-llama2-13b",
    stream,
  });

  if (stream) {
    for await (const chunk of completion) {
      if (chunk.choices[0].finish_reason) {
        console.log(chunk.choices[0].finish_reason);
      } else {
        console.log(chunk.choices[0].delta.content);
      }
    }
  } else {
    console.log(JSON.stringify(completion));
  }
}

run();

If you set const stream = true, the printed output may look like this

It'
s
 nic
e to
 m
eet you
. Ho
w can
I
 as
sist
 you
toda
y? Do you
hav
e any q
uest
io
ns or
 to
pics you
'
d
li
ke to
 di
scuss
stop

If you don't want to receive the response via streaming, simply set const stream = false. The output will look like this

{
  "id": "chat-a3ff0e39b4c24abcbd258ab1a1f38db9",
  "object": "chat.completion",
  "created": 1731642457,
  "model": "nousresearch/nous-hermes-llama2-13b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "How can I help you today? Would you like to talk about something specific or just have a chat? I'm here to assist you with any questions or information you might need."
      },
      "finish_reason": "stop",
      "content_filter_results": {
        "hate": { "filtered": false },
        "self_harm": { "filtered": false },
        "sexual": { "filtered": false },
        "violence": { "filtered": false },
        "jailbreak": { "filtered": false, "detected": false },
        "profanity": { "filtered": false, "detected": false }
      }
    }
  ],
  "usage": {
    "prompt_tokens": 46,
    "completion_tokens": 37,
    "total_tokens": 83,
    "prompt_tokens_details": null,
    "completion_tokens_details": null
  },
  "system_fingerprint": ""
}

Model Parameters

Feel free to check out our documentation for more details.