Mythomax L2 13B

gryphe/mythomax-l2-13b
The idea behind this merge is that each layer is composed of several tensors, each responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output appears to have produced a model that excels at both, confirming my theory. (More details to be released at a later time.)
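The merge described above can be sketched as a layer-wise blend: early layers lean toward one parent model, late layers toward the other. The linear blending schedule, layer count, and tensor shapes below are assumptions for illustration only; the actual merge recipe has not been published.

```python
import numpy as np

def merge_layer(tensor_a, tensor_b, layer_idx, num_layers):
    # Blend weight ramps from 0 (pure model A, the "input" side)
    # at the first layer to 1 (pure model B, the "output" side)
    # at the last layer. This linear schedule is a hypothetical
    # stand-in for the unpublished merge gradient.
    w = layer_idx / (num_layers - 1)
    return (1.0 - w) * tensor_a + w * tensor_b

num_layers = 40  # LLaMA-2 13B has 40 transformer layers
# Stand-in tensors: model A's layers are all zeros, model B's all ones,
# so the blend weight is directly visible in the merged values.
a_layers = [np.zeros((4, 4)) for _ in range(num_layers)]
b_layers = [np.ones((4, 4)) for _ in range(num_layers)]

merged = [
    merge_layer(a, b, i, num_layers)
    for i, (a, b) in enumerate(zip(a_layers, b_layers))
]
```

With this schedule the first merged layer equals model A's tensor, the last equals model B's, and the middle layers are proportional mixtures of the two.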

Features

On-demand Deployments

Docs

On-demand deployments allow you to run gryphe/mythomax-l2-13b on dedicated GPUs with a high-performance serving stack, offering high reliability and no rate limits.
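A request to such a deployment might be built as follows. This is a minimal sketch assuming an OpenAI-compatible completions payload; the field names, sampling settings, and the idea of clamping to the model's Max Output limit (3200 tokens) are illustrative assumptions — consult your provider's API documentation for the exact endpoint and schema.

```python
import json

MAX_OUTPUT_TOKENS = 3200  # model's Max Output limit

def build_request(prompt, max_tokens=512):
    # Clamp the requested completion length to the model's output limit
    # so the deployment does not reject over-long requests.
    max_tokens = min(max_tokens, MAX_OUTPUT_TOKENS)
    return json.dumps({
        "model": "gryphe/mythomax-l2-13b",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.8,  # arbitrary example value
    })

# Requesting more tokens than the limit is silently clamped to 3200.
payload = build_request("Write a short story about a dragon.", max_tokens=5000)
```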

Info

Provider: Gryphe
Quantization: fp16

Supported Functionality

Context Length: 4096
Max Output: 3200
Serverless: Not supported
Input Capabilities: text
Output Capabilities: text