The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that excels at both, confirming my theory. (More details to be released at a later time.)
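To illustrate the idea, here is a minimal sketch of a layer-wise gradient merge between two same-architecture checkpoints, blending from one model at the input layers toward the other at the output layers. The blend ratios, layer-name parsing, and file paths are assumptions for illustration, not the exact MythoMax recipe.

```python
# Illustrative sketch only: per-layer interpolation of two Llama-2-13b-style
# state dicts, weighting model A near the input side and model B near the
# output side. Not the author's exact merge script.
import torch


def gradient_merge(state_a: dict, state_b: dict, num_layers: int) -> dict:
    """Blend each tensor of two checkpoints, shifting weight from A to B
    as the layer index increases."""
    merged = {}
    for name, tensor_a in state_a.items():
        tensor_b = state_b[name]
        if ".layers." in name:
            # e.g. "model.layers.17.self_attn.q_proj.weight" -> layer index 17
            layer_idx = int(name.split(".layers.")[1].split(".")[0])
            alpha = layer_idx / max(num_layers - 1, 1)  # 0 at input, 1 at output
        else:
            alpha = 0.5  # embeddings, norms, head: plain average (assumption)
        merged[name] = (1.0 - alpha) * tensor_a + alpha * tensor_b
    return merged


# Hypothetical usage with locally downloaded checkpoints:
# state_a = torch.load("mythologic-l2-13b/pytorch_model.bin", map_location="cpu")
# state_b = torch.load("huginn-13b/pytorch_model.bin", map_location="cpu")
# merged = gradient_merge(state_a, state_b, num_layers=40)
```

In practice the actual merge assigns different ratios to individual tensors within a layer (attention vs. MLP, for example) rather than a single per-layer alpha, which is what the "each layer is composed of several tensors" remark refers to.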
Features
On-demand Deployments
On-demand deployments allow you to use gryphe/mythomax-l2-13b on dedicated GPUs with a high-performance serving stack, offering high reliability and no rate limits.
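Below is a minimal sketch of querying the deployed model through an OpenAI-compatible chat completions endpoint. The base URL, API-key environment variable, and request parameters are placeholders; check the provider's own docs for the exact deployment API.

```python
# Sketch only: calling gryphe/mythomax-l2-13b via an OpenAI-compatible API.
# The base_url and API key variable below are hypothetical placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # replace with the provider's endpoint
    api_key=os.environ["PROVIDER_API_KEY"],          # replace with your key variable
)

response = client.chat.completions.create(
    model="gryphe/mythomax-l2-13b",
    messages=[
        {"role": "user", "content": "Write a short scene set in a rainy city."}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```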