Axolotl

Accelerate AI Training with Axolotl on Novita AI

Framework
axolotl-ai-cloud · 2024-08-29 · GitHub


Run Axolotl on Novita AI

GitHub List: Novita AI Templates Catalogue

What is Axolotl?

Axolotl is an open-source tool designed to streamline the fine-tuning of AI models, supporting multiple configurations and architectures.

How does Axolotl Work?

Axolotl works by providing a comprehensive and flexible framework for fine-tuning AI models. It abstracts away much of the complexity involved in training, allowing researchers and developers to focus on experimentation and optimization. Users can leverage its features to efficiently train and deploy state-of-the-art models across different platforms and configurations.

What does Axolotl Support?

The chart below summarizes what Axolotl supports and gives a quick view of its functionality and flexibility.

![Chart of models and features supported by Axolotl](https://imagedelivery.net/GFvwKVAtCfKnMHdvDobR4A/998288f2-4275-4ac7-3eac-a32f069d5200/public)

Key Features and Benefits of Axolotl

Key Features

  • Multiple Model Support: Train various Hugging Face models, including llama, pythia, falcon, and mpt.

  • Fine-Tuning Methods: Support for diverse fine-tuning techniques such as fullfinetune, lora, qlora, relora, and gptq.

  • Customizable Configurations: Easily customize configurations using simple YAML files or CLI overrides (see the config sketch after this list).

  • Flexible Dataset Handling: Load different dataset formats, use custom formats, or bring your own tokenized datasets.

  • Integration for Enhanced Performance: Integrate with xformers, Flash Attention, RoPE scaling, and multipacking for enhanced performance.
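
To make the features above concrete, here is a minimal, illustrative YAML config for a LoRA fine-tune. It is a sketch in the style of the example configs shipped in the Axolotl repository; the model ID, dataset, and hyperparameters are placeholders, and the available keys can vary between Axolotl versions.

```yaml
# Illustrative LoRA config (a sketch, not a drop-in file; check the
# examples/ folder and docs of your Axolotl version for exact options).
base_model: openlm-research/open_llama_3b_v2   # placeholder Hugging Face model
load_in_8bit: true

adapter: lora            # fine-tuning method (lora, qlora, ...)
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj

datasets:
  - path: teknium/GPT4-LLM-Cleaned   # placeholder dataset
    type: alpaca

sequence_len: 2048
sample_packing: true     # multipacking for better throughput
flash_attention: true    # performance integration

micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
output_dir: ./outputs/lora-out
```

A config like this is trained with the same accelerate launch -m axolotl.cli.train command shown in the Quick Start section below.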

Benefits

  • Scalability: Axolotl allows for easy scaling of resources to accommodate varying workloads, making it suitable for both small and large-scale applications.

  • Cost Efficiency: By leveraging GPU resources in the cloud, users can optimize costs by only paying for what they use, avoiding the need for expensive on-premises hardware.

  • Performance: Axolotl is designed to maximize the performance of GPU resources, enabling faster processing times for data-intensive tasks such as machine learning and simulations.

  • Flexibility: Users can easily configure and customize their environments to meet specific needs, allowing for a wide range of applications from research to production.

Who will Use Axolotl?

  • AI Researchers and Data Scientists: Individuals who are focused on developing, fine-tuning, and experimenting with large-scale machine learning models. Axolotl provides them with the flexibility and tools needed to efficiently train and optimize these models.

  • Machine Learning Engineers: Professionals who need a scalable and efficient solution to train and deploy machine learning models. Axolotl's support for distributed training and acceleration makes it ideal for engineers looking to streamline their workflow.

  • Startups and Tech Companies: Organizations that are building AI-driven products and services. Axolotl helps these companies accelerate their AI development process, reduce costs, and improve the performance of their AI models by leveraging advanced training techniques.

  • Open-Source Contributors: Developers and AI enthusiasts who are interested in contributing to an open-source AI training framework. Axolotl's open-source nature encourages collaboration and innovation within the community.

  • Academic Institutions: Universities and research institutions that require a robust and adaptable platform for teaching and research in AI and machine learning. Axolotl's flexibility makes it a valuable resource for educational purposes.

How to Use Axolotl

Quick Start

  • Required Libraries: Install the necessary libraries (Python, preferably version 3.10 or higher; PyTorch 2.1.1 or higher) and dependencies as specified in the Axolotl documentation, then clone and install Axolotl:
```bash
git clone https://github.com/axolotl-ai-cloud/axolotl
cd axolotl

pip3 install packaging ninja
pip3 install -e '.[flash-attn,deepspeed]'
```
  • Usage:
```bash
# preprocess datasets - optional but recommended
CUDA_VISIBLE_DEVICES="" python -m axolotl.cli.preprocess examples/openllama-3b/lora.yml

# finetune lora
accelerate launch -m axolotl.cli.train examples/openllama-3b/lora.yml

# inference
accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
    --lora_model_dir="./outputs/lora-out"

# gradio
accelerate launch -m axolotl.cli.inference examples/openllama-3b/lora.yml \
    --lora_model_dir="./outputs/lora-out" --gradio

# remote yaml files - the yaml config can be hosted on a public URL
# Note: the yaml config must directly link to the **raw** yaml
accelerate launch -m axolotl.cli.train https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/examples/openllama-3b/lora.yml
```

Advanced Tips

  • Considering Objectives: Before fine-tuning with Axolotl, define your objectives clearly. These could include enhancing performance on specific tasks, adapting to a domain, or customizing output style.

  • Cloud GPU: For cloud GPU providers that support Docker images, use [winglian/axolotl-cloud:main-latest](https://hub.docker.com/r/winglian/axolotl-cloud/tags).

  • Dataset: Dataset curation is a critical step that many guides and tutorials overlook. Axolotl supports various dataset formats, with JSONL recommended; the JSONL schema depends on the task and the prompt template. Alternatively, you can use a Hugging Face dataset with columns for each JSONL field. See the documentation for more information on different dataset formats, and the sketch after this list for a concrete example.
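
As an illustration of the dataset point above, here is a hedged sketch of a datasets section that loads a local JSONL file; the file path is hypothetical, and the exact field schema depends on the prompt template (type) you pick.

```yaml
# Sketch: load a local JSONL dataset in the "alpaca" format, where each
# line is a JSON object such as:
#   {"instruction": "...", "input": "...", "output": "..."}
datasets:
  - path: data/my_finetune_data.jsonl   # hypothetical local path
    type: alpaca                        # prompt template / field schema
dataset_prepared_path: last_run_prepared
val_set_size: 0.05                      # hold out 5% of rows for eval
```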

Run on Novita AI

Axolotl is available as a Novita AI template. Get started quickly with a simple setup; no installation is required. Focus on building and scaling your models effortlessly. Novita AI streamlines your AI workflow for optimal performance. Try our template today and harness the power of Axolotl on Novita AI!

Common Errors and Solutions

  • Error 1: The trainer stopped and hasn't progressed in several minutes.

  • Solution 1: This is usually an issue with the GPUs communicating with each other. See the [NCCL doc](nccl.qmd) in the Axolotl repository.

  • Error 2: Exitcode -7 while using deepspeed

  • Solution 2: Try upgrading DeepSpeed with pip install -U deepspeed.

  • Error 3: AttributeError: 'DummyOptim' object has no attribute 'step'

  • Solution 3: You may be using DeepSpeed with a single GPU. Don't set deepspeed: in the YAML config or on the CLI.

For the full debugging guide, you can watch this YouTube video.

FAQs

What types of AI model fine-tuning methods does Axolotl support?

Axolotl supports various fine-tuning methods including fullfinetune, lora, qlora, relora, and gptq.

How can users customize configurations when fine-tuning with Axolotl?

Users can customize configurations through simple YAML files or command-line interface (CLI) overrides.

Does Axolotl support running in different hardware environments?

Yes, Axolotl supports running on single or multiple GPUs and can be easily run locally or in the cloud through Docker.

What dataset formats does Axolotl support?

Axolotl supports various dataset formats including JSONL. It also supports using custom formats or bringing your own tokenized datasets.

What advanced features or integrations does Axolotl offer?

Axolotl integrates with xformers, Flash Attention, RoPE scaling, and multipacking, and supports multi-GPU training through FSDP or DeepSpeed.
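
As a rough sketch of the multi-GPU side, DeepSpeed is enabled by pointing the config at a DeepSpeed JSON file; the path below assumes the deepspeed_configs/ folder that ships with the repository, so verify it against your checkout and version. Training is then launched with accelerate launch -m axolotl.cli.train as in the Quick Start.

```yaml
# Sketch: enable DeepSpeed ZeRO stage 2 for multi-GPU training
# (path assumed from the repository's deepspeed_configs/ folder).
deepspeed: deepspeed_configs/zero2.json
```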

How to perform data preprocessing in Axolotl?

Data preprocessing can be done with the command python -m axolotl.cli.preprocess along with a YAML configuration file.

Does Axolotl support recording the training process and results?

Yes, Axolotl supports logging results and optional checkpoints to wandb or mlflow.
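
Logging is also configured in the YAML. Below is a minimal sketch for Weights & Biases, assuming the wandb_* keys used by recent Axolotl versions; the project and run names are placeholders.

```yaml
# Sketch: report training metrics to Weights & Biases.
wandb_project: my-axolotl-runs   # placeholder project name
wandb_name: openllama-3b-lora    # placeholder run name
wandb_log_model:                 # optionally set to upload checkpoints
```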

License

Apache-2.0

View on GitHub

Source Site: https://github.com/axolotl-ai-cloud/axolotl

Excellent Collaboration Opportunity with Novita AI

We are dedicated to providing collaboration opportunities for developers.


Get in Touch:


Novita AI is the All-in-one cloud platform that empowers your AI ambitions. Integrated APIs, serverless, GPU Instance — the cost-effective tools you need. Eliminate infrastructure, start free, and make your AI vision a reality.

Other Recommended Templates

Stable Diffusion v1.8.0

Unlock the power of creativity with Stable Diffusion v1.8.0

View more

PyTorch v2.2.1

Elevate Your AI Models with PyTorch v2.2.1 on Novita AI

View more

TensorFlow 2.7.0

Effortless AI and ML workflow with TensorFlow 2.7.0 on Novita AI.

View more

Ollama Open WebUI

Streamline Your AI Workflows with Ollama Open WebUI

View more

Meta Llama 3.1 8B Instruct

Accelerate AI Innovation with Meta Llama 3.1 8B Instruct, Powered by Novita AI

View more
Join Our Community

Join Discord to connect with other users and share your experiences. Provide feedback on any issues, and suggest new templates you'd like to see added.

Join Discord