LLaMA-Factory
Author: hiyouga
Image: hiyouga/llamafactory:latest
Updated: 28 May 2025
README
LLaMA Factory is a simple, efficient platform for training and fine-tuning Large Language Models (LLMs). With LLaMA Factory you can fine-tune hundreds of pretrained models locally without writing a single line of code. Key features include:
- Supported model families: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Yi, Gemma, Baichuan, ChatGLM, Phi, and many more.
- Training algorithms: (incremental) pre-training, (multimodal) supervised instruction fine-tuning, reward-model training, PPO, DPO, KTO, ORPO, etc. (a minimal LoRA SFT run is sketched after this list).
- Fine-tuning methods: 16-bit full-parameter fine-tuning, frozen (partial-parameter) tuning, LoRA tuning, and 2/3/4/5/6/8-bit QLoRA tuning based on AQLM, AWQ, GPTQ, LLM.int8, HQQ, or EETQ.
- Optimization techniques: GaLore, BAdam, DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA.
- Acceleration kernels: FlashAttention-2 and Unsloth.
- Inference engines: Transformers and vLLM.
- Experiment tracking: LlamaBoard, TensorBoard, Weights & Biases, MLflow, SwanLab, and more.
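
To make the list above concrete, here is a minimal sketch of how a LoRA SFT run is typically driven from the command line. The YAML keys follow the style of LLaMA-Factory's example configs; the model, dataset, and hyperparameter values are illustrative placeholders, not recommendations.

```bash
# Sketch: a minimal LoRA SFT run driven by the bundled CLI.
# The YAML keys mirror LLaMA-Factory's example configs; the model, dataset,
# and hyperparameters below are placeholders, not recommended values.
cat > lora_sft.yaml <<'EOF'
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft                    # supervised fine-tuning
do_train: true
finetuning_type: lora         # train LoRA adapters instead of full weights
lora_target: all              # attach adapters to all linear layers
dataset: alpaca_en_demo       # small demo dataset shipped with the project
template: llama3
cutoff_len: 1024
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
bf16: true
EOF

llamafactory-cli train lora_sft.yaml
```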
With this template, you can start the WebUI immediately and fine-tune models through an interactive interface, with no coding required.
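
For reference, the same WebUI (LlamaBoard) can also be started by hand inside the container. A minimal sketch, assuming the image exposes the llamafactory-cli entry point and uses Gradio's standard environment variables:

```bash
# Sketch: start the LlamaBoard WebUI manually (assumes the llamafactory-cli
# entry point is available in the image; host/port values are illustrative).
export GRADIO_SERVER_NAME=0.0.0.0   # listen on all interfaces so the port can be forwarded
export GRADIO_SERVER_PORT=7860      # Gradio's default port
llamafactory-cli webui
```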