Hugging Face TRL Projects

Hugging Face TRL

TRL (Transformer Reinforcement Learning) simplifies post-training of language models with reinforcement-learning and alignment methods such as SFT, DPO, and PPO.

TRL is a full-stack library for post-training foundation models (LLMs), built directly on the Hugging Face `transformers` ecosystem. It provides a suite of dedicated trainers: `SFTTrainer` for Supervised Fine-Tuning, `RewardTrainer` for preference modeling, and `DPOTrainer` or `PPOTrainer` for Reinforcement Learning (RL) alignment methods. The library is engineered for efficiency and scale: it integrates with `PEFT` (Parameter-Efficient Fine-Tuning) for memory-conscious training (LoRA/QLoRA) and leverages `Accelerate` to scale training from a single GPU to multi-node clusters.
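To make the DPO objective that `DPOTrainer` optimizes concrete, here is a toy pure-Python sketch of the per-pair loss over scalar log-probabilities. This is illustrative math only, not TRL's API; the function name and signature are invented for the example.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Direct Preference Optimization loss for one preference pair.

    loss = -log sigmoid(beta * ((log pi(y_w) - log pi_ref(y_w))
                                - (log pi(y_l) - log pi_ref(y_l))))
    where y_w is the preferred (chosen) and y_l the rejected response.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)) == log(1 + exp(-margin)), computed stably
    return math.log1p(math.exp(-margin))

# When the policy has not moved from the reference, the margin is 0
# and the loss is log(2):
print(dpo_loss(-10.0, -12.0, -10.0, -12.0))  # → 0.6931...
```

Training drives the loss below log(2) by widening the policy's log-probability margin between chosen and rejected responses relative to the frozen reference model; `beta` controls how strongly that divergence is rewarded.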

https://github.com/huggingface/trl

