agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible, and scalable framework for reinforcement fine-tuning (RFT) of large language models (LLMs).
It lets developers fine-tune an existing LLM with reinforcement learning: you provide the model and define an environment for it to interact with, and the framework trains the model to perform better on specific tasks. It is aimed at AI developers, machine learning engineers, and researchers who want to improve their LLMs' performance.
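As a mental model of that loop, consider the toy sketch below: an environment poses tasks and scores responses, and a policy-gradient update reinforces rewarded behavior. Everything here is a hypothetical illustration of the idea, not Trinity-RFT's actual API.

import math, random

class ToyEnv:
    """Poses a task and scores the model's response (hypothetical)."""
    def reset(self):
        return "Solve: 17 * 24 = ?"
    def reward(self, response):
        return 1.0 if "408" in response else 0.0

class ToyPolicy:
    """Stand-in for an LLM: a learnable preference over canned responses."""
    def __init__(self):
        self.responses = ["408", "I am not sure"]
        self.logits = [0.0, 0.0]
    def probs(self):
        exps = [math.exp(v) for v in self.logits]
        z = sum(exps)
        return [e / z for e in exps]

env, policy, lr = ToyEnv(), ToyPolicy(), 0.5
for _ in range(200):
    env.reset()                                        # environment poses a task
    p = policy.probs()
    i = random.choices(range(len(p)), weights=p)[0]    # rollout: model "responds"
    r = env.reward(policy.responses[i])                # environment scores it
    # REINFORCE update: d(log p_i)/d(logit_j) = (1[j == i] - p_j)
    for j in range(len(p)):
        policy.logits[j] += lr * r * ((1.0 if j == i else 0.0) - p[j])

print(policy.probs())  # probability mass shifts toward the rewarded response "408"

A real RFT framework replaces the canned responses with LLM generation and the hand-written update with an RL algorithm such as PPO, but the environment-rollout-reward-update structure is the same.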
557 stars. Actively maintained with 9 commits in the last 30 days. Available on PyPI.
Use this if you need to significantly improve an LLM's or agent's performance on a particular domain or task beyond what general pre-training provides.
Not ideal if you're looking for a plug-and-play solution that requires no deep understanding of reinforcement learning or model fine-tuning.
Stars: 557
Forks: 55
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 11, 2026
Commits (30d): 9
Dependencies: 24
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/agentscope-ai/Trinity-RFT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
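For scripted access, the same endpoint can be called from Python. Below is a minimal sketch using the requests library; the JSON response format and the X-API-Key header name for the optional free key are assumptions, not confirmed by this page.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/agentscope-ai/Trinity-RFT")

# No key is needed for up to 100 requests/day. The commented-out header
# name for the optional free key is a guess; check the service's docs.
headers = {}  # e.g. {"X-API-Key": "your-key"}  <- hypothetical header name

resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())  # assuming the endpoint returns JSON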
Related repositories
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: a from-scratch journey into how LLMs and RLHF really work.
PKU-Alignment/align-anything
Align Anything: Training All-modality Model with Feedback