NVlabs/NFT
Implementation of Negative-aware Finetuning (NFT) algorithm for "Bridging Supervised Learning and Reinforcement Learning in Math Reasoning"
This project helps large language models (LLMs) solve math problems more reliably. It finetunes a base LLM on math questions paired with both correct and incorrect answers, producing a more accurate math-reasoning model. It is aimed at AI researchers and developers building or evaluating LLMs for tasks that require strong mathematical capabilities.
No commits in the last 6 months.
Use this if you need to significantly improve a large language model's math-reasoning ability with a supervised-learning-style finetuning approach that also learns from incorrect (negative) answers.
Not ideal if your goal is to train an LLM for creative writing or general knowledge tasks, as this is specifically tailored for mathematical reasoning.
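To make "learning from correct and incorrect answers" concrete, here is a heavily simplified toy sketch of a negative-aware objective: correct answers are trained with a standard likelihood term, while incorrect answers are down-weighted and pushed toward lower likelihood. This is a hypothetical illustration, not the paper's actual NFT loss (the function name, `neg_weight` parameter, and per-sample averaging are all assumptions made for this sketch).

```python
def negative_aware_loss(token_logps, is_correct, neg_weight=0.5):
    """Toy negative-aware loss over a batch of answers.

    token_logps: list of per-answer lists of token log-probabilities
                 under the model being finetuned.
    is_correct:  list of booleans, one per answer.
    neg_weight:  how strongly incorrect answers are penalized
                 (hypothetical knob, not from the paper).
    """
    total = 0.0
    for logps, correct in zip(token_logps, is_correct):
        avg_logp = sum(logps) / len(logps)  # mean token log-prob of this answer
        if correct:
            total += -avg_logp            # standard NLL: raise likelihood
        else:
            total += neg_weight * avg_logp  # negative term: lower likelihood
    return total / len(token_logps)


# Example: one correct answer, one incorrect answer.
loss = negative_aware_loss(
    token_logps=[[-1.0, -3.0], [-2.0]],
    is_correct=[True, False],
)
```

In a real training loop the log-probabilities would come from the model's forward pass and the loss would be backpropagated; the actual NFT objective in the paper is derived differently and should be taken from the repository itself.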
Stars: 71
Forks: 5
Language: Python
License: Apache-2.0
Category:
Last pushed: Sep 08, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NVlabs/NFT"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
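The curl command above can also be called from Python. The sketch below, using only the standard library, builds the same endpoint URL and fetches it; the endpoint path is taken from the curl example, but the response fields and the mechanism for supplying an API key are not documented on this page, so treat this as an assumption-laden example.

```python
import json
import urllib.request

# Base path copied from the page's curl example.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON response (shape undocumented here)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example (performs a live request; subject to the 100 requests/day limit):
# data = fetch_quality("NVlabs", "NFT")
```

How to pass a registered API key (header name or query parameter) is not shown on this page, so it is omitted here.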
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.