NVlabs/NFT

Implementation of the Negative-aware Finetuning (NFT) algorithm from "Bridging Supervised Learning and Reinforcement Learning in Math Reasoning"

Score: 35 / 100 (Emerging)

This project helps large language models (LLMs) get better at solving math problems. It finetunes a base LLM on math questions together with both correct and incorrect answers, producing a more accurate and reliable math-reasoning LLM. This is useful for AI researchers and developers who are building or evaluating LLMs for tasks requiring strong mathematical capabilities.

No commits in the last 6 months.

Use this if you need to significantly improve a large language model's ability to solve math reasoning problems using a supervised learning approach.

Not ideal if your goal is to train an LLM for creative writing or general knowledge tasks, as this is specifically tailored for mathematical reasoning.

AI research, math education technology, language model training, computational reasoning, machine learning engineering
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 9 / 25
Maturity 15 / 25
Community 9 / 25


Stars: 71
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: Sep 08, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NVlabs/NFT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
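The same report can be fetched programmatically. Below is a minimal Python sketch using only the standard library; the endpoint URL comes from the curl example above, but the JSON field names (`repo`, `score`, `label`) are assumptions about the response shape, not a documented schema.

```python
import json
import urllib.request

# Endpoint from the curl example above (no API key needed for 100 requests/day).
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/NVlabs/NFT"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality report and decode it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize(report: dict) -> str:
    """One-line summary; the key names here are a hypothetical response shape."""
    return f"{report['repo']}: {report['score']}/100 ({report['label']})"

if __name__ == "__main__":
    # Offline example using an assumed response shape (values from this page):
    sample = {"repo": "NVlabs/NFT", "score": 35, "label": "Emerging"}
    print(summarize(sample))
```

If the real response uses different field names, only `summarize` needs adjusting; `fetch_quality` is shape-agnostic.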