sastpg/RFTT

RFTT: Reasoning with Reinforced Functional Token Tuning

Overall score: 25 / 100 (Experimental)

This project helps machine learning engineers and researchers improve how large language models (LLMs) reason through complex problems, especially in mathematics. It takes an existing LLM and trains it to emit special functional tokens that mark discrete thinking steps (for example, analyzing the problem or verifying an intermediate result) so the model constructs its own structured solutions. The output is a more capable LLM that achieves higher accuracy on reasoning tasks without elaborate prompting.
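To make the idea concrete, here is a minimal sketch of how functional "thinking step" tokens could be registered with a Hugging Face tokenizer and model before fine-tuning. The base model name, the token names, and the training recipe are illustrative assumptions, not details taken from this repository.

```python
# Minimal sketch: adding functional "thinking step" tokens to a model vocabulary.
# Token names and the base model below are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"  # placeholder base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Hypothetical functional tokens marking distinct reasoning behaviors.
functional_tokens = ["<analyze>", "<verify>", "<refine>"]
tokenizer.add_special_tokens({"additional_special_tokens": functional_tokens})

# Grow the embedding matrix so the new tokens get trainable embeddings;
# supervised and reinforcement fine-tuning would then teach the model
# when to emit them while constructing a solution.
model.resize_token_embeddings(len(tokenizer))
```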

Use this if you are a machine learning engineer or researcher looking to significantly boost the problem-solving and reasoning abilities of large language models on complex, multi-step tasks like advanced mathematics.

Not ideal if you are looking for an off-the-shelf solution for general text generation or if you don't have experience fine-tuning and training large language models.

Large Language Models · AI Research · Model Fine-tuning · Reasoning Systems · Machine Learning Engineering
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 0 / 25
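The four 25-point components appear to sum to the overall score: 10 + 7 + 8 + 0 = 25 out of 100.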


Stars: 29
Forks:
Language: Python
License: None
Last pushed: Feb 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sastpg/RFTT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
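For programmatic use, here is a minimal Python sketch of the same request. It assumes the endpoint returns JSON; the response schema is not documented here, so the payload is simply printed for inspection.

```python
# Minimal sketch: fetching the same quality data in Python.
# Assumes the endpoint returns JSON; field names in the response
# are not documented here, so the raw payload is printed.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/sastpg/RFTT"
response = requests.get(url, timeout=10)
response.raise_for_status()

data = response.json()
print(data)  # inspect the payload to see the actual field names
```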