sastpg/RFTT
RFTT: Reasoning with Reinforced Functional Token Tuning
This project helps machine learning engineers and researchers improve how large language models (LLMs) think through complex problems, especially in mathematics. It takes an existing LLM and, using reinforcement learning, tunes it to emit functional tokens that mark explicit 'thinking steps' as it works through a problem.
Use this if you are a machine learning engineer or researcher looking to significantly boost the problem-solving and reasoning abilities of large language models on complex, multi-step tasks like advanced mathematics.
Not ideal if you are looking for an off-the-shelf solution for general text generation or if you don't have experience fine-tuning and training large language models.
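As a rough illustration of the functional-token idea described above (a sketch only, not code from this repository; the token names, trace format, and reward shaping are hypothetical):

# Hypothetical functional tokens marking reasoning phases.
ANALYZE, VERIFY, ANSWER = "<analyze>", "<verify>", "<answer>"

def reward(trace: str, gold: str) -> float:
    # Score a generated trace: 1.0 for a correct final answer,
    # plus a small bonus if the functional tokens were actually used.
    final = trace.split(ANSWER)[-1].strip() if ANSWER in trace else ""
    correct = 1.0 if final == gold.strip() else 0.0
    bonus = 0.1 if ANALYZE in trace and VERIFY in trace else 0.0
    return correct + bonus

trace = f"{ANALYZE} 17 * 24 = 17 * 20 + 17 * 4 {VERIFY} 340 + 68 = 408 {ANSWER} 408"
print(reward(trace, "408"))  # 1.1

A reinforcement learning loop would sample such traces from the tuned model and use a reward of this kind as its training signal.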
Stars: 29
Forks: —
Language: Python
License: —
Category: —
Last pushed: Feb 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sastpg/RFTT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
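The same record can also be fetched programmatically; a minimal sketch using only the Python standard library, assuming the endpoint returns JSON (the response fields are not documented here):

import json
import urllib.request

# Same URL as the curl command above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/sastpg/RFTT"
with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))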
Higher-rated alternatives
cvs-health/uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM...
PRIME-RL/TTRL
[NeurIPS 2025] TTRL: Test-Time Reinforcement Learning
sapientinc/HRM
Hierarchical Reasoning Model Official Release
tigerchen52/query_level_uncertainty
query-level uncertainty in LLMs
reasoning-survey/Awesome-Reasoning-Foundation-Models
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models