jeffasante/latent-reasoning-transformer
Implemented a recurrent-depth LLM (PyTorch) based on arXiv:2502.05171. Demonstrated that scaling inference compute increased arithmetic reasoning accuracy from 8% to 100% without additional parameters.
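The core idea from arXiv:2502.05171 is a weight-tied block applied repeatedly at inference time, so more loop iterations buy more compute without adding parameters. A toy NumPy sketch of that recurrence (not the repo's actual implementation; the input-injection detail and tanh nonlinearity here are illustrative assumptions):

```python
import numpy as np

def recurrent_depth_forward(x, W, r):
    # Apply the same weight-tied block r times (latent recurrence).
    # More iterations = more inference compute, zero extra parameters.
    # The original input x is re-injected each step (illustrative choice).
    h = x
    for _ in range(r):
        h = np.tanh(h @ W + x)
    return h

# Deeper recurrence refines the latent state with the same weights:
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))
W = 0.1 * rng.standard_normal((4, 4))
shallow = recurrent_depth_forward(x, W, r=1)
deep = recurrent_depth_forward(x, W, r=8)
```

Varying `r` at test time is how the repo trades inference compute for accuracy.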
Stars: 1
Forks: —
Language: Jupyter Notebook
License: —
Category:
Last pushed: Nov 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jeffasante/latent-reasoning-transformer"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
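The same endpoint can be called from Python with the standard library. A minimal sketch, assuming the endpoint returns JSON (the response schema and any API-key header are not documented here, so the anonymous tier is used):

```python
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository endpoint URL.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Anonymous access: 100 requests/day. JSON response is an assumption.
    req = Request(quality_url(owner, repo),
                  headers={"Accept": "application/json"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("jeffasante", "latent-reasoning-transformer")` issues the same request as the `curl` command above.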
Higher-rated alternatives
cvs-health/uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM...
PRIME-RL/TTRL
[NeurIPS 2025] TTRL: Test-Time Reinforcement Learning
sapientinc/HRM
Hierarchical Reasoning Model Official Release
tigerchen52/query_level_uncertainty
query-level uncertainty in LLMs
reasoning-survey/Awesome-Reasoning-Foundation-Models
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models