Lanerra/reasoning-bank-slm

An experiment that applies Google Research's `ReasoningBank` technique to Small Language Models. The goal is to show that the gains reported in the ReasoningBank paper also apply to much smaller, less capable models.

Score: 33 / 100 (Emerging)

This project helps AI developers and researchers make small language models smarter at complex reasoning tasks like solving math problems. It does this by giving the model a 'memory' where it stores successful and unsuccessful problem-solving strategies. When faced with a new problem, the model retrieves relevant strategies from its memory to guide its decision-making, leading to improved performance with less computational cost.
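The store-and-retrieve loop described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: the class and field names are assumptions, and naive word overlap stands in for the embedding-based similarity a real system would use.

```python
# Minimal sketch of a ReasoningBank-style memory loop (assumed structure,
# not the official implementation).
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    title: str        # short strategy name
    description: str  # when the strategy applies
    content: str      # distilled lesson, from a success or a failure

@dataclass
class ReasoningBank:
    items: list = field(default_factory=list)

    def add(self, item: MemoryItem) -> None:
        self.items.append(item)

    def retrieve(self, query: str, k: int = 2) -> list:
        """Rank stored strategies by word overlap with the query.
        A real system would use embedding similarity instead."""
        q = set(query.lower().split())
        scored = [
            (len(q & set((it.title + " " + it.description).lower().split())), it)
            for it in self.items
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [it for score, it in scored[:k] if score > 0]

bank = ReasoningBank()
bank.add(MemoryItem(
    title="decompose word problems",
    description="multi-step arithmetic word problems",
    content="Break the problem into sub-questions and solve each before combining.",
))
bank.add(MemoryItem(
    title="avoid premature rounding",
    description="division in intermediate arithmetic steps",
    content="Failure lesson: keep fractions exact until the final answer.",
))

# Retrieved strategies are prepended to the model's prompt as guidance.
query = "a multi-step word problem about sharing apples"
hints = bank.retrieve(query)
prompt = "\n".join(h.content for h in hints) + "\nProblem: " + query
```

After each attempt, the model's own successful and failed trajectories would be distilled into new `MemoryItem`s and added back to the bank, which is what makes the approach self-improving.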

No commits in the last 6 months.

Use this if you are a developer or researcher working with small language models (under 4 billion parameters) and need to improve their reasoning capabilities without scaling up model size.

Not ideal if you are working with very large language models or are not interested in memory-based self-improvement techniques.

small-language-models · AI-research · model-optimization · reasoning-systems · machine-learning-engineering

No License · Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 9 / 25
Maturity: 7 / 25
Community: 15 / 25

Stars: 99
Forks: 13
Language: Python
License: None
Last pushed: Oct 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Lanerra/reasoning-bank-slm"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.