mohammadtavakoli78/BEAM
[ICLR 2026] Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs
This project evaluates and improves how well large language models (LLMs) remember information over very long conversations, up to ten million tokens. It provides a benchmark of diverse, lengthy dialogues paired with targeted questions that probe an LLM's memory, plus a framework for enhancing recall of past information. AI researchers and developers working on conversational AI systems can use it to build more robust, context-aware LLMs.
Use this if you are developing or evaluating large language models and need to rigorously test and improve their long-term memory across extended, multi-turn conversations in various domains.
Not ideal if you are a general user looking for an out-of-the-box conversational AI, or if you only work with short, simple interactions that don't require extensive context retention.
Stars
19
Forks
3
Language
Python
License
MIT
Category
Last pushed
Feb 02, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mohammadtavakoli78/BEAM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
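Beyond the curl one-liner above, the same endpoint can be called from Python. This is a minimal sketch: the URL path shape (`/{ecosystem}/{owner}/{name}`) is taken from the curl example on this page, but the JSON response schema is not documented here, so `fetch_quality` simply returns the decoded payload without assuming any fields. The `fetch_quality` helper name is my own, not part of the API.

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, name: str) -> str:
    """Build the per-repo quality URL: BASE/{ecosystem}/{owner}/{name}."""
    return f"{BASE}/{ecosystem}/{owner}/{name}"

def fetch_quality(ecosystem: str, owner: str, name: str) -> dict:
    """Fetch and decode the JSON payload (schema not documented here)."""
    with urlopen(quality_url(ecosystem, owner, name), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Reproduces the curl example for this repo.
    print(quality_url("transformers", "mohammadtavakoli78", "BEAM"))
```

Note the no-key tier allows 100 requests/day, so cache responses locally rather than polling.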
Related models
vcanchik/robotmem
Robot memory
roychowdhuryresearch/gsw-memory
Long term Structured Memory for Large Language Models
gs-ai/mlm-memory
A functionally operational, mathematically unhinged system for achieving 10× effective memory...
ryanlingo/dynamic-context-evolution
Dynamic Context Evolution (DCE): Scalable synthetic data generation from a single LLM without...