mohammadtavakoli78/BEAM

[ICLR 2026] Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs

Score: 41 / 100 · Emerging

This project helps evaluate and improve how well large language models (LLMs) remember information over very long conversations, up to ten million tokens. It provides a benchmark of diverse, lengthy dialogues with specific questions to test an LLM's memory, and a framework to enhance an LLM's ability to recall past information. AI researchers and developers working on conversational AI systems would use this to build more robust and context-aware LLMs.

Use this if you are developing or evaluating large language models and need to rigorously test and improve their long-term memory across extended, multi-turn conversations in various domains.

Not ideal if you are a general user looking for an out-of-the-box conversational AI, or if you only work with short, simple interactions that don't require extensive context retention.

conversational-ai large-language-models ai-evaluation long-context-understanding memory-augmented-ai
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 13 / 25
Community: 12 / 25


Stars: 19
Forks: 3
Language: Python
License: MIT
Last pushed: Feb 02, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mohammadtavakoli78/BEAM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
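For programmatic access from Python, the endpoint above can be wrapped in a small helper. This is a minimal sketch using only the standard library; the response field names (e.g. a top-level `"score"`) are assumptions, so inspect the raw JSON payload for the actual schema before relying on them.

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a repository."""
    return f"{API_BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality JSON for a repository.

    Note: field names in the decoded dict are not documented here;
    print the payload once to discover the real schema.
    """
    with urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)


# Example (makes a network request, subject to the 100 requests/day limit):
# data = fetch_quality("transformers", "mohammadtavakoli78", "BEAM")
# print(data)
```

The URL builder mirrors the path segments of the curl example, so the same helper works for any repository on the service.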