zjunlp/MemBase

A Comprehensive Benchmarking Framework for Long-Term Conversational Memory Layers

Score: 36 / 100 (Emerging)

This framework helps developers systematically compare and test different memory systems for long-running AI conversations. It takes conversational transcripts as input and evaluates how well different memory technologies build, retrieve, and use information to answer questions. AI engineers and researchers working on large language models would use this to ensure their conversational agents can accurately recall and use past interactions.

Use this if you are developing or integrating conversational AI agents and need to rigorously benchmark how different memory architectures impact their ability to maintain long-term context and answer questions accurately.

Not ideal if you are an end-user looking for a pre-built conversational AI or a general-purpose language model, as this is a technical benchmarking tool for AI developers.

conversational-ai llm-evaluation memory-management ai-benchmarking natural-language-processing
No package published · No dependents
Maintenance: 13 / 25
Adoption: 5 / 25
Maturity: 11 / 25
Community: 7 / 25

How are scores calculated?
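The overall score appears to be the sum of the four category subscores, each capped at 25. A minimal arithmetic check of that inference, using the numbers shown on this page:

```python
# Category subscores from the page (each out of 25; summation is an
# inference from the displayed numbers, not a documented formula)
subscores = {"Maintenance": 13, "Adoption": 5, "Maturity": 11, "Community": 7}

overall = sum(subscores.values())
print(overall)  # 36 — matches the 36/100 overall score above
```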

Stars: 11
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 25, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/zjunlp/MemBase"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
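The same data can be fetched from a script. A minimal Python sketch using only the standard library — the `{ecosystem}/{owner}/{repo}` URL pattern is inferred from the single endpoint shown above, and the response schema is not documented here, so treat both as assumptions:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-API URL (path layout inferred from the example above)."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("nlp", "zjunlp", "MemBase")
print(url)

# To actually fetch (no API key needed, up to 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)  # field names depend on the actual payload
```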