zjunlp/MemBase
A Comprehensive Benchmarking Framework for Long-Term Conversational Memory Layers
This framework helps developers systematically compare and test memory systems for long-running AI conversations. It takes conversational transcripts as input and evaluates how well each memory layer builds, retrieves, and uses stored information to answer questions. It is aimed at AI engineers and researchers working on large language models who need their conversational agents to recall and use past interactions accurately.
Use this if you are developing or integrating conversational AI agents and need to rigorously benchmark how different memory architectures impact their ability to maintain long-term context and answer questions accurately.
Not ideal if you are an end-user looking for a pre-built conversational AI or a general-purpose language model, as this is a technical benchmarking tool for AI developers.
Stars: 11
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 25, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/zjunlp/MemBase"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
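The endpoint above can also be called programmatically. Below is a minimal sketch in Python using only the standard library; it assumes the URL pattern shown in the curl command (`/api/v1/quality/<category>/<owner>/<repo>`) and makes no assumptions about the response schema, which is not documented here, so the parsed JSON is returned as-is. The function names are illustrative, not part of the API.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL following the pattern in the curl example.
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # One unauthenticated request; the free tier allows 100 requests/day.
    # The response schema is undocumented here, so we return the JSON as-is.
    with urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)

# Example: the endpoint for this repository.
url = quality_url("nlp", "zjunlp", "MemBase")
```

With an API key (1,000 requests/day), the request would need to carry the key, but the authentication mechanism (header vs. query parameter) is not documented here, so it is omitted from the sketch.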
Higher-rated alternatives
google/langfun
OO for LLMs
tanaos/artifex
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
preligens-lab/textnoisr
Adding random noise to a text dataset, and controlling very accurately the quality of the result
vulnerability-lookup/VulnTrain
A tool to generate datasets and models based on vulnerabilities descriptions from @Vulnerability-Lookup.
masakhane-io/masakhane-mt
Machine Translation for Africa