LightMem and EverMemOS
LightMem provides an efficient memory-augmented generation framework for individual LLM inference, while EverMemOS offers persistent long-term memory infrastructure that spans multiple agents and platforms. The two are complementary and could be combined, pairing LightMem's in-context memory with EverMemOS's persistent storage.
About LightMem
zjunlp/LightMem
[ICLR 2026] LightMem: Lightweight and Efficient Memory-Augmented Generation
LightMem is a lightweight and efficient memory-management framework for Large Language Models (LLMs) and AI agents. It lets these systems retain and reuse information across long interactions, overcoming the limitations of short-term (in-context) memory. Developers building intelligent applications can use it to give their AI systems long-term memory capabilities.
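To make the pattern concrete, here is a minimal, hypothetical sketch of the memory-augmented generation loop that frameworks like LightMem implement: store past interactions, retrieve the most relevant ones, and prepend them to the prompt. None of these class or function names come from the LightMem API; they are invented for illustration only.

```python
# Hypothetical illustration of memory-augmented generation.
# The names below are NOT the LightMem API; they only show the
# store -> retrieve -> augment loop such frameworks automate.

class MemoryStore:
    def __init__(self):
        self.entries = []  # list of (text, token set) pairs

    def add(self, text):
        self.entries.append((text, set(text.lower().split())))

    def retrieve(self, query, k=2):
        # Rank stored memories by naive keyword overlap with the query.
        q = set(query.lower().split())
        ranked = sorted(self.entries, key=lambda e: len(q & e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(store, user_message):
    # Prepend the most relevant memories so the model sees long-range context.
    memories = store.retrieve(user_message)
    context = "\n".join(f"[memory] {m}" for m in memories)
    return f"{context}\n[user] {user_message}"

store = MemoryStore()
store.add("The user's name is Ada and she prefers concise answers.")
store.add("The project deadline is Friday.")
print(build_prompt(store, "When is the project deadline?"))
```

A real framework would replace the keyword-overlap ranking with embedding similarity and add eviction or summarization policies; the control flow, however, follows this shape.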
About EverMemOS
EverMind-AI/EverMemOS
Long-term memory for your 24/7 OpenClaw agents across LLMs and platforms.
This project gives AI agents long-term memory, allowing them to remember past interactions and information across platforms and sessions. It takes conversations, documents, or observations as input and helps the agent recall relevant context later. It is aimed at developers building always-on, continuously learning AI assistants, virtual characters, or automated systems that must maintain context over time.
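The defining property here is persistence: memories survive process restarts, so a fresh session can recall what an earlier one stored. The sketch below illustrates that idea with a JSON file as the backing store; the class and method names are invented for this example and are not the EverMemOS API.

```python
# Hypothetical illustration of persistent, cross-session agent memory.
# These names are NOT the EverMemOS API; they only show why disk-backed
# storage lets a new session recall what an earlier one remembered.
import json
import os
import tempfile

class PersistentMemory:
    def __init__(self, path):
        self.path = path
        self.records = []
        if os.path.exists(path):  # reload memories from a previous session
            with open(path) as f:
                self.records = json.load(f)

    def remember(self, source, text):
        # `source` tags where the observation came from (chat, document, ...).
        self.records.append({"source": source, "text": text})
        with open(self.path, "w") as f:
            json.dump(self.records, f)

    def recall(self, keyword):
        return [r["text"] for r in self.records
                if keyword.lower() in r["text"].lower()]

path = os.path.join(tempfile.mkdtemp(), "memory.json")

# Session 1: the agent stores an observation, then the process ends.
PersistentMemory(path).remember(
    "chat", "User asked to be reminded about the demo on Tuesday.")

# Session 2: a fresh instance recalls it from disk.
fresh = PersistentMemory(path)
print(fresh.recall("demo"))
```

A production system would use a database and semantic retrieval rather than a flat JSON file and substring matching, but the session-spanning lifecycle is the same.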