LightMem and EverMemOS

LightMem provides an efficient memory-augmented generation framework for individual LLM inference, while EverMemOS offers persistent long-term memory infrastructure across multiple agents and platforms. This makes them complementary: LightMem's lightweight in-context memory could sit in front of EverMemOS's persistent storage.
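The complementary pattern described above can be sketched as follows. This is a minimal illustration only: `SessionMemory` and `PersistentStore` are hypothetical stand-ins, not the actual LightMem or EverMemOS APIs, which have their own interfaces.

```python
# Hypothetical sketch of pairing a lightweight session buffer with a
# persistent long-term store. Neither class is the real LightMem or
# EverMemOS API; names and methods are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class PersistentStore:
    """Stand-in for an EverMemOS-style long-term store (hypothetical)."""
    records: list = field(default_factory=list)

    def save(self, text: str) -> None:
        self.records.append(text)

    def recall(self, keyword: str) -> list:
        # Naive keyword recall; a real system would use semantic retrieval.
        return [r for r in self.records if keyword.lower() in r.lower()]


@dataclass
class SessionMemory:
    """Stand-in for a LightMem-style in-context buffer (hypothetical)."""
    store: PersistentStore
    window: int = 3
    buffer: list = field(default_factory=list)

    def observe(self, text: str) -> None:
        self.buffer.append(text)
        self.store.save(text)                     # persist every turn long-term
        self.buffer = self.buffer[-self.window:]  # keep the prompt context small

    def build_context(self, query: str) -> str:
        # Prepend recalled long-term facts that fell out of the window.
        recalled = [r for r in self.store.recall(query) if r not in self.buffer]
        return "\n".join(recalled + self.buffer)


store = PersistentStore()
mem = SessionMemory(store)
for turn in ["User likes OCaml", "Weather is sunny",
             "Deadline is Friday", "Lunch at noon"]:
    mem.observe(turn)

# The recalled long-term fact precedes the recent three-turn window.
print(mem.build_context("OCaml"))
```

The design point is the division of labor: the session buffer keeps prompts short, while the persistent store makes facts that scrolled out of the window recoverable across sessions.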

LightMem: 67 (Established)
  Maintenance 17/25 · Adoption 10/25 · Maturity 24/25 · Community 16/25
  Stars: 677 · Forks: 58 · Downloads: · Commits (30d): 6
  Language: Python · License: MIT
  No dependents

EverMemOS: 61 (Established)
  Maintenance 17/25 · Adoption 10/25 · Maturity 13/25 · Community 21/25
  Stars: 2,570 · Forks: 283 · Downloads: · Commits (30d): 11
  Language: Python · License: Apache-2.0
  No package · No dependents

About LightMem

zjunlp/LightMem

[ICLR 2026] LightMem: Lightweight and Efficient Memory-Augmented Generation

LightMem is a lightweight, efficient memory-management framework for Large Language Models (LLMs) and AI agents. It lets these systems retain and reuse information across long interactions, overcoming the limits of short-term context. Developers building intelligent applications can use it to give their AI systems long-term memory capabilities.

Tags: AI development, Large Language Models, AI agents, memory management, application development

About EverMemOS

EverMind-AI/EverMemOS

Long-term memory for your 24/7 OpenClaw agents across LLMs and platforms.

EverMemOS provides long-term memory for AI agents, allowing them to remember past interactions and information across platforms and sessions. It ingests conversations, documents, or observations and helps the agent recall relevant context. It is well suited to developers building always-on, continuously learning AI assistants, virtual characters, or automated systems that must maintain context over time.

Tags: AI agent development, conversational AI, virtual assistants, contextual AI, persistent memory

Scores updated daily from GitHub, PyPI, and npm data.