ashishpatel26/omnicache-ai
Unified multi-layer caching library for AI/agent pipelines — LangChain, LangGraph, AutoGen, CrewAI, Agno, A2A
This library helps developers of AI agent applications cut costs and latency by intelligently storing and reusing previous responses from large language models (LLMs), embedding calls, and retrieval queries. Anyone building or operating AI applications that interact with LLMs and external data sources will find it useful.
Available on PyPI.
Use this if you are developing AI agent applications and want to avoid paying repeatedly for the same LLM calls or data lookups.
Not ideal if your application always requires fresh, real-time responses and cannot tolerate cached data, or if you are not building an AI agent pipeline.
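The core idea is straightforward: key each expensive call by a hash of its inputs and serve repeats from storage. The sketch below illustrates that idea in plain Python; it is not omnicache-ai's actual API (the class and function names here are hypothetical), just a minimal response cache of the kind the library generalizes across layers.

```python
import hashlib
import json

# Illustrative sketch only -- NOT omnicache-ai's API. A minimal in-memory
# cache keyed by a deterministic hash of the call's inputs.
class ResponseCache:
    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model, prompt):
        # Deterministic key over the inputs that define the call.
        payload = json.dumps({"model": model, "prompt": prompt}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_or_call(self, model, prompt, call_fn):
        key = self._key(model, prompt)
        if key in self._store:
            self.hits += 1          # reuse the stored response
            return self._store[key]
        self.misses += 1
        result = call_fn(model, prompt)  # only pay for the call on a miss
        self._store[key] = result
        return result

# fake_llm stands in for a paid LLM API call.
def fake_llm(model, prompt):
    return f"[{model}] answer to: {prompt}"

cache = ResponseCache()
a = cache.get_or_call("gpt-4o", "What is caching?", fake_llm)
b = cache.get_or_call("gpt-4o", "What is caching?", fake_llm)  # cache hit
```

A real multi-layer cache adds persistence, TTLs, and semantic (embedding-based) matching on top of this exact-match pattern, but the hit/miss accounting works the same way.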
Stars
15
Forks
3
Language
Python
License
MIT
Category
Last pushed
Mar 26, 2026
Commits (30d)
0
Dependencies
2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/ashishpatel26/omnicache-ai"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
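The curl call above can also be made from Python. A minimal sketch using only the standard library follows; the response's JSON field names are not documented here, so inspect the actual payload for the schema.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("ashishpatel26", "omnicache-ai")

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.loads(resp.read())
    print(data)
except OSError as exc:
    # No network, or the free tier's 100 requests/day limit was hit.
    print(f"request failed: {exc}")
```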
Related agents
thedotmack/claude-mem
A Claude Code plugin that automatically captures everything Claude does during your coding...
Open-Source-Legal/OpenContracts
Humans and AI agents, building knowledge bases together. Self-hosted document annotation,...
volcengine/MineContext
MineContext is your proactive context-aware AI partner (Context-Engineering + ChatGPT Pulse)
omega-memory/omega-memory
Persistent memory for AI coding agents
winstonkoh87/Athena-Public
The Linux OS for AI Agents — Persistent memory, autonomy, and time-awareness for any LLM. Own...