ashishpatel26/omnicache-ai

Unified multi-layer caching library for AI/agent pipelines — LangChain, LangGraph, AutoGen, CrewAI, Agno, A2A

Score: 51 / 100 (Established)

This tool helps developers of AI agent applications cut costs and improve response times by intelligently storing and reusing previous responses from large language models (LLMs), embeddings, and retrieval queries. Anyone building or operating AI applications that interact with LLMs and external data sources will find it useful.
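
As a concrete illustration of that store-and-reuse idea, here is a minimal sketch of an LLM response cache in plain Python. The function names and the cache interface are hypothetical, not taken from omnicache-ai's actual API; they only show the pattern the library automates.

import hashlib
import json

# Hypothetical in-memory cache keyed by a hash of the prompt and
# model parameters. Illustrative only; omnicache-ai's real API and
# storage layers may differ.
_cache = {}

def cache_key(model, prompt, **params):
    # Deterministic key: identical model + prompt + params always
    # hash to the same value, so repeat requests become cache hits.
    payload = json.dumps(
        {"model": model, "prompt": prompt, "params": params},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_completion(llm_call, model, prompt, **params):
    # Serve a stored response when one exists; otherwise call the
    # LLM once and remember the result for identical future requests.
    key = cache_key(model, prompt, **params)
    if key not in _cache:
        _cache[key] = llm_call(model=model, prompt=prompt, **params)
    return _cache[key]

The sketch keeps everything in one in-process dict; a multi-layer cache like the one this library advertises would put persistent or shared layers behind the same lookup.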

Available on PyPI.

Use this if you are developing AI agent applications and want to avoid paying repeatedly for identical LLM calls or data lookups.

Not ideal if your application always requires fresh, real-time responses and cannot tolerate cached data, or if you are not building an AI agent pipeline.

Tags: AI-application-development, LLM-ops, agent-frameworks, AI-cost-management, AI-performance-optimization
Maintenance: 13 / 25
Adoption: 6 / 25
Maturity: 18 / 25
Community: 14 / 25

Stars: 15
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 26, 2026
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/ashishpatel26/omnicache-ai"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
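
If you prefer Python to curl, the same endpoint can be queried with the requests library. The response schema is not documented in this listing, so the sketch simply pretty-prints whatever JSON the API returns.

import json
import requests

# Same endpoint as the curl example above. Anonymous access is
# limited to 100 requests/day.
url = ("https://pt-edge.onrender.com/api/v1/quality/agents/"
       "ashishpatel26/omnicache-ai")

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface 4xx/5xx errors, e.g. rate limiting

# The schema is undocumented here, so just print the raw JSON payload.
print(json.dumps(resp.json(), indent=2))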