sodiumsun/snackcache

Drop-in caching proxy for OpenAI and Anthropic APIs - stop paying for the same answer twice!

Quality score: 30 / 100 (Emerging)

This tool acts as an intermediary for your OpenAI and Anthropic API calls, helping you reduce costs and speed up responses. It takes your requests, checks if a similar one has been made before, and if so, provides the stored answer instead of calling the external API again. This is for developers building applications that use large language models and want to optimize their API spending and performance.

Use this if you are a developer using OpenAI or Anthropic APIs and want to save money and reduce latency by caching responses to similar queries.

Not ideal if your application generates completely unique prompts for every API call, as caching would provide minimal benefits.
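The core idea described above can be sketched in a few lines: key each request by a hash of its canonicalized body, and return the stored answer on a repeat. This is a minimal exact-match sketch only; snackcache's actual matching logic (for example, how it decides two queries are "similar") may differ, and `call_upstream` here is a hypothetical stand-in for the real API call.

```python
import hashlib
import json

_cache = {}

def cache_key(payload: dict) -> str:
    # Canonicalize the request so logically identical calls hash the same.
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()

def cached_completion(payload: dict, call_upstream) -> tuple[str, bool]:
    """Return (response, was_cached). call_upstream is the real API call."""
    key = cache_key(payload)
    if key in _cache:
        return _cache[key], True       # cache hit: no upstream charge
    response = call_upstream(payload)  # cache miss: pay for the call once
    _cache[key] = response
    return response, False

# Hypothetical upstream stand-in for demonstration.
first, hit1 = cached_completion({"model": "gpt-4o", "prompt": "hi"},
                                lambda p: "hello!")
again, hit2 = cached_completion({"model": "gpt-4o", "prompt": "hi"},
                                lambda p: "hello!")
```

The second call returns the stored answer without touching the upstream API, which is the cost and latency win the tool advertises.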

API-optimization LLM-development cost-management developer-tooling AI-application-development
No Package · No Dependents
Maintenance 6 / 25
Adoption 8 / 25
Maturity 13 / 25
Community 3 / 25


Stars: 43
Forks: 1
Language: Python
License: MIT
Last pushed: Jan 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/sodiumsun/snackcache"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
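The same endpoint can be called from Python. The sketch below only builds the request URL with the standard library; the path pattern (`/api/v1/quality/llm-tools/<owner>/<repo>`) is inferred from the single curl example above, so its generality to other repos is an assumption.

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    # URL-encode each path segment in case a name contains unusual characters.
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

# Same endpoint as the curl example above:
url = quality_url("sodiumsun", "snackcache")
```

From there, any HTTP client (e.g. `urllib.request.urlopen(url)`) can fetch the JSON within the documented rate limits.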