aqstack/mimir

mimir is a drop-in proxy that caches LLM API responses using semantic similarity, reducing costs and latency for repeated or similar queries.

Quality score: 53 / 100 (Established)

This tool helps developers reduce costs and improve response times for applications that use large language model (LLM) APIs such as OpenAI's. It acts as an intermediary: it intercepts your LLM prompts, and if it has already seen a sufficiently similar query, it returns the cached response instantly. Otherwise, it forwards the request to the LLM and caches the new response. It is aimed at software developers building or maintaining LLM-powered applications who want to optimize their API usage.
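The core idea behind a semantic cache can be sketched in a few lines. This is an illustrative toy, not mimir's implementation: it uses bag-of-words cosine similarity where a real semantic cache would use a learned embedding model, and the `SemanticCache` class, its `threshold` parameter, and the `embed`/`cosine` helpers are all hypothetical names for this sketch.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts. A real semantic
    # cache would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold          # similarity needed for a hit
        self.entries: list[tuple[Counter, str]] = []

    def lookup(self, prompt: str):
        # Return a cached response if any stored prompt is similar enough.
        q = embed(prompt)
        for vec, response in self.entries:
            if cosine(q, vec) >= self.threshold:
                return response             # cache hit: skip the LLM call
        return None                         # cache miss: caller hits the LLM

    def store(self, prompt: str, response: str):
        self.entries.append((embed(prompt), response))

cache = SemanticCache()
cache.store("what is the capital of France", "Paris")
print(cache.lookup("What is the capital of France?"))  # near-duplicate: hit
print(cache.lookup("explain quantum entanglement"))    # unrelated: None
```

The proxy wraps this lookup-then-forward logic around the upstream API: a hit returns immediately, a miss triggers a real API call followed by `store`. The `threshold` controls the precision/recall trade-off — too low and unrelated prompts share answers, too high and paraphrases miss the cache.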


Use this if your application frequently sends identical or near-identical prompts to an LLM API and you want lower costs and faster responses.

Not ideal if your application's LLM prompts are always unique and never repeat, or if you need a persistent cache that isn't cleared when the proxy restarts.

Tags: LLM-development, API-optimization, AI-application-engineering, cost-management, latency-reduction
No package · No dependents
Maintenance 6 / 25
Adoption 10 / 25
Maturity 13 / 25
Community 24 / 25


Stars: 150
Forks: 93
Language: Go
License: MIT
Category: llm-api-gateways
Last pushed: Dec 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/aqstack/mimir"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.