nathanielknight/llm-questioncache

An LLM plugin to efficiently pose questions to LLMs, cache the answers, and quickly retrieve answers to questions that you've already posed.

Score: 28 / 100 (Experimental)

This tool helps developers manage their interactions with Large Language Models (LLMs) efficiently. It takes a natural-language question as input, sends it to an LLM if the question is new, and retrieves the stored answer if it has been asked before. The output is a concise answer either way. It's designed for developers who frequently query LLMs and want to save time and reduce API costs.
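A minimal sketch of the ask-or-retrieve pattern described above, assuming a SQLite-backed cache; the file name, schema, key normalization, and ask_llm stub are illustrative assumptions, not this plugin's actual internals:

import hashlib
import sqlite3

DB_PATH = "questions.db"  # assumed cache location, for illustration only

def ask_llm(question: str) -> str:
    # Stand-in for a real model call (e.g., via the llm library or an HTTP API).
    return f"(model answer to: {question})"

def cached_answer(question: str) -> str:
    # Naive normalization so trivially restated questions hit the same entry.
    key = hashlib.sha256(question.strip().lower().encode("utf-8")).hexdigest()
    con = sqlite3.connect(DB_PATH)
    con.execute("CREATE TABLE IF NOT EXISTS answers (key TEXT PRIMARY KEY, answer TEXT)")
    row = con.execute("SELECT answer FROM answers WHERE key = ?", (key,)).fetchone()
    if row:  # cache hit: no API call, no cost
        con.close()
        return row[0]
    answer = ask_llm(question)  # cache miss: pay for one model call
    con.execute("INSERT INTO answers VALUES (?, ?)", (key, answer))
    con.commit()
    con.close()
    return answer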

No commits in the last 6 months.

Use this if you are a developer who repeatedly asks similar questions to LLMs and wants to quickly retrieve past answers without incurring new API calls.

Not ideal if you rarely interact with LLMs or if every question you ask is unique and requires a fresh response.

Tags: LLM-interaction, developer-tools, workflow-efficiency, API-cost-reduction, response-caching
Status: Stale (6m) · No Package · No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 10
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Feb 09, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/nathanielknight/llm-questioncache"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
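The same endpoint can be called programmatically; this sketch assumes only the URL above and prints the raw response body, since the response schema isn't documented here:

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "embeddings/nathanielknight/llm-questioncache")

resp = requests.get(URL, timeout=30)
resp.raise_for_status()
print(resp.text)  # raw body; parse as JSON only if the API documents that format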