Siddhant-K-code/distill

Reliable LLM outputs start with clean context. Deterministic deduplication, compression, and caching for RAG pipelines.

Quality score: 46 / 100 (Emerging)

distill cleans up the context you feed to AI agents and large language models. It takes raw, often redundant information from sources such as documents, memory, or tool outputs, then deduplicates and compresses it into a compact, unique context. Cleaner inputs yield more reliable, consistent, and cost-effective model outputs.
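distill's internal pipeline isn't shown on this page; as a minimal sketch of the deterministic-deduplication idea, the Go snippet below hashes each context chunk and keeps only the first occurrence, so identical inputs always produce identical output order. The `dedupe` helper name is our own, not distill's API.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// dedupe removes exact-duplicate context chunks deterministically,
// keeping the first occurrence in input order. Hypothetical helper
// illustrating the technique; not distill's actual API.
func dedupe(chunks []string) []string {
	seen := make(map[[32]byte]bool)
	var out []string
	for _, c := range chunks {
		h := sha256.Sum256([]byte(c)) // content hash as dedup key
		if !seen[h] {
			seen[h] = true
			out = append(out, c)
		}
	}
	return out
}

func main() {
	chunks := []string{"fact A", "fact B", "fact A"}
	fmt.Println(dedupe(chunks)) // the repeated "fact A" is dropped
}
```

Because the key is a content hash and order follows the input, the same raw context always distills to the same cleaned context, which is what makes downstream caching effective.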


Use this if your AI agent or LLM is producing inconsistent or confusing answers due to too much repetitive information in its input context.

Not ideal if you need a solution for improving the core reasoning capabilities of the LLM itself, rather than refining its input.

Tags: AI-assistant-reliability, LLM-workflow-optimization, context-management, knowledge-base-processing
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 13 / 25
Community 13 / 25


Stars: 136
Forks: 14
Language: Go
License: AGPL-3.0
Last pushed: Feb 24, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/Siddhant-K-code/distill"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
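The same endpoint can be called from Go instead of curl. Below is a small sketch that builds the documented URL for any owner/repo pair; only the base URL comes from the listing above, and the `qualityURL` helper name is our own. The built URL would be passed to `http.Get` to fetch the JSON payload.

```go
package main

import "fmt"

// qualityURL builds the quality-API endpoint documented above for a
// given GitHub owner and repository. Hypothetical helper name.
func qualityURL(owner, repo string) string {
	return fmt.Sprintf(
		"https://pt-edge.onrender.com/api/v1/quality/vector-db/%s/%s",
		owner, repo)
}

func main() {
	// Reproduces the curl example's URL for this repository.
	fmt.Println(qualityURL("Siddhant-K-code", "distill"))
}
```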