Siddhant-K-code/distill
Reliable LLM outputs start with clean context. Deterministic deduplication, compression, and caching for RAG pipelines.
When working with AI agents or large language models, this tool ensures your inputs are clear and concise. It takes raw, potentially redundant information from sources such as documents, memory, or tool outputs, and returns a cleaned-up, deduplicated, and compressed context. The result is more reliable, consistent, and cost-effective outputs from your AI.
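The deduplication step can be pictured with a minimal sketch in Go (the repo's language). This is not distill's actual implementation; the `Dedupe` function and its normalization rules are illustrative assumptions. The idea is to hash a normalized form of each chunk so exact and near-whitespace duplicates collapse deterministically, keeping the first occurrence:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// Dedupe removes duplicate context chunks, keeping the first occurrence.
// Chunks are normalized (lowercased, whitespace collapsed) before hashing,
// so the output is deterministic across runs and insertion orders.
// NOTE: illustrative sketch, not distill's real API.
func Dedupe(chunks []string) []string {
	seen := make(map[string]bool)
	var out []string
	for _, c := range chunks {
		norm := strings.Join(strings.Fields(strings.ToLower(c)), " ")
		sum := sha256.Sum256([]byte(norm))
		key := hex.EncodeToString(sum[:])
		if !seen[key] {
			seen[key] = true
			out = append(out, c)
		}
	}
	return out
}

func main() {
	chunks := []string{
		"The cache layer stores results.",
		"The  cache layer stores results.", // extra whitespace: same content
		"Compression shrinks the context.",
	}
	fmt.Println(Dedupe(chunks))
}
```

Hashing a normalized form rather than the raw string is what makes the pass deterministic: the same input set always yields the same deduplicated context, which in turn makes downstream caching effective.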
Use this if your AI agent or LLM is producing inconsistent or confusing answers due to too much repetitive information in its input context.
Not ideal if you need a solution for improving the core reasoning capabilities of the LLM itself, rather than refining its input.
Stars: 136
Forks: 14
Language: Go
License: AGPL-3.0
Last pushed: Feb 24, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/Siddhant-K-code/distill"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
pesu-dev/ask-pesu
A RAG pipeline for question answering about PES University
louisbrulenaudet/ragoon
High level library for batched embeddings generation, blazingly-fast web-based RAG and quantized...
B-A-M-N/FlockParser
Distributed document RAG system with intelligent GPU/CPU orchestration. Auto-discovers...
namtroi/RAGBase
Open Source RAG ETL Platform. Turns PDFs, Docs & Slides into queryable vectors. Features a...
aws-samples/rag-with-amazon-postgresql-using-pgvector-and-sagemaker
Question Answering application with Large Language Models (LLMs) and Amazon Postgresql using pgvector