samuelfaj/distill
Distill large CLI outputs into small answers for LLMs and save tokens!
This tool helps developers and operations engineers make their AI agents more efficient at processing command-line output. It takes lengthy logs, test results, or command output and, given your specific question, distills it into a concise answer. This lets AI assistants quickly grasp the key information in extensive raw data, saving time and tokens.
Use this if you are a developer, operations engineer, or IT professional who uses AI agents to process large amounts of command-line output and want to reduce token usage and improve agent efficiency.
Not ideal if you need the exact, uncompressed raw output from a command, or if you are working with an interactive command-line interface.
Stars: 262
Forks: 16
Language: TypeScript
License: —
Category: —
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/samuelfaj/distill"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
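As a sketch, the same endpoint can also be queried from TypeScript. The helper below only builds the request URL (taken from the curl example above) and fetches it; the shape of the JSON response is not documented here, so treat any field names you find in the payload as needing verification:

```typescript
// Base endpoint, as shown in the curl example.
const BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

// Build the per-repository URL, e.g. for "samuelfaj/distill".
function qualityUrl(repo: string): string {
  return `${BASE}/${repo}`;
}

// Fetch the quality data as parsed JSON. The response structure is an
// assumption: inspect the real payload before relying on field names.
async function fetchQuality(repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(repo));
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

// Usage:
// fetchQuality("samuelfaj/distill").then((data) => console.log(data));
```

Remember the free tier above allows 100 requests/day, so cache responses rather than refetching on every agent run.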
Higher-rated alternatives
mangiucugna/json_repair: A Python module to repair invalid JSON from LLMs.
antfu/shiki-stream: Streaming highlighting with Shiki; useful for highlighting text streams like LLM outputs.
iw4p/partialjson: +1M downloads! Repairs invalid LLM JSON, commonly used to parse the output of LLMs. Parsing...
yokingma/fetch-sse: An easy API for making EventSource requests, with all the features of fetch(). Supports...
kaptinlin/jsonrepair: A high-performance Golang library for easily repairing invalid JSON documents. Designed to fix...