samuelfaj/distill

Distill large CLI outputs into small answers for LLMs and save tokens!

Quality score: 34 / 100 (Emerging)

This tool helps developers and operations engineers make their AI agents more efficient when processing command-line output. Given your specific question, it distills lengthy logs, test results, or command outputs into a concise answer. This lets AI assistants quickly grasp the key information in extensive raw data, saving time and computational resources.
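The repository itself is not shown here, so the following is only a minimal sketch of the general idea (question-guided log filtering), not distill's actual algorithm: keep only the lines that share keywords with the question, plus obvious failure lines, so the agent reads far fewer tokens.

```typescript
// Hypothetical sketch, NOT distill's real implementation: a naive
// question-guided filter over raw log text.
function distillLog(log: string, question: string): string {
  const keywords = question
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w.length > 3); // drop short stopwords like "why", "the"
  return log
    .split("\n")
    .filter((line) => {
      const l = line.toLowerCase();
      // keep lines matching the question, plus obvious failures
      return keywords.some((k) => l.includes(k)) || /error|fail/i.test(line);
    })
    .join("\n");
}

const log = [
  "compiling module auth",
  "compiling module billing",
  "test auth.login ... ok",
  "test billing.charge ... FAILED: timeout after 30s",
  "done in 42s",
].join("\n");

// Only the two billing-related lines survive the filter.
console.log(distillLog(log, "why did the billing tests fail?"));
```

A real distiller would likely do much more (semantic matching, summarization via an LLM), but even this crude filter shows how a 5-line log shrinks to the 2 lines the question is actually about.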


Use this if you are a developer, operations engineer, or IT professional who uses AI agents to process large amounts of command-line output and want to reduce token usage and improve agent efficiency.

Not ideal if you need the exact, uncompressed raw output from a command, or if you are working with an interactive command-line interface.

Tags: AI-agent-efficiency, developer-tools, DevOps, command-line-automation, log-analysis
No License · No Package · No Dependents

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 3 / 25
Community: 11 / 25


Stars: 262
Forks: 16
Language: TypeScript
License: None
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/samuelfaj/distill"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
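The same endpoint can be called programmatically. The URL pattern below is taken from the curl example above; the response schema is not documented here, so this sketch treats the result as opaque JSON.

```typescript
// Sketch of calling the quality API from code. Endpoint path comes
// from the documented curl example; everything else is an assumption.
const API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

function qualityUrl(owner: string, repo: string): string {
  return `${API_BASE}/${owner}/${repo}`;
}

async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`API returned ${res.status}`);
  return res.json(); // shape undocumented; inspect before relying on fields
}

// Example (network call left commented out):
// fetchQuality("samuelfaj", "distill").then(console.log);
console.log(qualityUrl("samuelfaj", "distill"));
```

This matches the rate limits above: anonymous requests work as-is, and a key (however the API expects it to be passed, which is not specified here) would raise the daily quota.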