mpecan/tokf
Config-driven CLI tool that compresses command output before it reaches an LLM context
This tool helps developers and operations engineers condense lengthy command-line output from tools like Git, Cargo, and Docker. It takes verbose command logs and applies filters to extract only the most critical information, delivering a concise summary. This makes it easier for AI agents or users to quickly understand results without sifting through noise.
119 stars and 254 monthly downloads.
Use this if you or your AI assistant frequently analyze command-line output and need to drastically reduce its length for clarity or to save on LLM context tokens.
Not ideal if you need to retain every detail of command output for deep debugging or if your workflows don't involve processing command-line logs with AI.
Stars: 119
Forks: 10
Language: Rust
License: MIT
Last pushed: Mar 13, 2026
Monthly downloads: 254
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mpecan/tokf"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
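The endpoint path above follows an owner/repo pattern, so the same call works for any listed tool. A minimal shell sketch that builds the URL from variables (the response schema is not documented here, so the fetch line only pretty-prints whatever JSON comes back):

```shell
# Build the pt-edge quality endpoint URL for an owner/repo pair
# (mpecan/tokf, taken from the listing above).
OWNER=mpecan
REPO=tokf
URL="https://pt-edge.onrender.com/api/v1/quality/llm-tools/${OWNER}/${REPO}"
echo "$URL"

# Uncomment to actually fetch and pretty-print (requires network access and jq):
# curl -s "$URL" | jq .
```

Swapping `OWNER`/`REPO` for any of the alternatives below (e.g. `rtk-ai/rtk`) should hit that tool's record, assuming the API uses the same path for every entry.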
Higher-rated alternatives
rtk-ai/rtk
CLI proxy that reduces LLM token consumption by 60-90% on common dev commands. Single Rust...
jnsahaj/lumen
Beautiful git diff viewer, generate commits with AI, get summary of changes, all from the CLI
jkawamoto/ctranslate2-rs
Rust bindings for OpenNMT/CTranslate2
Reim-developer/Sephera
Fast Rust CLI for codebase metrics and deterministic LLM context packs
Topos-Labs/infiniloom
High-performance repository context generator for LLMs - Transform codebases into optimized...