AssafWoo/homebrew-ccr

LLM token optimizer for Claude Code. Reduce token costs by 60-99%. Zero config, installs in seconds, and works silently in the background without changing your workflow.

Score: 37 / 100 (Emerging)

This project dramatically reduces the cost of using large language models like Claude Code, Cursor, Gemini CLI, Cline, and VS Code Copilot for software development tasks. It works by intelligently compressing the output from command-line tools (like `pip install`, `cargo build`, `git status`) before the LLM processes it. Developers, DevOps engineers, and anyone using LLM-powered coding assistants will benefit from significantly lower token usage and costs.
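The compression idea described above can be sketched in a few lines of Rust (the project's own language). This is a minimal, illustrative sketch only: the function name, the signal heuristics, and the omitted-lines marker are invented for illustration and are not the project's actual algorithm. The idea is to keep lines that carry signal (errors, warnings, final summaries) and collapse routine progress output into a count before it reaches the LLM.

```rust
/// Illustrative sketch (not the project's real implementation):
/// keep high-signal lines from CLI output, count and drop the rest.
fn compress_output(raw: &str) -> String {
    let mut kept: Vec<String> = Vec::new();
    let mut dropped = 0usize;
    for line in raw.lines() {
        let l = line.trim();
        // Hypothetical signal heuristics: errors, warnings, summaries.
        let is_signal = l.contains("error")
            || l.contains("warning")
            || l.starts_with("Successfully")
            || l.starts_with("Finished");
        if is_signal {
            kept.push(l.to_string());
        } else if !l.is_empty() {
            // Progress noise like "Downloading ..." or "Compiling ...".
            dropped += 1;
        }
    }
    if dropped > 0 {
        kept.push(format!("[{} routine lines omitted]", dropped));
    }
    kept.join("\n")
}

fn main() {
    let raw = "Collecting requests\n\
               Downloading requests-2.31.0-py3-none-any.whl (62 kB)\n\
               Installing collected packages: requests\n\
               Successfully installed requests-2.31.0\n";
    // Prints only the summary line plus a note about omitted lines.
    println!("{}", compress_output(raw));
}
```

A real implementation would need per-tool rules (`pip`, `cargo`, `git` all format output differently), but the payoff is the same: the LLM sees a short summary instead of hundreds of progress lines.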

Use this if you are a developer frequently using AI coding assistants for tasks involving command-line tool outputs and want to reduce your token costs without changing your workflow.

Not ideal if you primarily use LLMs for creative writing, data analysis, or other non-code-related tasks, or if you do not use the specified coding assistants.

software-development devops developer-tools coding-assistant cloud-cost-optimization
No package · No dependents
Maintenance: 13 / 25
Adoption: 6 / 25
Maturity: 9 / 25
Community: 9 / 25


Stars: 18
Forks: 2
Language: Rust
License: MIT
Category: context-tools
Last pushed: Apr 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ai-coding/AssafWoo/homebrew-ccr"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.