base76-research-lab/token-compressor

Token compressor

Overall score: 33 / 100 (Emerging)

This tool helps anyone working with Large Language Models (LLMs) to reduce their operational costs and improve efficiency. It takes your verbose prompts and condenses them into shorter, semantically equivalent versions, ensuring critical instructions like 'only do X if Y' are always preserved. The outcome is a significantly shorter prompt that costs less to process while maintaining your original intent.

Use this if you frequently send long prompts to LLMs and want to reduce token usage and associated costs without sacrificing the clarity or intent of your instructions.

Not ideal if your prompts are already very short (under 80 tokens) or if you need extremely aggressive compression that might alter subtle nuances of meaning, since the tool prioritizes semantic preservation over maximum compression.
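As a toy illustration of the idea (not this repository's actual algorithm, which is not documented here), compression can be thought of as stripping low-information filler from a prompt while leaving conditional instructions such as "only do X if Y" untouched:

```python
import re

# Toy sketch only: the real token-compressor performs semantic
# compression; this just strips common filler phrases while leaving
# conditional instructions ("only ... if ...") intact.
FILLER_PATTERNS = [
    r"\bplease\b",
    r"\bkindly\b",
    r"\bI would like you to\b",
    r"\bmake sure that\b",
]

def toy_compress(prompt: str) -> str:
    out = prompt
    for pattern in FILLER_PATTERNS:
        out = re.sub(pattern, "", out, flags=re.IGNORECASE)
    # Collapse the whitespace left behind by the removals.
    return re.sub(r"\s+", " ", out).strip()

original = "Please summarize this report, and only include figures if they are cited."
compressed = toy_compress(original)
print(compressed)
```

Even this crude pass shortens the prompt while keeping the "only ... if ..." constraint verbatim; the real tool's semantic approach goes much further.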

Tags: LLM-prompt-optimization, AI-workflow-efficiency, text-summarization, cost-reduction, natural-language-processing

No package · No dependents

Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 11 / 25
Community: 8 / 25


Stars: 8
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mcp/base76-research-lab/token-compressor"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
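The same endpoint can be called from Python. The URL pattern below comes from the curl example above; the fetch itself and the response schema are assumptions, since the API's JSON fields are not documented on this page:

```python
import urllib.request
import json

# Base path taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("base76-research-lab", "token-compressor")
print(url)

# Uncomment to fetch; the response schema is not documented here,
# so inspect the returned JSON before relying on specific fields.
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```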