metawake/prompt_compressor

Compresses LLM prompts while preserving semantic meaning to reduce token usage and cost.

Quality score: 29 / 100 (Experimental)

Compresses text prompts for large language models (LLMs) to reduce the number of tokens used, which lowers processing costs and speeds up response times. It takes your original prompt as input and outputs a shorter, semantically equivalent version. This tool is for anyone working with LLMs who wants to optimize their token usage, such as AI application developers, content creators, or researchers.

No commits in the last 6 months.

Use this if you frequently send long prompts to LLMs and want to save on API costs or improve processing efficiency without losing the core meaning of your message.

Not ideal if your prompts are already very short, or if your application needs the exact, unedited prompt to reach the model (for example, sensitive or highly specialized use cases).
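The listing does not document the repo's actual algorithm, but the idea of rule-based prompt compression can be sketched as dropping filler words and collapsing whitespace while keeping the instruction intact. Everything below (the `compress` function and the `FILLER` set) is a hypothetical illustration, not code from metawake/prompt_compressor.

```python
import re

# Hypothetical filler words a rule-based compressor might drop.
# The real project may use a very different (e.g. model-based) approach.
FILLER = {"please", "kindly", "basically", "actually", "really", "very", "just"}

def compress(prompt: str) -> str:
    """Collapse runs of whitespace and drop common filler words."""
    words = prompt.split()  # split() also collapses repeated spaces/newlines
    kept = [w for w in words if w.lower().strip(".,!?") not in FILLER]
    return " ".join(kept)

print(compress("Please  summarize   this  very long article, just the key points."))
```

Even this naive pass shortens the prompt without losing the core request; real compressors typically go further, e.g. abbreviating phrases or pruning low-salience sentences.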

Tags: LLM cost optimization · prompt engineering · AI application development · token management · content generation
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 4 / 25
Maturity: 15 / 25
Community: 8 / 25


Stars: 8
Forks: 1
Language: Python
License: MIT
Last pushed: Apr 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/metawake/prompt_compressor"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
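For programmatic access from Python, the curl example above can be reproduced with the standard library. Only the endpoint URL comes from the listing; the `quality_url` helper and the response-handling comment are assumptions for illustration.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL; path segments taken from the curl example."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("prompt-engineering", "metawake", "prompt_compressor")
# To actually fetch (counts against the 100 requests/day anonymous limit):
# data = json.load(urlopen(url))
print(url)
```

Keeping the fetch itself behind a comment avoids burning anonymous quota when experimenting with URL construction.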