metawake/prompt_compressor
Compresses LLM prompts while preserving semantic meaning to reduce token usage and cost.
Compresses text prompts for large language models (LLMs) to reduce token usage, which lowers processing costs and speeds up response times. It takes your original prompt as input and outputs a shorter, semantically equivalent version. It suits anyone working with LLMs who wants to optimize token usage: AI application developers, content creators, and researchers.
No commits in the last 6 months.
Use this if you frequently send long prompts to LLMs and want to save on API costs or improve processing efficiency without losing the core meaning of your message.
Not ideal if your prompts are already very short or if you require precise, unedited input for very sensitive or niche LLM applications.
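The repository's actual algorithm isn't documented on this page, so as a rough illustration of the idea, here is a minimal sketch of lossy prompt compression: collapse whitespace and drop common filler words that rarely change a request's meaning. The `FILLERS` list and `compress_prompt` function are assumptions for illustration, not the tool's real API.

```python
import re

# Illustrative filler words; a real compressor uses far more
# sophisticated, meaning-aware techniques than a fixed stoplist.
FILLERS = {"please", "kindly", "basically", "actually", "really",
           "very", "just", "quite", "simply"}

def compress_prompt(prompt: str) -> str:
    """Naively shrink a prompt: collapse whitespace, drop filler words."""
    words = re.sub(r"\s+", " ", prompt).strip().split(" ")
    kept = [w for w in words if w.lower().strip(".,!?") not in FILLERS]
    return " ".join(kept)

original = "Please could you really just summarize   this very long article?"
shorter = compress_prompt(original)
print(shorter)  # -> could you summarize this long article?
print(len(original.split()), "->", len(shorter.split()))  # 10 -> 6
```

Even this crude pass trims tokens while keeping the request intelligible; the trade-off the listing describes is exactly this one, at a much more careful level.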
Stars
8
Forks
1
Language
Python
License
MIT
Category
prompt-engineering
Last pushed
Apr 14, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/metawake/prompt_compressor"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
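The same endpoint can be called from code. A small sketch, assuming only the URL pattern shown in the curl command above (the response schema isn't documented on this page, so the fetch just dumps whatever JSON comes back):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL from the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("prompt-engineering", "metawake", "prompt_compressor")
print(url)

# Uncomment to fetch live (counts against the 100 requests/day limit):
# with urllib.request.urlopen(url, timeout=10) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```

The helper name `quality_url` is hypothetical; only the URL itself comes from the listing.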
Higher-rated alternatives
connectaman/LoPace
LoPace is a bi-directional encoding framework designed to reduce the storage footprint of...
LakshmiN5/promptqc
ESLint for your system prompts — catch contradictions, anti-patterns, injection vulnerabilities,...
roli-lpci/lintlang
Static linter for AI agent tool descriptions, system prompts, and configs. Catches vague...
sbsaga/toon
TOON — Laravel AI package for compact, human-readable, token-efficient data format with JSON ⇄...
nooscraft/tokuin
CLI tool – estimates LLM tokens/costs and runs provider-aware load tests for OpenAI, Anthropic,...