base76-research-lab/token-compressor
Token compressor
This tool helps anyone working with large language models (LLMs) reduce operational costs and improve efficiency. It condenses verbose prompts into shorter, semantically equivalent versions, preserving critical conditional instructions such as 'only do X if Y'. The result is a significantly shorter prompt that costs less to process while maintaining the original intent.
Use this if you frequently send long prompts to LLMs and want to reduce token usage and associated costs without sacrificing the clarity or intent of your instructions.
Not ideal if your prompts are already very short (under 80 tokens) or if you need extremely aggressive compression that might alter subtle nuances of meaning, as this prioritizes semantic preservation.
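To illustrate the kind of savings being discussed, token counts can be approximated with the common rule of thumb of roughly 4 characters per token for English text. The `estimate_tokens` helper and the sample prompts below are hypothetical illustrations, not part of this tool's API:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters/token rule of thumb."""
    return max(1, len(text) // 4)

# Hypothetical before/after pair; real output depends on the compressor.
verbose = ("Please could you carefully summarize the following document, "
           "but only include the financial figures if they are audited.")
compressed = "Summarize the document; include financial figures only if audited."

saved = estimate_tokens(verbose) - estimate_tokens(compressed)
pct = 100 * saved / estimate_tokens(verbose)
print(f"verbose ~{estimate_tokens(verbose)} tokens, "
      f"compressed ~{estimate_tokens(compressed)} tokens, saved {pct:.0f}%")
```

Note that the compressed version keeps the conditional ("only if audited") intact, which is the semantic-preservation property the description emphasizes.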
Stars
8
Forks
1
Language
Python
License
MIT
Category
Last pushed
Mar 08, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mcp/base76-research-lab/token-compressor"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
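The same endpoint can be queried programmatically. This sketch assumes only the URL pattern shown in the curl example above; the response schema is not documented here, so `fetch_quality` simply returns the parsed JSON, and the helper names are illustrative:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository (owner/name)."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON (free tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Build the URL without hitting the network:
print(quality_url("base76-research-lab", "token-compressor"))
```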
Higher-rated alternatives
DMontgomery40/deepseek-mcp-server
Model Context Protocol server for DeepSeek's advanced language models
upstash/context7
Context7 Platform -- Up-to-date code documentation for LLMs and AI code editors
graphlit/graphlit-mcp-server
Model Context Protocol (MCP) Server for Graphlit Platform
dvcrn/mcp-server-siri-shortcuts
MCP for calling Siri Shortcuts from LLMs
rawveg/ollama-mcp
An MCP Server for Ollama