chatde/tokenshrink
Same AI, fewer tokens. Free forever. — tokenshrink.com
tokenshrink condenses verbose prompts to AI language models into shorter equivalents, cutting token usage and cost while preserving the original meaning. It is aimed at anyone who regularly works with tools like ChatGPT or Claude, from business analysts and marketers to medical staff and developers.
Available on npm.
Use this if you frequently interact with AI language models and want to lower your operational costs and potentially speed up responses by using fewer tokens.
Not ideal if your prompts are already short and concise, since there will be little or no token savings.
Stars
7
Forks
—
Language
JavaScript
License
MIT
Category
prompt-engineering
Last pushed
Mar 10, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/chatde/tokenshrink"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
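The same endpoint can be called from JavaScript. A minimal sketch, assuming only the URL pattern shown in the curl example above; the helper name `qualityUrl` and any response fields are illustrative, not part of a documented client:

```javascript
// Build the quality-API URL for a given category/owner/repo.
// Pattern taken from the curl example above; nothing else is assumed.
const BASE = "https://pt-edge.onrender.com/api/v1/quality";

function qualityUrl(category, owner, repo) {
  // encodeURIComponent guards against unusual characters in names
  return `${BASE}/${category}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// The tokenshrink entry from this page:
const url = qualityUrl("prompt-engineering", "chatde", "tokenshrink");
console.log(url);

// To actually fetch it (100 requests/day without a key):
// fetch(url).then((r) => r.json()).then((data) => console.log(data));
```

The network call is left commented out so the snippet runs offline; swap in your own owner/repo pair to look up other projects.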
Higher-rated alternatives
connectaman/LoPace
LoPace is a bi-directional encoding framework designed to reduce the storage footprint of...
LakshmiN5/promptqc
ESLint for your system prompts — catch contradictions, anti-patterns, injection vulnerabilities,...
roli-lpci/lintlang
Static linter for AI agent tool descriptions, system prompts, and configs. Catches vague...
sbsaga/toon
TOON — Laravel AI package for compact, human-readable, token-efficient data format with JSON ⇄...
nooscraft/tokuin
CLI tool – estimates LLM tokens/costs and runs provider-aware load tests for OpenAI, Anthropic,...