anpl-code/Deblank
A reversible code minifier for AI. Save tokens by stripping code formatting from your prompt, then perfectly restore it in the response.
Deblank helps AI developers reduce the cost of using large language models (LLMs) for coding tasks. It takes human-readable source code (like Java, Python, C#) and removes non-essential formatting, significantly lowering the token count for LLM prompts. The LLM then processes this compact code, and Deblank restores its original, readable format in the response. This tool is for developers building AI-powered coding assistants, code generation tools, or similar LLM-driven applications.
Use this if you are a developer building LLM applications that process or generate code and need to optimize token usage and cost without sacrificing code readability or semantic accuracy.
Not ideal if you are working with natural language processing tasks that do not involve source code, or if your primary concern is not LLM token efficiency.
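The round-trip idea behind the tool can be illustrated with a minimal sketch. This is not Deblank's actual API or algorithm, just a hypothetical demonstration of reversible minification: record each line's leading whitespace, send the compact text to the LLM, then re-apply the saved indentation afterward.

```python
# Hypothetical sketch of reversible minification (NOT Deblank's real API):
# save each line's leading whitespace, strip it to shrink the prompt,
# then re-attach it to restore the original formatting exactly.

def deblank(source: str):
    """Return (compact_text, format_map) for later restoration."""
    fmt = []      # leading-whitespace string per line
    compact = []
    for line in source.split("\n"):
        stripped = line.lstrip()
        fmt.append(line[: len(line) - len(stripped)])
        compact.append(stripped)
    return "\n".join(compact), fmt

def reblank(compact: str, fmt) -> str:
    """Re-apply the recorded indentation line by line."""
    return "\n".join(ws + line for ws, line in zip(fmt, compact.split("\n")))

code = "def f(x):\n    if x:\n        return x\n    return 0"
small, fmt = deblank(code)
assert reblank(small, fmt) == code   # round-trip is lossless
assert len(small) < len(code)        # fewer characters to tokenize
```

A real implementation would also have to handle languages where whitespace is semantic (Python's indentation above is preserved only because the map is kept verbatim) and would map the saved formatting onto the LLM's edited output, which is the harder part of the problem.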
Stars
18
Forks
—
Language
Python
License
MIT
Category
Last pushed
Mar 23, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/anpl-code/Deblank"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
isEmmanuelOlowe/llm-cost-estimator
Estimating hardware and cloud costs of LLMs and transformer projects
WilliamJlvt/llm_price_scraper
A simple Python Scraper to retrieve pricing information for Large Language Models (LLMs) from an...
nuxdie/ai-pricing
Compare AI model pricing and performance in a simple interactive web app.
FareedKhan-dev/save-llm-api-cost
A straightforward method to reduce your LLM inference API costs and token usage.
paradite/llm-info
Information on LLM models, context window token limit, output token limit, pricing and more.