Mattbusel/Token-Visualizer
The ultimate tool for analyzing, visualizing, and optimizing your LLM prompts
This tool helps anyone working with Large Language Models (LLMs) understand and reduce the cost of their prompts. You input your text prompt, and it shows you exactly how many tokens it uses, highlighting expensive sections. The output includes a breakdown of token usage, efficiency metrics, and suggestions to make your prompts shorter and more cost-effective.
Use this if you are building applications with LLMs and want to minimize API costs by optimizing the length and efficiency of your text prompts.
Not ideal if you are not working with LLMs, or if you only need a basic word count rather than token-level cost optimization.
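The per-section breakdown described above can be sketched in a few lines. This is a minimal illustration, not the repo's actual implementation: real tokenizers use BPE vocabularies, so here a common rough rule of thumb (about 4 characters per token) stands in for exact counts, and the section names are made up for the example.

```python
# Sketch of a per-section token-cost breakdown. The ~4 chars/token ratio is
# a rough heuristic, NOT an exact tokenizer; the real tool's counts will differ.
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about one token per 4 characters."""
    return max(1, len(text) // 4)

def breakdown(sections: dict[str, str]) -> tuple[dict[str, int], str]:
    """Return per-section estimates and the name of the most expensive section."""
    counts = {name: estimate_tokens(text) for name, text in sections.items()}
    priciest = max(counts, key=counts.get)
    return counts, priciest

# Hypothetical prompt split into named sections:
counts, priciest = breakdown({
    "system": "You are a helpful assistant. Always answer concisely.",
    "few_shot": "Example 1: ... Example 2: ... Example 3: ...",
})
```

Highlighting `priciest` is the essence of "highlighting expensive sections": once each section has a count, the optimization suggestions are just a ranking over those counts.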
Stars: 9
Forks: 1
Language: Python
License: —
Category:
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Mattbusel/Token-Visualizer"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
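If you prefer calling the endpoint from Python rather than curl, a small helper can build the per-repository URL. The URL pattern is taken from the example above; the shape of the JSON response is not documented here, so the sketch just fetches and prints whatever comes back rather than assuming field names.

```python
# Sketch of a Python client for the quality API. Only the URL pattern is
# known from the page; the response schema is an assumption, so we print it raw.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("Mattbusel", "Token-Visualizer")

# Uncomment to actually call the API (counts against the 100 requests/day limit):
# with urllib.request.urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```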
Higher-rated alternatives
connectaman/LoPace: LoPace is a bi-directional encoding framework designed to reduce the storage footprint of...
LakshmiN5/promptqc: ESLint for your system prompts — catch contradictions, anti-patterns, injection vulnerabilities,...
roli-lpci/lintlang: Static linter for AI agent tool descriptions, system prompts, and configs. Catches vague...
sbsaga/toon: TOON — Laravel AI package for compact, human-readable, token-efficient data format with JSON ⇄...
nooscraft/tokuin: CLI tool – estimates LLM tokens/costs and runs provider-aware load tests for OpenAI, Anthropic,...