nooscraft/tokuin
CLI tool that estimates LLM token counts and costs, and runs provider-aware load tests against OpenAI, Anthropic, OpenRouter, or custom endpoints.
This tool helps anyone working with AI models manage and reduce their costs. You provide your prompts or conversations, and it tells you how many tokens they use and what they will cost across different AI providers like OpenAI or Anthropic. It also offers a unique compression feature that can significantly shrink your prompts while keeping their meaning, so you pay less for the same instructions.
Use this if you are building applications with large language models and want to accurately estimate API costs, compare token usage across different models, or reduce the cost of your prompts and conversations.
Not ideal if you primarily work with open-source models hosted locally and are not concerned with API costs or token limits from commercial providers.
Stars: 128
Forks: 3
Language: Rust
License: —
Category:
Last pushed: Feb 20, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/nooscraft/tokuin"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
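The endpoint above can be parameterized for scripting. This is a minimal sketch, assuming the path layout shown in the example (category segment followed by `owner/name`); how an API key is passed for the 1,000/day tier is not documented here, so the sketch uses the keyless tier.

```shell
# Build the quality-API URL for a listed repo (path layout inferred
# from the curl example above; adjust if the API differs).
BASE="https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"
REPO="nooscraft/tokuin"   # owner/name of the repo to look up
URL="${BASE}/${REPO}"
echo "$URL"
# curl -s "$URL"   # uncomment to fetch the JSON payload
```

The fetch line is left commented out so the sketch runs without network access.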
Higher-rated alternatives
connectaman/LoPace
LoPace is a bi-directional encoding framework designed to reduce the storage footprint of...
LakshmiN5/promptqc
ESLint for your system prompts — catch contradictions, anti-patterns, injection vulnerabilities,...
roli-lpci/lintlang
Static linter for AI agent tool descriptions, system prompts, and configs. Catches vague...
sbsaga/toon
TOON — Laravel AI package for compact, human-readable, token-efficient data format with JSON ⇄...
therohanparmar/t3-toon
TOON for TYPO3 — a compact, human-readable, and token-efficient data format for AI prompts & LLM...