Mattbusel/Every-Other-Token
A real-time LLM stream interceptor for token-level interaction research
This project helps researchers and practitioners investigate how Large Language Models (LLMs) generate text, token by token. It intercepts the real-time stream of an LLM's response, providing insights into per-token confidence and perplexity, and allowing for real-time manipulation of the output. End-users like AI researchers, red teamers, and prompt engineers can use this to understand, test, and refine LLM behavior.
Use this if you need to deeply understand the underlying mechanics of LLM text generation, perform systematic testing of prompts or model vulnerabilities, or visualize token-level confidence and attribution.
Not ideal if you're solely interested in high-level LLM application development without needing fine-grained, token-level analysis or stream manipulation.
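To make the "per-token confidence and perplexity" idea concrete, here is a minimal sketch (not code from this repo) of how those quantities are typically derived from the per-token log-probabilities that an LLM stream can expose: each token's confidence is `exp(logprob)`, and sequence perplexity is the exponential of the negative mean log-probability. The log-prob values below are hypothetical.

```python
import math

def token_confidences(token_logprobs):
    """Convert per-token log-probabilities into probabilities (confidences)."""
    return [math.exp(lp) for lp in token_logprobs]

def sequence_perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical log-probs from four streamed tokens.
logprobs = [-0.12, -2.3, -0.05, -1.7]
print([round(c, 3) for c in token_confidences(logprobs)])
print(round(sequence_perplexity(logprobs), 3))  # → 2.836
```

A perplexity of 1.0 means the model was certain of every token; higher values indicate the model found its own output more surprising, which is the signal a stream interceptor can surface token by token.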
Stars: 24
Forks: 1
Language: Rust
License: —
Category: —
Last pushed: Mar 09, 2026
Monthly downloads: 22
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Mattbusel/Every-Other-Token"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jmuncor/tokentap
Intercept LLM API traffic and visualize token usage in a real-time terminal dashboard. Track...
AgentOps-AI/tokencost
Easy token price estimates for 400+ LLMs. TokenOps.
Merit-Systems/echo
The User Pays AI SDK
Ruthwik000/tokenfirewall
Scalable LLM cost enforcement middleware for Node.js with budget protection and multi-provider support
adarshxs/TokenTally
Estimate Your LLM's Token Toll Across Various Platforms and Configurations