Mattbusel/Every-Other-Token

A real-time LLM stream interceptor for token-level interaction research

Quality score: 30 / 100 (Emerging)

This project helps researchers and practitioners investigate how Large Language Models (LLMs) generate text, token by token. It intercepts the real-time stream of an LLM's response, providing insights into per-token confidence and perplexity, and allowing for real-time manipulation of the output. End-users like AI researchers, red teamers, and prompt engineers can use this to understand, test, and refine LLM behavior.

Use this if you need to deeply understand the underlying mechanics of LLM text generation, perform systematic testing of prompts or model vulnerabilities, or visualize token-level confidence and attribution.

Not ideal if you're solely interested in high-level LLM application development without needing fine-grained, token-level analysis or stream manipulation.

Tags: LLM interpretability, AI safety testing, prompt engineering, model debugging, NLP research
No license · No package · No dependents
Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 7 / 25
Community: 4 / 25


Stars: 24
Forks: 1
Language: Rust
License: none
Last pushed: Mar 09, 2026
Monthly downloads: 22
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Mattbusel/Every-Other-Token"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.