taylorsatula/TeaLeaves
End-to-end pipeline for seeing how LLMs actually process your prompts. Capture attention across every layer, render heatmaps and cooking curves, compare variants with evidence — not vibes.
This tool helps you understand how large language models (LLMs) interpret your prompts and where they focus their attention. You provide your system prompt and user messages, then mark specific regions of interest within them. The tool outputs visual heatmaps and 'cooking curves' that show which parts of your prompt influence the model at each internal processing layer, letting you compare different prompt versions.
Use this if you are a prompt engineer, AI product manager, or researcher who needs to visualize and debug why an LLM responds the way it does, especially when iteratively refining prompts for better performance.
Not ideal if you're looking for a simple API to evaluate LLM output quality without needing detailed internal interpretability.
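To make the mechanism concrete, here is a minimal sketch of the general technique the description refers to; it is not TeaLeaves' actual code or API. It uses Hugging Face transformers with output_attentions=True to capture per-layer attention for a prompt, then plots a simple "cooking curve": how much attention flows into a marked region at each layer. The model (gpt2), the prompt, and the region string are all illustrative assumptions.

import torch
import matplotlib.pyplot as plt
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch only -- not TeaLeaves' implementation.
model_name = "gpt2"  # any causal LM works; gpt2 keeps the example small
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_attentions=True)
model.eval()

prompt = "You are a concise assistant. Summarize the following text in one sentence."
region = "Summarize the following text"  # the region of interest to track

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# Map the character span of the region to token positions (fast tokenizer
# offsets; a token counts if it overlaps the region at all).
offsets = tokenizer(prompt, return_offsets_mapping=True)["offset_mapping"]
start = prompt.index(region)
end = start + len(region)
region_idx = [i for i, (s, e) in enumerate(offsets) if s < end and e > start]

# "Cooking curve": mean attention flowing into the region at each layer,
# averaged over heads and query positions.
curve = []
for layer_attn in out.attentions:          # tuple of (batch, heads, seq, seq)
    attn = layer_attn[0].mean(dim=0)       # average over heads -> (seq, seq)
    curve.append(attn[:, region_idx].sum(dim=-1).mean().item())

plt.plot(range(1, len(curve) + 1), curve, marker="o")
plt.xlabel("Layer")
plt.ylabel("Mean attention into region")
plt.title("Attention to region across layers")
plt.show()

Averaging over heads and query positions is one of several reasonable aggregations; per-head or max-over-queries views tell different stories, which is exactly why layer-by-layer plots are useful for comparing prompt variants.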
Stars: 20
Forks: —
Language: Python
License: —
Category: Prompt Engineering
Last pushed: Mar 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/taylorsatula/TeaLeaves"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
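If you prefer Python to curl, a minimal sketch of the same unauthenticated request follows; the response schema is not documented here, so it just pretty-prints whatever JSON comes back.

import json
import requests

# Same endpoint as the curl example above; no key needed at 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/taylorsatula/TeaLeaves"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))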
Higher-rated alternatives
dottxt-ai/outlines
Structured Outputs
takashiishida/arxiv-to-prompt
Transform arXiv papers into a single LaTeX source that can be used as a prompt for asking LLMs...
microsoft/promptpex
Test Generation for Prompts
Spr-Aachen/LLM-PromptMaster
A simple LLM-powered chatbot application.
AI-secure/aug-pe
[ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text