taylorsatula/TeaLeaves

End-to-end pipeline for seeing how LLMs actually process your prompts. Capture attention across every layer, render heatmaps and cooking curves, compare variants with evidence — not vibes.

Quality score: 19 / 100 (Experimental)

This tool helps you understand how large language models (LLMs) interpret your prompts and where they focus their 'attention.' You supply your system prompt and user messages and define specific regions of interest within them. The tool then outputs visual heatmaps and 'cooking curves' that show which parts of your prompt influence the model at each internal processing layer, so you can compare different prompt versions against each other.
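
As a rough illustration of the underlying idea only (not TeaLeaves's actual API), the sketch below uses Hugging Face transformers to collect attention from every layer of a small causal LM and compute a per-layer curve of attention mass flowing into one region of interest. The model name, prompt, and region strings are placeholder assumptions, and averaging over heads is just one of several reasonable aggregation choices.

# Minimal sketch, assuming a Hugging Face causal LM with a fast tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                      # placeholder: any causal LM with attention outputs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "System: answer concisely.\nUser: summarize the attached report."
region = "answer concisely"              # region of interest within the prompt

enc = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    # out.attentions is a tuple: one (batch, heads, seq, seq) tensor per layer
    out = model(**enc, output_attentions=True)

# Map the region back to token indices via character offsets (fast tokenizer only).
offsets = tokenizer(prompt, return_offsets_mapping=True)["offset_mapping"]
start = prompt.index(region)
end = start + len(region)
region_idx = [i for i, (s, e) in enumerate(offsets) if s < end and e > start]

# Per-layer curve: mean attention mass flowing into the region of interest.
curve = []
for layer_attn in out.attentions:
    attn = layer_attn[0].mean(dim=0)             # average over heads -> (seq, seq)
    curve.append(attn[:, region_idx].sum(dim=-1).mean().item())

for layer, mass in enumerate(curve):
    print(f"layer {layer:2d}: {mass:.4f}")

Plotting these per-layer values against layer index yields the kind of curve the tool renders; a heatmap is the same data kept per token instead of averaged.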

Use this if you are a prompt engineer, AI product manager, or researcher needing to visualize and debug why an LLM responds in a certain way, especially when iteratively refining prompts for better performance.

Not ideal if you're looking for a simple API to evaluate LLM output quality without needing detailed internal interpretability.

prompt-engineering LLM-fine-tuning AI-explainability model-debugging generative-AI
No License · No Package · No Dependents
Maintenance 10 / 25
Adoption 6 / 25
Maturity 3 / 25
Community 0 / 25


Stars: 20
Forks:
Language: Python
License: none
Last pushed: Mar 08, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/taylorsatula/TeaLeaves"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
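
If you prefer Python to curl, a minimal equivalent using the requests library (assuming the endpoint returns JSON) looks like this:

# Fetch the same quality data in Python; requires the `requests` package.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/taylorsatula/TeaLeaves"
resp = requests.get(url, timeout=10)
resp.raise_for_status()          # surface HTTP errors (e.g. rate limiting)
print(resp.json())               # quality scores and repo metadata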