IAAR-Shanghai/Awesome-Attention-Heads

An awesome-list repository and a comprehensive survey on the interpretability of LLM attention heads.

Overall score: 28 / 100 (Experimental)

This resource provides a curated collection of recent research and a comprehensive survey on how attention heads within Large Language Models (LLMs) function. It helps AI researchers and practitioners look inside the 'black box' of LLMs by showing how individual attention heads contribute to reasoning and knowledge processing. The result is a categorized overview of research papers and methodologies on the inner workings of LLMs.

400 stars. No commits in the last 6 months.

Use this if you are an AI researcher or practitioner looking to deepen your understanding of how Large Language Models (LLMs) make decisions and process information, with a particular focus on the interpretability of their attention mechanisms.

Not ideal if you are looking for a practical tool for building or deploying LLMs, or if you are not interested in academic research on the mechanistic interpretability of these models.

Topics: LLM interpretability · Transformer architecture · AI research · Machine learning engineering · Cognitive AI
No license · Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 10 / 25

How are scores calculated? The overall score here is the sum of the four subscores above: Maintenance 0 + Adoption 10 + Maturity 8 + Community 10 = 28 out of a possible 100 (25 points each).

Stars: 400
Forks: 12
Language: TeX
License: none
Last pushed: Mar 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/IAAR-Shanghai/Awesome-Attention-Heads"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
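
For scripted use, here is a minimal sketch of calling the same endpoint from Python using only the standard library. The response schema is not documented in this listing, so the sketch simply pretty-prints whatever JSON comes back, and it assumes the keyless public tier (100 requests/day) rather than guessing how an API key would be passed.

import json
import urllib.request

# Same endpoint as the curl command above; no API key is sent,
# so the keyless public tier (100 requests/day) is assumed.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/IAAR-Shanghai/Awesome-Attention-Heads")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# The response schema is not documented in this listing,
# so just pretty-print the returned JSON.
print(json.dumps(data, indent=2))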