Eric2i/LLM-MindMap

Official implementation of "Mapping the Minds of LLMs: A Graph-Based Analysis of Reasoning LLMs" (EMNLP 2025).

Score: 24 / 100 (Experimental)

This toolkit helps AI researchers and developers understand how large language models (LLMs) reason. It takes raw Chain-of-Thought (CoT) traces from an LLM's responses and transforms them into a structured 'reasoning graph'. This graph, along with computed metrics, reveals insights into the model's cognitive process beyond simple token counts, such as how it explores ideas or converges on conclusions.

Use this if you are a researcher or developer who needs to deeply analyze the internal reasoning steps of an LLM to evaluate its cognitive processes.

Not ideal if you are looking for a tool to simply deploy or fine-tune an LLM, or if you don't have Chain-of-Thought data.
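To make the "reasoning graph" idea above concrete, here is a minimal, purely illustrative sketch: nodes are CoT steps, sequential edges capture the flow of reasoning, and a crude back-edge heuristic flags when the model revisits an earlier idea. The function names, the revisit heuristic, and the branching metric are all invented for illustration; this is not the repo's actual pipeline.

```python
# Hypothetical sketch of a reasoning graph built from a CoT trace.
# Heuristics and names are invented for illustration, not taken from the repo.
from collections import defaultdict

def build_reasoning_graph(cot_steps):
    """Link each step to the next; add a back-edge when a step opens
    with the same word as an earlier step (a crude 'revisit' signal)."""
    edges = defaultdict(set)
    for i in range(len(cot_steps) - 1):
        edges[i].add(i + 1)  # sequential reasoning flow
    for i, step in enumerate(cot_steps):
        for j in range(i):
            if step.split()[0].lower() == cot_steps[j].split()[0].lower():
                edges[i].add(j)  # model circles back to an earlier idea
    return dict(edges)

def branching_factor(edges):
    """Toy 'exploration' metric: average out-degree over nodes with edges."""
    return sum(len(v) for v in edges.values()) / len(edges)

trace = [
    "Consider splitting the problem into cases.",
    "Case 1: n is even, so n = 2k.",
    "Case 2: n is odd.",
    "Consider the parity argument again.",
]
g = build_reasoning_graph(trace)
print(g)                   # adjacency map: step index -> set of successor steps
print(branching_factor(g)) # higher values suggest more branching/revisiting
```

The actual toolkit computes richer metrics over the graph; this sketch only shows why a graph view can expose revisiting and branching behavior that raw token counts cannot.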

LLM-analysis AI-research NLP-evaluation cognitive-modeling model-interpretability
No license · No package · No dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 7 / 25
Community: 6 / 25


Stars: 12
Forks: 1
Language: Python
License: none
Last pushed: Oct 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Eric2i/LLM-MindMap"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
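The curl call above can be reproduced in Python with only the standard library. Only the URL pattern is taken from the example; the meaning of the path segments and the shape of the JSON response are not documented here, so the sketch prints the response as-is rather than parsing assumed fields.

```python
# Minimal Python equivalent of the curl example above.
# The path layout (quality/<segment>/<owner>/<repo>) mirrors the shown URL;
# what the first segment ("transformers") denotes is an assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(segment, owner, repo):
    """Build the endpoint URL following the pattern in the curl example."""
    return f"{BASE}/{segment}/{owner}/{repo}"

url = quality_url("transformers", "Eric2i", "LLM-MindMap")
print(url)

# Uncomment to actually fetch (requires network access):
# with urllib.request.urlopen(url, timeout=10) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```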