Eric2i/LLM-MindMap
EMNLP 2025 - "Mapping the Minds of LLMs: A Graph-Based Analysis of Reasoning LLMs", Official Implementation
This toolkit helps AI researchers and developers understand how large language models (LLMs) reason. It takes raw Chain-of-Thought (CoT) traces from an LLM's responses and transforms them into a structured "reasoning graph". The graph, together with computed metrics, surfaces properties of the model's reasoning that simple token counts miss, such as how it explores ideas or converges on conclusions.
Use this if you are a researcher or developer who needs to deeply analyze the internal reasoning steps of an LLM to evaluate its cognitive processes.
Not ideal if you are looking for a tool to simply deploy or fine-tune an LLM, or if you don't have Chain-of-Thought data.
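To make the trace-to-graph idea concrete, here is a minimal, illustrative sketch, not the paper's actual algorithm: it splits a CoT trace into steps, links consecutive steps to capture sequential flow, and adds a "revisit" edge whenever a step shares enough content words with an earlier, non-adjacent step. The function name, the sentence-splitting heuristic, and the overlap threshold are all assumptions for illustration.

```python
def build_reasoning_graph(trace: str, overlap: int = 2):
    """Hypothetical sketch: turn a CoT trace into a step graph.

    Nodes are sentence-level steps; sequential edges capture the flow of
    reasoning, and "revisit" edges mark returns to an earlier idea
    (approximated here by word overlap between non-adjacent steps).
    """
    steps = [s.strip() for s in trace.split(".") if s.strip()]
    words = [set(s.lower().split()) for s in steps]
    edges = [(i, i + 1) for i in range(len(steps) - 1)]   # sequential flow
    revisits = [
        (i, j)
        for i in range(len(steps))
        for j in range(i + 2, len(steps))                 # skip adjacent steps
        if len(words[i] & words[j]) >= overlap
    ]
    return {"steps": steps, "edges": edges, "revisits": revisits}
```

Counting revisit edges versus sequential edges then gives a crude exploration-vs-convergence signal of the kind the toolkit's metrics formalize.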
Stars
12
Forks
1
Language
Python
License
—
Category
Last pushed
Oct 18, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Eric2i/LLM-MindMap"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
cvs-health/uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM...
PRIME-RL/TTRL
[NeurIPS 2025] TTRL: Test-Time Reinforcement Learning
sapientinc/HRM
Hierarchical Reasoning Model Official Release
tigerchen52/query_level_uncertainty
query-level uncertainty in LLMs
reasoning-survey/Awesome-Reasoning-Foundation-Models
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models