stefdesabbata/geospatial-mechanistic-interpretability

Geospatial Mechanistic Interpretability of Large Language Models

23 / 100
Experimental

This project helps researchers and academics understand how large language models (LLMs) process geographical information internally. It applies spatial analysis techniques to LLM responses to placenames, revealing how these models represent and "think" about locations and offering insight into their internal mechanisms for geospatial data.
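
The general technique lends itself to a compact illustration. The sketch below is a minimal, illustrative example, not the project's actual pipeline: it probes a small open model's hidden states for a handful of placenames, then checks whether one activation dimension varies smoothly over space using Moran's I. The model choice (gpt2), the probed layer, and the toy coordinates are all assumptions for demonstration.

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from libpysal.weights import KNN
from esda.moran import Moran

# Toy placenames with (latitude, longitude); illustrative, not project data.
places = {"London": (51.5, -0.1), "Leicester": (52.6, -1.1),
          "Edinburgh": (55.9, -3.2), "Cardiff": (51.5, -3.2)}

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

activations = []
for name in places:
    ids = tok(name, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    # Mean-pool the final hidden layer over the placename's tokens.
    activations.append(out.hidden_states[-1][0].mean(dim=0).numpy())
activations = np.stack(activations)

# Spatial weights from k-nearest neighbours, then Moran's I for a single
# activation dimension: positive values suggest spatial autocorrelation.
coords = np.array(list(places.values()))
w = KNN.from_array(coords, k=2)
print(Moran(activations[:, 0], w).I)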

No commits in the last 6 months.

Use this if you are a researcher or academic interested in dissecting how large language models handle geographic data internally and in understanding their spatial reasoning.

Not ideal if you are looking for a tool to directly improve an LLM's geographical accuracy or to simply apply LLMs to geospatial tasks without needing to understand their internal representations.

geographical-AI LLM-interpretability spatial-reasoning computational-geography AI-ethics-and-bias
Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 6 / 25
Maturity 15 / 25
Community 0 / 25

Stars: 18
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: May 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/stefdesabbata/geospatial-mechanistic-interpretability"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
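
For scripted access, a minimal Python equivalent of the curl call above might look like the following; the structure of the JSON response is not documented here, so the code simply prints whatever comes back.

import requests

# Same endpoint as the curl example; no API key needed up to 100 requests/day.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "llm-tools/stefdesabbata/geospatial-mechanistic-interpretability")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # inspect the returned quality metrics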