AlphaPav/mem-kk-logic

On Memorization of Large Language Models in Logical Reasoning

37 / 100 (Emerging)

This project helps AI researchers understand how large language models (LLMs) solve logical reasoning puzzles, specifically 'Knights and Knaves' problems. It evaluates an LLM's performance on these puzzles and on systematically perturbed versions of them, and reports whether the model is genuinely reasoning or merely memorizing its training data. It is aimed at AI researchers and cognitive scientists working with LLMs.

No commits in the last 6 months.

Use this if you are an AI researcher investigating whether an LLM's logical reasoning ability is due to genuine understanding or simply memorizing training examples.

Not ideal if you are looking for a tool that directly improves an LLM's performance on a real-world reasoning task, as this project focuses on analysis rather than application.

AI-research LLM-evaluation cognitive-science reasoning-analysis machine-learning-research
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 12 / 25


Stars: 76
Forks: 8
Language: Python
License: MIT
Last pushed: Mar 29, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AlphaPav/mem-kk-logic"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
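If you prefer to call the endpoint from a script rather than curl, a minimal Python sketch follows. The URL layout is taken from the curl example above; the JSON response shape (a `score` field plus a `breakdown` of the four sub-scores) is an assumption for illustration, since the real schema is not documented on this page.

```python
import json
from urllib.parse import quote

# Base endpoint, copied from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repo (path layout from the curl example)."""
    return f"{BASE}/{quote(registry)}/{quote(repo)}"

# Hypothetical response body -- field names are assumptions, not the documented schema.
sample = json.loads(
    '{"score": 37, "breakdown": '
    '{"maintenance": 0, "adoption": 9, "maturity": 16, "community": 12}}'
)

def total_from_breakdown(resp: dict) -> int:
    """Sum the four sub-scores (each out of 25) into the 0-100 total."""
    return sum(resp["breakdown"].values())

print(quality_url("transformers", "AlphaPav/mem-kk-logic"))
print(total_from_breakdown(sample))  # 0 + 9 + 16 + 12 = 37, matching the page's score
```

To fetch live data, pass the built URL to any HTTP client (e.g. `urllib.request.urlopen` or `requests.get`) and decode the JSON body.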