AikyamLab/llm-memorization

Understanding the memorization property of Large Language Models using Model Attribution

Quality score: 21 / 100 (Experimental)

This project helps AI researchers and developers understand why Large Language Models (LLMs) memorize their training data. By analyzing the model's internal architecture, specifically its attention modules, it shows which parts of an LLM are responsible for memorization versus generalization. It takes an LLM and tokenized memorized data as input and outputs insights into the architectural factors that influence memorization and performance, enabling more ethical and privacy-conscious LLM deployment.
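
As a rough illustration of what attention-module analysis involves, the sketch below loads a small causal LM with Hugging Face transformers and measures per-layer attention entropy on a candidate sequence. The model name ("gpt2"), the sample text, and the entropy heuristic are illustrative assumptions, not this repository's actual method or API.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice; any causal LM exposing attentions works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The quick brown fox jumps over the lazy dog"  # stand-in for memorized data
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions holds one (batch, heads, seq, seq) tensor per layer; each row
# is a distribution over earlier positions. Low entropy (sharply peaked heads)
# is one heuristic signal of rote recall rather than generalization.
for layer_idx, attn in enumerate(out.attentions):
    probs = attn.clamp_min(1e-9)                     # guard against log(0)
    entropy = -(probs * probs.log()).sum(-1).mean()  # average over heads/positions
    print(f"layer {layer_idx:2d}: mean attention entropy = {entropy.item():.3f}")

Comparing such per-layer statistics between memorized and novel inputs is the kind of attribution signal this project targets.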

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer designing and deploying Large Language Models and need to diagnose and mitigate data memorization to improve privacy and ethical compliance.

Not ideal if you are an end user who simply wants to use an LLM and has no need to understand or modify its internal, memorization-related architecture.

Tags: Large Language Models, AI Ethics, Model Privacy, Neural Network Interpretability, Machine Learning Research
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 0 / 25
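
The four category scores (each out of 25) sum to the overall rating: 0 + 5 + 16 + 0 = 21 / 100.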

Stars: 9
Forks:
Language: Python
License: MIT
Last pushed: Mar 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AikyamLab/llm-memorization"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
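
For programmatic use, a minimal Python equivalent of the curl call above might look like the sketch below; only the endpoint URL comes from the documentation here, and the response schema is not documented, so the code simply prints whatever JSON comes back.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/AikyamLab/llm-memorization")
resp = requests.get(url, timeout=10)  # anonymous tier: 100 requests/day
resp.raise_for_status()
data = resp.json()
print(data)  # inspect the returned fields; their names are not documented above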