kmeng01/memit
Mass-editing thousands of facts into a transformer memory (ICLR 2023)
This project lets machine learning engineers and researchers update the factual knowledge stored inside large language models. You supply factual corrections or new information, like changing "LeBron James plays football" to "LeBron James plays basketball," and it outputs a modified language model that reflects those changes, scaling to thousands of edits at once. It is designed for people who work directly with transformer-based language models and need to inject new facts or correct misinformation.
543 stars. No commits in the last 6 months.
Use this if you need to programmatically edit many facts directly in a pre-trained transformer's weights without retraining the entire model.
Not ideal if you are an end-user of a language model and don't have the technical expertise to work with model weights and Python code.
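The core idea behind this line of work (ROME/MEMIT) is to treat a transformer MLP layer as a linear key-value memory and apply a closed-form weight update so the layer maps a subject's "key" vector to a new fact "value" vector. Below is a toy NumPy sketch of that rank-one update, assuming nothing about the repository's actual API; the real method additionally uses covariance statistics over many keys and spreads batched edits across several layers.

```python
import numpy as np

def edit_fact(W, k, v_new):
    """Rank-one edit: return W' such that W' @ k == v_new.

    Toy illustration of the memory-editing idea, not the repo's code.
    """
    residual = v_new - W @ k                 # what the layer currently gets wrong
    delta = np.outer(residual, k) / (k @ k)  # minimal-norm rank-one correction
    return W + delta

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # stand-in for an MLP projection matrix
k = rng.normal(size=8)        # key encoding a subject, e.g. "LeBron James"
v_new = rng.normal(size=8)    # value encoding the corrected fact

W_edited = edit_fact(W, k, v_new)
assert np.allclose(W_edited @ k, v_new)  # the edited layer recalls the new fact
```

The update leaves directions orthogonal to `k` untouched, which is why a single fact can be rewritten with limited collateral damage; MEMIT's contribution is making this stable for thousands of simultaneous edits.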
Stars
543
Forks
72
Language
Python
License
MIT
Category
Last pushed
Jan 31, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kmeng01/memit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
steering-vectors/steering-vectors
Steering vectors for transformer language models in Pytorch / Huggingface
jianghoucheng/AlphaEdit
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper)
boyiwei/alignment-attribution-code
[ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications
jianghoucheng/AnyEdit
AnyEdit: Edit Any Knowledge Encoded in Language Models, ICML 2025
zjunlp/KnowledgeCircuits
[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers