kmeng01/rome

Locating and editing factual associations in GPT (NeurIPS 2022)

51 / 100 (Established)

This project helps machine learning researchers and practitioners understand and precisely modify the factual knowledge stored within large language models such as GPT-2 and GPT-J. You input a pre-trained GPT model and a factual statement you want to change (e.g., 'LeBron James plays football'), and it outputs a modified model that generates the new fact, along with causal-tracing insights into where the model stores that information. It is aimed at those working on the accuracy and controllability of large AI text generators.
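At its core, ROME edits a fact by applying a rank-one update to a single MLP projection matrix: it picks a key vector k* (representing the subject) and a value vector v* (encoding the new fact), then changes the weights as little as possible, in a norm weighted by the inverse key covariance C, while forcing the layer to map k* to v*. The sketch below illustrates just that linear-algebra step with random stand-in vectors; it is not the repo's API, and the identity covariance is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 64, 48

W = rng.normal(size=(d_out, d_in))   # existing MLP projection weights
C = np.eye(d_in)                     # stand-in for the key covariance E[k k^T]
k_star = rng.normal(size=d_in)       # key vector representing the subject
v_star = rng.normal(size=d_out)      # value vector encoding the new fact

# Rank-one edit: choose Lambda so that W_new @ k_star == v_star, while
# perturbing W minimally in the C^{-1}-weighted norm.
Cinv_k = np.linalg.solve(C, k_star)
Lambda = (v_star - W @ k_star) / (Cinv_k @ k_star)
W_new = W + np.outer(Lambda, Cinv_k)

print(np.allclose(W_new @ k_star, v_star))  # the edited weights now map k* to v*
```

Because the update is rank one, the rest of the model's behavior is largely preserved; only inputs aligned with k* see a meaningfully different output.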

737 stars. No commits in the last 6 months.

Use this if you need to directly alter specific factual associations within a large language model without retraining the entire model, or if you want to trace how these models store and retrieve information.

Not ideal if you're looking for a general-purpose fine-tuning solution for task-specific adaptations, or if you don't work with PyTorch-based HuggingFace transformer models.

large-language-models AI-model-editing factual-knowledge-injection model-interpretability natural-language-processing
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars: 737
Forks: 162
Language: Python
License: MIT
Last pushed: Apr 20, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kmeng01/rome"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.