PurCL/muke

[COLM 2025] Official implementation of μKE - edit LLM knowledge while preserving memory dependencies via Matryoshka-style objectives.

Quality score: 15 / 100 (Experimental)

When a Large Language Model (LLM) provides incorrect or outdated information, or exhibits unsafe behavior, this tool helps you update its knowledge without expensive retraining. You input the LLM and the specific factual changes you need to make, and it outputs a modified LLM that has learned the new information while preserving its existing knowledge dependencies. This is for researchers and developers working with and fine-tuning LLMs.

No commits in the last 6 months.

Use this if you need to efficiently correct or update factual knowledge within an LLM while maintaining the model's overall coherence and avoiding unintended disruptions to its memory.

Not ideal if you need to train a brand new model from scratch or make broad, foundational changes that go beyond targeted factual updates.

Tags: LLM editing, knowledge updating, model fine-tuning, AI safety, natural language processing
Badges: No License · Stale 6m · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 14
Forks:
Language: Python
License: None
Last pushed: Aug 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PurCL/muke"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
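The same quality report can be fetched from Python using only the standard library. This is a minimal sketch: the endpoint URL is taken verbatim from the curl example above, but the shape of the JSON response (its field names and nesting) is not documented here, so the code simply returns whatever the API sends back.

```python
import json
import urllib.request

# Endpoint copied from the curl example on this page.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/PurCL/muke"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality report as parsed JSON.

    No API key is required for up to 100 requests/day; with a free key
    the limit rises to 1,000/day (per the note above). How a key would
    be passed (header vs. query parameter) is not specified here.
    """
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    report = fetch_quality()
    print(json.dumps(report, indent=2))
```

Inspect the printed JSON once to learn the actual field names before building anything on top of them.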