zjunlp/KnowUnDo
[EMNLP 2024] To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models
This project helps developers of large language models (LLMs) remove specific, sensitive information from their models without accidentally deleting essential, unrelated knowledge. It provides a benchmark dataset and a method, MemFlex, to unlearn information like copyrighted content or private user data. The output is an LLM that has "forgotten" particular details while retaining general knowledge, which is useful for machine learning engineers and data scientists building and maintaining LLMs.
No commits in the last 6 months.
Use this if you need to precisely remove specific facts or sensitive data from your large language model to comply with privacy regulations or copyright, without degrading its overall performance.
Not ideal if you're looking for a user-friendly application to directly interact with an LLM, as this is a developer tool for modifying the underlying model.
Stars: 47
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Jan 23, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/KnowUnDo"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
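For programmatic access, the endpoint URL above can be built from an owner/repo slug. Below is a minimal Python sketch; the `build_quality_url` helper and the assumption that the path segments are simply the URL-encoded owner and repository names are illustrative, based only on the single example endpoint shown above.

```python
import urllib.parse

# Base path taken from the curl example above
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def build_quality_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a GitHub-style owner/repo slug.

    Assumes (unverified) that the API path is simply /<owner>/<repo>
    appended to the base, with each segment URL-encoded.
    """
    return f"{API_BASE}/{urllib.parse.quote(owner)}/{urllib.parse.quote(repo)}"

# Example: reproduces the KnowUnDo endpoint from the curl command above
url = build_quality_url("zjunlp", "KnowUnDo")
print(url)  # https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/KnowUnDo
```

From there, any HTTP client (e.g. `urllib.request` or `requests`) can fetch the URL; the response format is not documented here, so inspect it before parsing.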
Higher-rated alternatives
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
zjunlp/CaKE
[EMNLP 2025] Circuit-Aware Editing Enables Generalizable Knowledge Learners
zjunlp/unlearn
[ACL 2025] Knowledge Unlearning for Large Language Models
OFA-Sys/Ditto
A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language...
zjunlp/AutoSteer
[EMNLP 2025] AutoSteer: Automating Steering for Safe Multimodal Large Language Models