zjunlp/unlearn
[ACL 2025] Knowledge Unlearning for Large Language Models
This project helps machine learning engineers and researchers remove specific information or sensitive content from large language models (LLMs) after they've been trained. It takes a pre-trained LLM and a dataset of information you want the model to "forget," and outputs a modified LLM that no longer retains that specific knowledge. This is for professionals who develop and manage LLMs and need to ensure data privacy or correct factual errors.
No commits in the last 6 months.
Use this if you need to selectively remove certain knowledge, sensitive data, or outdated information from a large language model without retraining it from scratch.
Not ideal if you are looking to fine-tune a model for new tasks or add new knowledge, as its primary purpose is information removal.
Stars: 48
Forks: 7
Language: Python
License: MIT
Category: —
Last pushed: Sep 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/unlearn"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
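For scripted use, the same endpoint can be queried from Python with only the standard library. This is a minimal sketch: the URL pattern is taken from the curl example above, but the JSON response schema is not documented on this page, so the payload is printed as-is rather than parsed into specific fields.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_stats(owner: str, repo: str) -> dict:
    """Fetch and parse the JSON payload for a repository.

    Raises OSError on network failure; field names in the returned
    dict depend on the (undocumented) API schema.
    """
    with urllib.request.urlopen(build_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    try:
        print(json.dumps(fetch_stats("zjunlp", "unlearn"), indent=2))
    except OSError as err:  # no network, DNS failure, or HTTP error
        print(f"request failed: {err}")
```

Without an API key, requests count against the shared 100/day limit, so cache responses rather than polling.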
Higher-rated alternatives
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
zjunlp/CaKE
[EMNLP 2025] Circuit-Aware Editing Enables Generalizable Knowledge Learners
OFA-Sys/Ditto
A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language...
zjunlp/AutoSteer
[EMNLP 2025] AutoSteer: Automating Steering for Safe Multimodal Large Language Models
VinAIResearch/HPR
Householder Pseudo-Rotation: A Novel Approach to Activation Editing in LLMs with...