unlearn and KnowUnDo
These two projects are **sibling efforts from the same research group** (the Zhejiang University NLP group, `zjunlp`), not competitors: each offers a distinct algorithmic approach and framework for knowledge unlearning in large language models. They were presented at top-tier NLP conferences in successive years — KnowUnDo at EMNLP 2024 and unlearn at ACL 2025.
About unlearn
zjunlp/unlearn
[ACL 2025] Knowledge Unlearning for Large Language Models
This project helps machine learning engineers and researchers remove specific information or sensitive content from large language models (LLMs) after they've been trained. It takes a pre-trained LLM and a dataset of information you want the model to "forget," and outputs a modified LLM that no longer retains that specific knowledge. This is for professionals who develop and manage LLMs and need to ensure data privacy or correct factual errors.
About KnowUnDo
zjunlp/KnowUnDo
[EMNLP 2024] To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models
This project helps developers of large language models (LLMs) remove specific, sensitive information from their models without accidentally deleting essential, unrelated knowledge. It provides a benchmark dataset and a method, MemFlex, to unlearn information like copyrighted content or private user data. The output is an LLM that has "forgotten" particular details while retaining general knowledge, which is useful for machine learning engineers and data scientists building and maintaining LLMs.