zjunlp/KnowUnDo

[EMNLP 2024] To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models

Score: 27 / 100 (Experimental)

This project helps developers of large language models (LLMs) remove specific, sensitive information from their models without accidentally deleting essential, unrelated knowledge. It provides a benchmark dataset and a method, MemFlex, to unlearn information like copyrighted content or private user data. The output is an LLM that has "forgotten" particular details while retaining general knowledge, which is useful for machine learning engineers and data scientists building and maintaining LLMs.
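To make the idea concrete, here is a minimal sketch of the *generic* gradient-ascent unlearning baseline on a toy logistic-regression model — this is NOT the repo's MemFlex method, and all data here is synthetic. The intuition it illustrates: ascend the loss on a "forget" set while descending on a "retain" set, so targeted knowledge degrades without wiping the rest.

```python
# Toy gradient-ascent unlearning baseline (illustrative only; NOT MemFlex).
# All datasets below are synthetic stand-ins for "retain" and "forget" data.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y):
    """Mean binary cross-entropy loss and its gradient."""
    p = sigmoid(X @ w)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# Synthetic "retain" knowledge (feature 0) and "forget" knowledge (feature 1).
X_retain = rng.normal(size=(64, 4)); y_retain = (X_retain[:, 0] > 0).astype(float)
X_forget = rng.normal(size=(16, 4)); y_forget = (X_forget[:, 1] > 0).astype(float)

# Step 1: train on everything jointly.
w = np.zeros(4)
X_all = np.vstack([X_retain, X_forget]); y_all = np.concatenate([y_retain, y_forget])
for _ in range(200):
    _, g = loss_and_grad(w, X_all, y_all)
    w -= 0.5 * g

before_forget, _ = loss_and_grad(w, X_forget, y_forget)

# Step 2: unlearn — gradient ASCENT on the forget set, DESCENT on the retain set.
for _ in range(50):
    _, g_f = loss_and_grad(w, X_forget, y_forget)
    _, g_r = loss_and_grad(w, X_retain, y_retain)
    w += 0.1 * g_f  # push loss up on facts we want forgotten
    w -= 0.1 * g_r  # keep retained knowledge intact

after_forget, _ = loss_and_grad(w, X_forget, y_forget)
print(f"forget-set loss: {before_forget:.3f} -> {after_forget:.3f}")
```

Because the retain and forget labels here depend on different features, the retain-set descent barely interferes with the forget-set ascent; real LLM unlearning is much harder precisely because knowledge is entangled, which is the problem the repo's benchmark measures.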

No commits in the last 6 months.

Use this if you need to precisely remove specific facts or sensitive data from your large language model to comply with privacy regulations or copyright law, without degrading its overall performance.

Not ideal if you're looking for a user-friendly application to directly interact with an LLM, as this is a developer tool for modifying the underlying model.

Tags: LLM development, data privacy, model fine-tuning, knowledge management, AI ethics
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 3 / 25


Stars: 47
Forks: 1
Language: Python
License: MIT
Last pushed: Jan 23, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/KnowUnDo"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
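The curl call above can be reproduced from Python. A small hypothetical helper is sketched below — only the endpoint URL comes from this card; the structure of the JSON response is not documented here, so it is decoded generically rather than into named fields.

```python
# Hypothetical helper for the pt-edge quality API (endpoint from the card
# above; the JSON response schema is undocumented, so we parse it generically).
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repo, e.g. llm-tools/zjunlp/KnowUnDo."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (field names unspecified)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("llm-tools", "zjunlp", "KnowUnDo"))
```

Note the free tier's 100 requests/day limit; batch lookups should cache responses rather than re-fetch.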