UCSB-NLP-Chang/ULD

Implementation of paper 'Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference' [NeurIPS'24]

Score: 33 / 100 (Emerging)

This project helps machine learning engineers and researchers manage sensitive or outdated information in large language models (LLMs). It provides a framework to efficiently 'unlearn' specific data from an LLM, such as proprietary information or biased content. You input your existing LLM and the data you want it to forget, and it outputs a modified LLM that no longer retains that information, while preserving its general knowledge.

No commits in the last 6 months.

Use this if you need to remove specific factual knowledge or biases from a pre-trained large language model without retraining it from scratch, ensuring data privacy or regulatory compliance.

Not ideal if you are a business user or general data scientist looking for a no-code solution to filter content from LLMs, as this requires deep familiarity with LLM training and infrastructure.

Tags: LLM fine-tuning, model privacy, responsible AI, data governance, AI model management
Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 26
Forks: 3
Language: Python
License: MIT
Last pushed: Jun 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UCSB-NLP-Chang/ULD"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.