UCSB-NLP-Chang/ULD
Implementation of paper 'Reversing the Forget-Retain Objectives: An Efficient LLM Unlearning Framework from Logit Difference' [NeurIPS'24]
This project helps machine learning engineers and researchers manage sensitive or outdated information in large language models (LLMs). It provides a framework to efficiently 'unlearn' specific data from an LLM, such as proprietary information or biased content. You input your existing LLM and the data you want it to forget, and it outputs a modified LLM that no longer retains that information, while preserving its general knowledge.
No commits in the last 6 months.
Use this if you need to remove specific factual knowledge or biases from a pre-trained large language model without retraining it from scratch, ensuring data privacy or regulatory compliance.
Not ideal if you are a business user or general data scientist looking for a no-code solution to filter content from LLMs, as this requires deep familiarity with LLM training and infrastructure.
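The paper's title points at the core mechanism: the unlearned model's output is derived from a logit difference, where an assistant model trained with the forget/retain objectives reversed has its logits subtracted from the target model's. A minimal toy sketch of that combination step, assuming per-token logit vectors (the function name, `alpha` weight, and all values are illustrative assumptions, not the repository's actual API):

```python
# Illustrative sketch of ULD-style logit-difference unlearning.
# All names and values here are hypothetical; see the repo for the real API.

def unlearned_logits(target_logits, assistant_logits, alpha=1.0):
    """Subtract the assistant model's logits (trained with reversed
    forget/retain objectives) from the target model's logits."""
    return [t - alpha * a for t, a in zip(target_logits, assistant_logits)]

# Toy vocabulary of 4 tokens: the assistant is confident about the token
# to be forgotten (index 1), so subtraction suppresses it in the output.
target = [2.0, 5.0, 1.0, 0.5]      # original LLM logits
assistant = [0.0, 4.0, 0.0, 0.0]   # assistant peaks on the forget token
print(unlearned_logits(target, assistant))  # [2.0, 1.0, 1.0, 0.5]
```

After subtraction, the forget token no longer dominates the distribution while the other logits are left untouched, which is the intuition behind preserving general knowledge.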
Stars
26
Forks
3
Language
Python
License
MIT
Category
Last pushed
Jun 14, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UCSB-NLP-Chang/ULD"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
rasbt/LLMs-from-scratch
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
facebookresearch/LayerSkip
Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024
FareedKhan-dev/train-llm-from-scratch
A straightforward method for training your LLM, from downloading data to generating text.
kmeng01/rome
Locating and editing factual associations in GPT (NeurIPS 2022)
datawhalechina/llms-from-scratch-cn
Build a large language model from scratch with only basic Python; step-by-step implementations of GLM4, Llama3, and RWKV6 for a deep understanding of how large models work.