molereddy/Alternate-Preference-Optimization

[COLING 2025] code for "Alternate Preference Optimization for Unlearning Factual Knowledge in Large Language Models".

Score: 13 / 100 (Experimental)

This project helps AI developers and researchers refine large language models (LLMs) by selectively removing specific factual knowledge without damaging other capabilities. You provide a trained LLM and define the information you want to unlearn. The output is a modified LLM that no longer contains the specified facts, ready for deployment or further evaluation. This is ideal for those managing responsible AI development or fine-tuning models.

No commits in the last 6 months.

Use this if you need to erase particular factual information from a large language model to enhance privacy, reduce bias, or correct outdated data.

Not ideal if you're looking for a simple, no-code solution to filter LLM outputs or to prevent a model from generating certain content without altering its underlying knowledge.

large-language-models model-fine-tuning responsible-AI AI-ethics knowledge-unlearning
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 0 / 25

How are scores calculated?

Stars: 10
Forks:
Language: Python
License: None
Last pushed: Jan 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/molereddy/Alternate-Preference-Optimization"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
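The curl command above can also be called from a script. Below is a minimal sketch of building the endpoint URL and parsing a response; the response schema shown is a hypothetical example mirroring the scores on this page, not documented API output.

```python
import json

# Host and path pattern taken from the curl example above; the response
# schema below is an assumption, not documented API output.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a given repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "molereddy", "Alternate-Preference-Optimization")

# Hypothetical JSON body, mirroring the score breakdown shown on this page.
payload = """
{"score": 13,
 "breakdown": {"maintenance": 0, "adoption": 5, "maturity": 8, "community": 0}}
"""
data = json.loads(payload)

# The four 0-25 sub-scores sum to the overall score: 0 + 5 + 8 + 0 = 13.
total = sum(data["breakdown"].values())
```

In a live script you would fetch `url` with `urllib.request.urlopen` or a similar HTTP client, keeping within the 100 requests/day unauthenticated limit.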