zjunlp/PitfallsKnowledgeEditing

[ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models

Score: 30 / 100 (Emerging)

This project evaluates the problems that arise when updating factual knowledge within large language models (LLMs) without full retraining. It takes structured data representing the new facts or relationships to be edited into an LLM and outputs metrics and visualizations that expose 'knowledge conflicts' (edits that contradict each other) and 'knowledge distortions' (edits that unintentionally break existing, correct knowledge). It is aimed at researchers and engineers deploying and maintaining LLMs in real-world applications where factual accuracy and consistency are crucial.
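
The repository's exact input schema is defined by its own benchmark files; as a rough, hypothetical illustration (the class and field names below are not the project's actual API), a knowledge edit can be thought of as a subject-relation rewrite, and two edits "conflict" when they assign different objects to the same subject and relation:

# Hypothetical sketch, not the repository's actual schema: a knowledge edit as a
# (subject, relation, new_object) record, plus a naive check for conflicts between
# two edits that rewrite the same fact differently.
from dataclasses import dataclass

@dataclass(frozen=True)
class KnowledgeEdit:
    subject: str      # e.g. "Marie Curie"
    relation: str     # e.g. "place of birth"
    new_object: str   # the fact the model should state after editing

def conflicts(a: KnowledgeEdit, b: KnowledgeEdit) -> bool:
    # Two edits conflict if they give different objects to the same (subject, relation).
    return (a.subject, a.relation) == (b.subject, b.relation) and a.new_object != b.new_object

edits = [
    KnowledgeEdit("Marie Curie", "place of birth", "Warsaw"),
    KnowledgeEdit("Marie Curie", "place of birth", "Paris"),
]
print(conflicts(edits[0], edits[1]))  # True: the second edit contradicts the first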

No commits in the last 6 months.

Use this if you are actively performing knowledge editing on LLMs and need to understand and mitigate unintended side effects like conflicts or distortions in the model's knowledge base.

Not ideal if you are looking for a tool to perform the knowledge editing itself, as this project focuses on evaluating the pitfalls rather than implementing the editing methods.

LLM fine-tuning, AI model evaluation, knowledge representation, natural language processing, model maintenance
Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 8 / 25

Stars: 22
Forks: 2
Language: Python
License: MIT
Last pushed: Jun 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/PitfallsKnowledgeEditing"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
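
A minimal Python sketch of the same request (only the endpoint above is taken from this page; the response schema is not documented here, so the snippet simply pretty-prints whatever JSON comes back):

# Fetch the quality data for this repository and print the JSON response.
import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/PitfallsKnowledgeEditing"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))  # inspect the returned quality metrics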