zjunlp/PitfallsKnowledgeEditing
[ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models
This project evaluates the problems that can arise when the factual knowledge in large language models (LLMs) is updated without full retraining. Given structured data describing new facts or relations to edit into an LLM, it outputs metrics and visualizations of 'knowledge conflicts' (new edits that contradict each other) and 'knowledge distortions' (edits that unintentionally break existing, correct knowledge). It is aimed at researchers and engineers deploying and maintaining LLMs in real-world applications where factual accuracy and consistency are crucial.
No commits in the last 6 months.
Use this if you are actively performing knowledge editing on LLMs and need to understand and mitigate unintended side effects like conflicts or distortions in the model's knowledge base.
Not ideal if you are looking for a tool to perform the knowledge editing itself, as this project focuses on evaluating the pitfalls rather than implementing the editing methods.
Stars
22
Forks
2
Language
Python
License
MIT
Category
Last pushed
Jun 13, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/PitfallsKnowledgeEditing"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
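The curl command above can be wrapped in a few lines of Python for programmatic access. This is a minimal sketch: it assumes the path pattern `/api/v1/quality/llm-tools/{owner}/{repo}` generalizes to other repositories, and the response schema is not documented here, so the payload is returned as an untyped dict.

```python
# Minimal sketch of querying the quality API for an arbitrary repository.
# Assumptions: the /api/v1/quality/llm-tools/{owner}/{repo} path pattern
# generalizes beyond this one project; the JSON response schema is unknown.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (schema unspecified by the listing)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the documented endpoint for this project.
    print(quality_url("zjunlp", "PitfallsKnowledgeEditing"))
```

Keys beyond the free 100 requests/day tier would typically be passed as a header or query parameter, but the listing does not specify which, so that part is left out.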
Higher-rated alternatives
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
zjunlp/CaKE
[EMNLP 2025] Circuit-Aware Editing Enables Generalizable Knowledge Learners
zjunlp/unlearn
[ACL 2025] Knowledge Unlearning for Large Language Models
OFA-Sys/Ditto
A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language...
zjunlp/AutoSteer
[EMNLP 2025] AutoSteer: Automating Steering for Safe Multimodal Large Language Models