zjunlp/CaKE
[EMNLP 2025] Circuit-Aware Editing Enables Generalizable Knowledge Learners
This tool is for researchers and developers working with large language models who want to improve a model's ability to answer complex, multi-step reasoning questions. It takes existing knowledge bases and model outputs, then refines the model's internal 'circuits' to produce more accurate and generalizable answers, especially for questions that require 'hopping' between facts.
Use this if you are developing or evaluating language models and need to enhance their performance on multi-hop reasoning tasks.
Not ideal if you are looking for an off-the-shelf solution for general knowledge retrieval or simple fact-checking.
Stars
19
Forks
3
Language
Python
License
MIT
Category
Last pushed
Nov 17, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/CaKE"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
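Beyond the curl one-liner above, the same endpoint can be queried programmatically. The sketch below is a minimal Python example, assuming only the URL pattern shown on this page; the shape of the JSON payload is not documented here, so the code just decodes whatever JSON the server returns.

```python
# Minimal sketch of calling the pt-edge quality API for a repository.
# Only the URL pattern comes from the page's curl example; the JSON
# response schema is an assumption and is returned as a plain dict.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access)."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Network call intentionally left behind the main guard;
    # within the free tier this needs no API key (100 requests/day).
    print(build_url("zjunlp", "CaKE"))
```

If you have registered for a free key (1,000 requests/day), it would presumably be attached as a header or query parameter; check the API's own documentation, since that detail is not stated on this page.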
Higher-rated alternatives
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
zjunlp/unlearn
[ACL 2025] Knowledge Unlearning for Large Language Models
OFA-Sys/Ditto
A self-alignment method for role-play. Benchmark for role-play. Resources for "Large Language...
zjunlp/AutoSteer
[EMNLP 2025] AutoSteer: Automating Steering for Safe Multimodal Large Language Models
VinAIResearch/HPR
Householder Pseudo-Rotation: A Novel Approach to Activation Editing in LLMs with...