YJiangcm/LTE
[ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing
This framework helps developers update the knowledge of large language models (LLMs) without retraining the entire model. You provide a dataset of correct information, and the framework teaches the LLM to apply these updates when answering questions. The output is a fine-tuned LLM that accurately incorporates the new knowledge while maintaining its general conversational abilities. This is for AI/ML engineers or researchers who manage and deploy LLMs.
No commits in the last 6 months.
Use this if you need to efficiently update factual information within an existing LLM, such as correcting outdated data or adding new details, without incurring the high computational cost of full model retraining.
Not ideal if you're looking for a user-friendly tool to casually tweak an LLM's behavior without writing code, or if you need to make extensive, broad changes to an LLM's foundational understanding rather than targeted knowledge updates.
Stars: 37
Forks: 4
Language: Python
License: Apache-2.0
Category:
Last pushed: Aug 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/YJiangcm/LTE"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
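The same endpoint can be called from Python. This is a minimal sketch assuming the API returns JSON (the response schema is not documented here); the helper names `quality_url` and `fetch_quality` are hypothetical, chosen for illustration:

```python
import json
import urllib.request
from urllib.parse import quote, urljoin

# Base of the quality endpoint shown in the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the per-repo quality URL; path segments are URL-escaped."""
    return urljoin(API_BASE, f"{quote(ecosystem)}/{quote(repo, safe='/')}")

def fetch_quality(ecosystem: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch one repo's quality record, assuming a JSON response body."""
    with urllib.request.urlopen(quality_url(ecosystem, repo), timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Equivalent to the curl command above.
    print(fetch_quality("transformers", "YJiangcm/LTE"))
```

No API key is passed here, which stays within the 100 requests/day anonymous tier; how a key would be supplied (header or query parameter) is not specified on this page.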
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.