YJiangcm/LTE

[ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing

Score: 33 / 100 (Emerging)

This framework helps developers update the knowledge of large language models (LLMs) without retraining the entire model. You provide a dataset of correct information, and the framework teaches the LLM to apply these updates when answering questions. The output is a fine-tuned LLM that accurately incorporates the new knowledge while maintaining its general conversational abilities. This is for AI/ML engineers or researchers who manage and deploy LLMs.

No commits in the last 6 months.

Use this if you need to efficiently update factual information within an existing LLM, such as correcting outdated data or adding new details, without incurring the high computational cost of full model retraining.

Not ideal if you're looking for a user-friendly tool to casually tweak an LLM's behavior without writing code, or if you need broad changes to an LLM's foundational understanding rather than targeted knowledge updates.

Tags: LLM-fine-tuning, model-alignment, knowledge-base-management, AI-model-maintenance, machine-learning-engineering
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 37
Forks: 4
Language: Python
License: Apache-2.0
Last pushed: Aug 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/YJiangcm/LTE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
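The curl command above maps directly to a plain GET request. Here is a minimal Python sketch using only the standard library; the helper names are ours, and the assumption that the endpoint returns JSON is not confirmed by any API documentation:

```python
# Minimal sketch of calling the quality API from Python.
# Assumes the /api/v1/quality/{ecosystem}/{owner}/{repo} path shown in the
# curl example, and that the endpoint returns a JSON body (an assumption).
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, repo: str) -> str:
    """Build the API URL for a repository, e.g. ('transformers', 'YJiangcm/LTE')."""
    return f"{API_BASE}/{ecosystem}/{repo}"


def fetch_quality(ecosystem: str, repo: str) -> dict:
    """Fetch and decode the quality report (no API key needed up to 100 req/day)."""
    with urllib.request.urlopen(quality_url(ecosystem, repo), timeout=10) as resp:
        return json.load(resp)
```

With a free key (1,000 requests/day), you would presumably pass it as a header or query parameter; the exact mechanism is not specified on this page, so check the API's own documentation before relying on it.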