gao-g/prelude

Code for the paper "Aligning LLM Agents by Learning Latent Preference from User Edits".

Score: 27 / 100 (Experimental)

This project helps AI developers and researchers refine how large language models (LLMs) learn user preferences. It takes in pairs of initial LLM outputs and subsequent user edits, then outputs an improved LLM agent that better anticipates user preferences. Developers working on LLM applications like summarization or email drafting would use this to make their models more aligned with human expectations.

No commits in the last 6 months.

Use this if you are developing or fine-tuning LLM agents and need a systematic way to incorporate user feedback and edits into their learning process.

Not ideal if you are an end-user looking for a ready-to-use application, as this project requires development expertise to implement and integrate agents.

Tags: LLM development, AI alignment, machine learning research, natural language processing, model refinement
Badges: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 3 / 25


Stars: 45
Forks: 1
Language: Python
License: MIT
Last pushed: Nov 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/gao-g/prelude"

Open to everyone: 100 requests/day with no key. Get a free key for 1,000 requests/day.
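For programmatic use, the curl command above can be wrapped in a few lines of Python. This is a minimal sketch: the endpoint URL comes from the example above, but the JSON response schema and the header name for the keyed tier (`X-Api-Key` here) are assumptions, so check the API's own documentation before relying on them.

```python
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    # Build the per-repo quality endpoint, e.g.
    # .../quality/transformers/gao-g/prelude
    return f"{BASE}/{ecosystem}/{repo}"

def fetch_quality(ecosystem: str, repo: str, api_key: str = "") -> dict:
    # "X-Api-Key" is a hypothetical header name for the 1,000/day keyed tier;
    # the keyless tier (100 requests/day) needs no headers at all.
    headers = {"X-Api-Key": api_key} if api_key else {}
    req = Request(quality_url(ecosystem, repo), headers=headers)
    with urlopen(req) as resp:
        return json.load(resp)

print(quality_url("transformers", "gao-g/prelude"))
```

The URL builder is separated from the network call so the request target can be inspected or logged before any quota is spent.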