jlin816/rewards-from-language

Code and data for "Inferring Rewards from Language in Context" [ACL 2022].

Quality score: 22 / 100 (Experimental)

This project helps build intelligent systems that can understand a user's underlying preferences, not just their direct commands. By analyzing how people phrase requests, it can infer their general likes and dislikes. This allows the system to make better decisions in new situations, acting more like a helpful assistant than a simple instruction-follower. It's designed for researchers and developers working on AI agents or recommendation systems who want to build more intuitive and adaptive user experiences.

No commits in the last 6 months.

Use this if you are developing AI systems that need to learn user preferences from natural language to predict optimal actions in varied scenarios, beyond just direct commands.

Not ideal if you only need to process direct, unambiguous instructions where user preferences are not a factor in decision-making.

Tags: AI agent development, natural language understanding, user preference modeling, adaptive systems, human-AI interaction
Flags: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 16
Forks:
Language: Python
License: MIT
Last pushed: May 22, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/jlin816/rewards-from-language"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
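The endpoint above can also be called programmatically. A minimal Python sketch that builds the request URL from its apparent `category/owner/repo` path segments; this structure is an assumption inferred from the single example above, and the fields of the JSON response are not documented here:

```python
def quality_url(category: str, owner: str, repo: str,
                base: str = "https://pt-edge.onrender.com/api/v1/quality") -> str:
    """Build the quality-score endpoint URL for a repository.

    The category/owner/repo path layout is inferred from the one
    curl example shown above and may not generalize to other endpoints.
    """
    return f"{base}/{category}/{owner}/{repo}"

# Reproduces the URL from the curl command above.
print(quality_url("nlp", "jlin816", "rewards-from-language"))
```

From here, any HTTP client (e.g. `requests.get(url).json()`) can fetch the score data, subject to the rate limits noted above.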