zjunlp/TRICE

[NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback

Score: 31/100 (Emerging)

This project helps machine learning researchers and practitioners enhance how Large Language Models (LLMs) use external tools. It provides a two-stage training framework that takes an existing LLM and instruction-tuning datasets as input and produces a more capable model that can selectively and effectively use tools based on execution feedback.

No commits in the last 6 months.

Use this if you are a researcher or ML engineer working to improve the reliability and accuracy of LLMs when they interact with and use external software tools or APIs.

Not ideal if you are looking for an off-the-shelf, plug-and-play solution for end-users to apply LLMs directly in business applications without further development.

Tags: LLM training, tool learning, reinforcement learning, model fine-tuning, AI research
Flags: Stale (6m), No Package, No Dependents
Maintenance: 0/25
Adoption: 8/25
Maturity: 16/25
Community: 7/25


Stars: 43
Forks: 3
Language: Python
License: MIT
Last pushed: Mar 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/TRICE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
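For scripted access, the same endpoint can be called from Python. A minimal sketch, assuming only the URL shown in the curl example above; the response schema and the `quality_url` helper are illustrative, not documented API details:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score API URL for a GitHub owner/repo pair.

    Hypothetical helper for illustration; only the URL pattern is
    taken from the listing above.
    """
    return f"{API_BASE}/{owner}/{repo}"


url = quality_url("zjunlp", "TRICE")
print(url)

# To actually fetch the JSON (requires network; field names in the
# response are not documented here, so inspect `data` before use):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

Without an API key this is rate-limited to 100 requests/day, so batch lookups should cache results locally.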