ryoungj/ToolEmu

[ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use

40
/ 100
Emerging

This project helps evaluate Language Model (LM) agents that use external tools to ensure they operate safely and effectively. It takes descriptions of tools and test scenarios, then simulates how an LM agent would interact with them. The output reveals potential risks like data leaks or incorrect actions, along with a helpfulness score. This is for AI product managers, trust & safety engineers, or large language model researchers developing and deploying LM agents.

192 stars. No commits in the last 6 months.

Use this if you need to rapidly test and identify safety risks and performance issues in LM agents that interact with various tools, without needing to implement the tools themselves.

Not ideal if you are looking for a general-purpose testing framework for traditional software applications or if your LM agent does not rely on external tools.

AI Safety LLM Agent Development Risk Assessment Automated Testing Responsible AI
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 14 / 25

How are scores calculated?

Stars

192

Forks

20

Language

Python

License

Apache-2.0

Last pushed

Mar 22, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ryoungj/ToolEmu"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.