ryoungj/ToolEmu
[ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use
This project helps evaluate Language Model (LM) agents that use external tools, checking that they operate safely and effectively. It takes tool descriptions and test scenarios, then emulates how an LM agent would interact with those tools. The output surfaces potential risks, such as data leaks or incorrect actions, along with a helpfulness score. It is aimed at AI product managers, trust & safety engineers, and large language model researchers developing and deploying LM agents.
192 stars. No commits in the last 6 months.
Use this if you need to rapidly test and identify safety risks and performance issues in LM agents that interact with various tools, without needing to implement the tools themselves.
Not ideal if you are looking for a general-purpose testing framework for traditional software applications or if your LM agent does not rely on external tools.
Stars
192
Forks
20
Language
Python
License
Apache-2.0
Last pushed
Mar 22, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ryoungj/ToolEmu"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
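If you prefer to query the endpoint from code rather than curl, a minimal Python sketch is below. The base URL and the free-tier limits come from the listing above; the response schema and the API-key header name are assumptions (the example simply prints whatever JSON the endpoint returns).

```python
import json
from urllib import request

# Endpoint from the listing above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"


def build_request(repo, api_key=None):
    """Build a GET request for a repo's quality data.

    `api_key` is hypothetical: the listing mentions free keys for a
    higher rate limit, but the exact auth header name is an assumption.
    """
    req = request.Request(f"{BASE_URL}/{repo}")
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed header
    return req


if __name__ == "__main__":
    req = build_request("ryoungj/ToolEmu")
    # Network call; requires internet access and a live endpoint.
    with request.urlopen(req) as resp:
        print(json.dumps(json.load(resp), indent=2))
```

Separating request construction from execution keeps the rate-limited call out of any code path that only needs the URL.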
Higher-rated alternatives
microsoft/promptbench
A unified evaluation framework for large language models
uptrain-ai/uptrain
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications....
levitation-opensource/Manipulative-Expression-Recognition
MER is a software that identifies and highlights manipulative communication in text from human...
gabe-mousa/Apolien
AI Safety Evaluation Library