sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
This tool helps you rigorously test how well your customer service AI agents perform in realistic scenarios. You provide your AI agent, and the benchmark simulates multi-turn conversations with it in domains such as airline, retail, and telecom customer support; in the dual-control setup, both the agent and the simulated user can act on the environment. The output is a detailed evaluation of the agent's adherence to domain policies, use of tools, and overall task completion. It is aimed at AI product managers, contact center operations managers, and anyone else responsible for the quality and effectiveness of conversational AI agents.
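To make the "dual-control" idea concrete, here is a minimal, hypothetical sketch of the kind of evaluation loop such a benchmark runs: the agent and a simulated user take turns, either side may call tools that mutate shared environment state, and the episode is scored on whether the task's goal state is reached. All names here (Environment, run_episode, and the step callables) are illustrative only and are not τ²-bench's actual API.

```python
# Illustrative sketch of a dual-control evaluation loop (not the tau2-bench API).
# Both the agent and the simulated user can act on a shared environment,
# and the episode is scored on whether the task's goal state is reached.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Environment:
    state: dict = field(default_factory=dict)

    def call_tool(self, name: str, **kwargs) -> str:
        # A real harness would dispatch to domain-specific tools here;
        # this toy version just records the call in shared state.
        self.state[name] = kwargs
        return f"{name} executed"

def run_episode(agent_step: Callable[[list, Environment], str],
                user_step: Callable[[list, Environment], str],
                env: Environment,
                goal: Callable[[Environment], bool],
                max_turns: int = 20) -> bool:
    """Alternate agent and user turns; return True if the goal state is reached."""
    transcript: list[str] = []
    for _ in range(max_turns):
        transcript.append(agent_step(transcript, env))  # agent may call env tools
        transcript.append(user_step(transcript, env))   # user may also act on env
        if goal(env):
            return True
    return False
```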
829 stars. Actively maintained with 61 commits in the last 30 days.
Use this if you need to objectively benchmark conversational AI agents against defined tasks and policies, whether to confirm they meet operational standards before deployment or to track continuous improvement.
Not ideal if you only need a quick way to smoke-test basic conversational flows, or if your primary goal is to train an agent from scratch rather than measure its performance against specific, realistic tasks.
Stars: 829
Forks: 210
Language: Python
License: MIT
Category:
Last pushed: Mar 11, 2026
Commits (30d): 61
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/sierra-research/tau2-bench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
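If you prefer Python over curl, a minimal sketch using the requests library is below. The endpoint URL is the one shown above; the response field names and the Bearer-token header used for the optional key are assumptions, since the listing does not document the schema or auth mechanism.

```python
# Minimal sketch: fetch the same listing data from the API in Python.
# The URL comes from the curl example above; the field names printed at the
# bottom and the Authorization header format are assumptions, not a documented schema.
import requests

API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/sierra-research/tau2-bench"

def fetch_tool_stats(api_key: str | None = None) -> dict:
    """Return the JSON payload for the tau2-bench listing.

    Without a key you get the anonymous quota (100 requests/day);
    pass a key to use the higher 1,000/day limit.
    """
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    resp = requests.get(API_URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_tool_stats()
    # Field names below are illustrative; inspect `data` for the real keys.
    print(data.get("stars"), data.get("forks"), data.get("commits_30d"))
```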
Related tools
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems
swefficiency/swefficiency
Benchmark harness and code for "SWE-fficiency: Can Language Models Optimize Real World...