FLAIROx/ah2ac2

Ad-Hoc Human-AI Coordination Challenge (AH2AC2)

Score: 20 / 100 · Experimental

This challenge helps AI researchers develop and benchmark AI agents that coordinate effectively with human partners they have little or no prior experience with. You submit your agent's code, which is then evaluated against a range of human-like partners to measure its coordination ability. It targets AI researchers and machine learning engineers working on human-AI collaboration and multi-agent systems.

No commits in the last 6 months.

Use this if you are an AI researcher developing agents that need to collaborate seamlessly with unfamiliar human partners in real-time scenarios.

Not ideal if you are looking for a pre-built, production-ready AI agent for immediate deployment in a specific application.

Human-AI Collaboration · Multi-Agent Systems · AI Benchmarking · Reinforcement Learning Research · AI Agent Development
No License · Stale 6m · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 7 / 25
Community 6 / 25

How are scores calculated?

Stars: 12
Forks: 1
Language: Python
License: None
Last pushed: Jul 08, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/FLAIROx/ah2ac2"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.