jonathanmli/Avalon-LLM

This repository contains an LLM benchmark for the social deduction game 'The Resistance: Avalon'.

Quality score: 33 / 100 (Emerging)

This project helps researchers and developers evaluate how well large language models (LLMs) play complex social deduction games like 'The Resistance: Avalon'. You supply the LLMs you want to test and the game configurations; the system simulates games and reports how effectively the LLMs deduce, collaborate, and deceive. It is aimed at AI researchers and developers focused on multi-agent systems and LLM strategic reasoning.

141 stars. No commits in the last 6 months.

Use this if you need to benchmark the strategic capabilities, deductive reasoning, and social interaction skills of LLMs in a multi-agent, game-theoretic environment.

Not ideal if you're looking for a tool to play 'The Resistance: Avalon' for entertainment or to analyze human gameplay strategies.

AI-evaluation multi-agent-systems game-AI LLM-benchmarking strategic-reasoning
No License · Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 13 / 25


Stars: 141
Forks: 14
Language: Python
License: None
Last pushed: May 30, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jonathanmli/Avalon-LLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
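For programmatic use, the curl call above can be wrapped in a small Python helper. This is a minimal sketch: the endpoint URL comes from the listing above, but the JSON response schema is an assumption, so the helper only builds the URL and decodes whatever JSON the API returns.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    Hypothetical wrapper: the response fields are not documented here,
    so the decoded dict is returned as-is. Without an API key the
    service allows 100 requests/day.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


print(quality_url("jonathanmli", "Avalon-LLM"))
```

Calling `fetch_quality("jonathanmli", "Avalon-LLM")` would issue the same request as the curl command above.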