jonathanmli/Avalon-LLM
This repository contains an LLM benchmark for the social deduction game 'The Resistance: Avalon'.
This project evaluates how well large language models (LLMs) play complex social deduction games like 'The Resistance: Avalon'. You provide the LLMs you want to test and the game configurations; the system simulates games and reports how effectively each model deduces, collaborates, and deceives. It is aimed at AI researchers and developers working on multi-agent systems and LLM strategic reasoning.
141 stars. No commits in the last 6 months.
Use this if you need to benchmark the strategic capabilities, deductive reasoning, and social interaction skills of LLMs in a multi-agent, game-theoretic environment.
Not ideal if you're looking for a tool to play 'The Resistance: Avalon' for entertainment or to analyze human gameplay strategies.
Stars: 141
Forks: 14
Language: Python
License: —
Category: LLM Tools
Last pushed: May 30, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jonathanmli/Avalon-LLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
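For programmatic access, a minimal Python sketch along these lines should work. The response schema is not documented on this page, so the payload is printed as-is rather than assuming particular field names:

import requests

# Fetch quality data for the repo from the public endpoint
# (no key needed at up to 100 requests/day, per the note above).
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jonathanmli/Avalon-LLM"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors
print(resp.json())       # schema undocumented here, so just inspect the JSON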
Higher-rated alternatives
open-compass/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral,...
IBM/unitxt
🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the...
lean-dojo/LeanDojo
Tool for data extraction and interacting with Lean programmatically.
GoodStartLabs/AI_Diplomacy
Frontier Models playing the board game Diplomacy.
google/litmus
Litmus is a comprehensive LLM testing and evaluation tool designed for GenAI Application...