Avalon-Benchmark/avalon

A 3D video game environment and benchmark designed from scratch for reinforcement learning research

Score: 40 / 100 (Emerging)

Avalon is a 3D video game environment designed to help AI researchers test and develop reinforcement learning (RL) agents. It provides a consistent setup where RL agents, like virtual robots or characters, learn to solve tasks such as navigating, hunting, or gathering within procedurally generated virtual worlds. Researchers input their RL algorithms and receive performance metrics, observations, and agent actions, allowing them to assess how well their agents generalize learned skills.

190 stars. No commits in the last 6 months.

Use this if you are an AI researcher developing or evaluating reinforcement learning algorithms and need a challenging, consistent 3D environment to test agent generalization across diverse tasks.

Not ideal if you are looking for a simple, pre-trained RL agent for a specific task rather than a platform for fundamental RL research and benchmarking.

AI-research Reinforcement-Learning Generalization-Benchmarking Agent-Training Procedural-Content-Generation
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 190
Forks: 18
Language: Jupyter Notebook
License: GPL-3.0
Last pushed: May 03, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Avalon-Benchmark/avalon"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
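The same request can be made from Python. A minimal sketch, assuming only the endpoint URL shown above; the response schema is not documented here, so the JSON is parsed generically, and the `quality_url` helper is illustrative, not part of any official client:

```python
import json
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-API URL for a repo given as 'owner/name'."""
    # Keep the '/' between owner and repo name unescaped.
    return f"{BASE}/{quote(category)}/{quote(repo, safe='/')}"

url = quality_url("ml-frameworks", "Avalon-Benchmark/avalon")

# To actually fetch the data (uncomment to make a live request):
# from urllib.request import urlopen
# data = json.loads(urlopen(url).read())
print(url)
```

With an API key, the key would presumably be attached to the request (the exact header or query parameter name is not stated on this page, so check the API docs before hard-coding one).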