FutureAGI/Xenoverse

Benchmarking general decision-making with open & random worlds

Score: 57 / 100 · Established

This tool generates highly diverse, procedurally randomized virtual environments for training and testing artificial intelligence systems. Instead of using a fixed set of challenges, it creates an unlimited variety of worlds, tasks, and scenarios. This helps AI researchers and developers assess how well their models generalize and adapt to entirely new situations, rather than merely memorizing solutions to familiar problems.

Available on PyPI.

Use this if you are developing or evaluating AI models and need to ensure they can genuinely adapt to novel, unexpected situations rather than just performing well on a predefined set of tests.

Not ideal if you need a benchmark with a consistent, fixed set of environments for direct comparison with existing, standardized results.

artificial-general-intelligence machine-learning-benchmarking reinforcement-learning-research AI-model-evaluation procedural-content-generation
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 16 / 25


Stars: 19
Forks: 7
Language: Python
License: Apache-2.0
Last pushed: Feb 25, 2026
Commits (30d): 0
Dependencies: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/FutureAGI/Xenoverse"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
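For programmatic use, the curl call above can be wrapped in a small Python helper. This is a minimal sketch using only the standard library; the JSON field names in the API response are not documented here, so only the URL construction and raw fetch are shown, with no assumptions about the payload schema.

```python
# Minimal sketch for querying the quality API shown above.
# The response schema is undocumented here, so we return the raw
# decoded JSON rather than picking out specific (assumed) fields.
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON score card (performs a network call)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the URL only; call fetch_quality() to hit the API.
    print(quality_url("FutureAGI", "Xenoverse"))
```

Without an API key this shares the 100-requests/day anonymous quota, so cache responses rather than polling.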