KaleabTessera/HyperMARL
Adaptive Hypernetworks for Multi-Agent RL. NeurIPS 2025.
This project helps machine learning researchers and practitioners design and train multi-agent reinforcement learning (MARL) systems in which multiple agents learn to cooperate or compete. It plugs into standard MARL training setups and uses adaptive hypernetworks to generate per-agent policy parameters, making learning more efficient and flexible and letting agents develop diverse or homogeneous behaviors as needed. It is aimed at professionals working on advanced AI for simulations, robotics, or complex system control.
Use this if you are developing multi-agent AI systems and need a method that improves training efficiency and prevents agents from collapsing to suboptimal, uniform behaviors, while keeping agent interactions flexible.
Not ideal if you are working on single-agent reinforcement learning, or if you need to fix a specific level of behavioral diversity rather than let it adapt dynamically.
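The core idea named in the title, a hypernetwork that generates each agent's policy parameters from a per-agent embedding, can be sketched as below. This is a minimal illustration of the general hypernetwork technique, not the HyperMARL implementation: all dimensions, the linear policy, and the two-layer generator are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the HyperMARL paper.
obs_dim, act_dim, embed_dim, hidden = 4, 2, 8, 16
n_agents = 3

# One learnable embedding per agent: the hypernetwork's input.
agent_embeddings = rng.normal(size=(n_agents, embed_dim))

# Shared hypernetwork: a small MLP mapping an agent embedding to the
# flat parameters of that agent's linear policy (W: act_dim x obs_dim, b: act_dim).
n_policy_params = act_dim * obs_dim + act_dim
H1 = rng.normal(scale=0.1, size=(embed_dim, hidden))
H2 = rng.normal(scale=0.1, size=(hidden, n_policy_params))

def generate_policy(embedding):
    """Return (W, b) for one agent, produced by the shared hypernetwork."""
    h = np.tanh(embedding @ H1)
    flat = h @ H2
    W = flat[: act_dim * obs_dim].reshape(act_dim, obs_dim)
    b = flat[act_dim * obs_dim :]
    return W, b

def act(embedding, obs):
    """Greedy action of the agent whose policy is generated from `embedding`."""
    W, b = generate_policy(embedding)
    return int(np.argmax(W @ obs + b))

obs = rng.normal(size=obs_dim)
actions = [act(e, obs) for e in agent_embeddings]
# Distinct embeddings can yield distinct behaviors (diversity), while
# identical embeddings yield identical behaviors (homogeneity) -- only the
# embeddings, not separate networks, distinguish the agents.
```

Because only the shared generator and the small embeddings are learned, the parameter count stays nearly constant in the number of agents, and the degree of behavioral diversity emerges from how far apart the embeddings drift during training.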
Stars
14
Forks
2
Language
Python
License
Apache-2.0
Category
Last pushed
Jan 23, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/KaleabTessera/HyperMARL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
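For use beyond the shell, the same endpoint can be called from Python with only the standard library. The URL path format is taken verbatim from the curl example above; the response schema is not documented here, so the sketch just builds the URL and shows where a live fetch would go.

```python
import json
import urllib.request
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(repo: str) -> str:
    """Build the per-repository API URL (path format from the curl example)."""
    return f"{BASE}/{quote(repo, safe='/')}"

url = quality_url("KaleabTessera/HyperMARL")
print(url)

# Uncomment to fetch live data (100 requests/day without a key):
# with urllib.request.urlopen(url, timeout=10) as resp:
#     data = json.load(resp)          # schema not documented on this page
#     print(json.dumps(data, indent=2))
```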
Higher-rated alternatives
Toni-SM/skrl
Modular Reinforcement Learning (RL) library (implemented in PyTorch, JAX, and NVIDIA Warp) with...
facebookresearch/BenchMARL
BenchMARL is a library for benchmarking Multi-Agent Reinforcement Learning (MARL). BenchMARL...
utiasDSL/gym-pybullet-drones
PyBullet Gymnasium environments for single and multi-agent reinforcement learning of quadcopter control
datamllab/rlcard
Reinforcement Learning / AI Bots in Card (Poker) Games - Blackjack, Leduc, Texas, DouDizhu, Mahjong, UNO.
proroklab/VectorizedMultiAgentSimulator
VMAS is a vectorized differentiable simulator designed for efficient Multi-Agent Reinforcement...