JaydenTeoh/MORL-Generalization

Benchmark for evaluating the generalization capabilities of Multi-Objective Reinforcement Learning (MORL) algorithms.

Quality score: 27 / 100 (Experimental)

This project helps researchers and developers assess how well Multi-Objective Reinforcement Learning (MORL) algorithms can adapt and perform in new, unseen environments. It takes in various MORL algorithms and predefined multi-objective environments, then outputs performance metrics and evaluation data. The primary users are researchers and practitioners working on advanced AI and reinforcement learning applications who need to validate the robustness of their algorithms.

No commits in the last 6 months.

Use this if you are developing or studying multi-objective reinforcement learning algorithms and need a standardized way to test their generalization capabilities across different scenarios.

Not ideal if you are looking for a plug-and-play MORL solution for a specific real-world problem, as this is primarily an evaluation benchmark.

Topics: Reinforcement Learning Research · Multi-Objective Optimization · Algorithm Evaluation · AI Generalization · Machine Learning Benchmarking
Flags: No License · Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 10 / 25


Stars: 26
Forks: 3
Language: Python
License: none
Last pushed: Jun 06, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JaydenTeoh/MORL-Generalization"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
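The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library; the URL comes from the curl command above, but the JSON schema of the response is an assumption, so inspect the raw payload before relying on specific fields:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality-report URL for a repository in a given category."""
    return f"{API_BASE}/{category}/{repo}"


def fetch_report(category: str, repo: str) -> dict:
    """Fetch the JSON quality report for a repository.

    The shape of the returned dict is not documented here; print it
    once to see which keys (scores, stats, etc.) are available.
    """
    with urllib.request.urlopen(quality_url(category, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Construct the URL for this repository's report.
    print(quality_url("ml-frameworks", "JaydenTeoh/MORL-Generalization"))
```

Without an API key this shares the 100 requests/day public quota, so cache responses locally if you poll more than a handful of repositories.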