RLE-Foundation/RLeXplore

RLeXplore provides stable baselines of exploration methods in reinforcement learning, such as the intrinsic curiosity module (ICM), random network distillation (RND), and rewarding impact-driven exploration (RIDE).

Score: 39/100 (Emerging)

This toolkit helps reinforcement learning (RL) researchers accelerate their work by providing a standardized way to implement and compare intrinsic reward algorithms. It takes various intrinsic reward module configurations as input and produces benchmarked results and well-structured code for easy integration into RL projects. It is designed specifically for RL researchers and practitioners focusing on exploration in complex environments.
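To illustrate the kind of intrinsic-reward computation the toolkit standardizes, here is a minimal sketch of random network distillation (RND), one of the methods listed above. This is an illustrative toy in plain NumPy with linear networks, not RLeXplore's actual API: the prediction error of a trained network against a fixed random target serves as a novelty bonus.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, feat_dim = 8, 16

# Fixed, randomly initialized target network (never trained).
W_target = rng.standard_normal((obs_dim, feat_dim))

# Predictor network, trained to match the target's output.
W_pred = np.zeros((obs_dim, feat_dim))

def intrinsic_reward(obs):
    # Prediction error is the novelty bonus: large for observations
    # the predictor has not yet fit, small for familiar ones.
    err = obs @ W_pred - obs @ W_target
    return np.mean(err ** 2, axis=-1)

def train_predictor(obs, lr=0.05, steps=200):
    # Plain gradient descent on the MSE between predictor and target.
    global W_pred
    for _ in range(steps):
        err = obs @ W_pred - obs @ W_target   # (batch, feat_dim)
        grad = obs.T @ err / len(obs)         # MSE gradient w.r.t. W_pred
        W_pred -= lr * grad

batch = rng.standard_normal((64, obs_dim))
before = intrinsic_reward(batch).mean()
train_predictor(batch)
after = intrinsic_reward(batch).mean()
# The bonus for already-seen states shrinks as the predictor trains,
# steering the agent toward novel states where prediction still fails.
```

The value of a framework like RLeXplore is that details glossed over here (reward normalization, network architecture, update schedules) are implemented consistently across methods, so comparisons between ICM, RND, and RIDE are apples-to-apples.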

459 stars. No commits in the last 6 months.

Use this if you are an RL researcher developing or evaluating new intrinsically-motivated exploration methods and need a consistent, reliable framework for implementation and comparison.

Not ideal if you are looking for a general-purpose reinforcement learning library without a specific focus on intrinsic exploration rewards or a beginner seeking a high-level API for basic RL tasks.

reinforcement-learning deep-learning-research exploration-algorithms ai-development autonomous-systems
Stale (6 months) · No package · No dependents
Maintenance: 0/25
Adoption: 10/25
Maturity: 16/25
Community: 13/25


Stars: 459
Forks: 23
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RLE-Foundation/RLeXplore"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.