stable-baselines3 and stable-baselines3-contrib

The contrib package extends the main library with experimental RL algorithms and features; the two are complements designed to be used together rather than alternatives.

                   stable-baselines3    stable-baselines3-contrib
Overall score      76 (Verified)        64 (Established)
Maintenance        13/25                13/25
Adoption           15/25                10/25
Maturity           25/25                16/25
Community          23/25                25/25
Stars              12,878               693
Forks              2,081                232
Downloads
Commits (30d)      2                    5
Language           Python               Python
License            MIT                  MIT
Risk flags         none                 No Package, No Dependents

About stable-baselines3

DLR-RM/stable-baselines3

PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.

This is a tool for machine learning researchers and practitioners working with Reinforcement Learning (RL). It provides reliable, tested implementations of various RL algorithms. You input a defined environment and an RL algorithm, and it outputs a trained agent that can learn to make decisions within that environment.

Reinforcement Learning · AI Research · Algorithm Prototyping · Decision Making Systems · Agent Training

About stable-baselines3-contrib

Stable-Baselines-Team/stable-baselines3-contrib

Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code

This project provides experimental reinforcement learning (RL) algorithms and tools for tasks like training agents to play games, control robots, or optimize complex systems. It takes in environment observations and outputs optimized decision-making policies. This is for machine learning researchers and practitioners who want to explore cutting-edge RL techniques.

reinforcement-learning experimental-ml agent-training algorithm-research ml-prototyping

Scores updated daily from GitHub, PyPI, and npm data.