PKU-Alignment/omnisafe
JMLR: OmniSafe is an infrastructural framework for accelerating SafeRL research.
OmniSafe is an infrastructure designed to help researchers accelerate their work in safe reinforcement learning (SafeRL). It provides a comprehensive toolkit and benchmark for developing and testing algorithms that constrain risky and unsafe behavior during training, producing robust, safety-aware agents and policies. It is intended for AI/ML researchers and engineers focused on safety-critical applications.
1,077 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are an AI researcher or developer working on reinforcement learning and need to ensure the safety and reliability of your trained agents.
Not ideal if you are a beginner looking for a simple, out-of-the-box solution for general reinforcement learning tasks without a strong focus on safety constraints.
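To make the workflow above concrete, here is a minimal training sketch based on the Agent entry point shown in the OmniSafe README; the algorithm name, environment id, and config keys are illustrative, and a full run requires the `omnisafe` PyPI package plus its simulation dependencies.

```python
# Minimal SafeRL training sketch with OmniSafe (assumes `pip install omnisafe`).
import omnisafe

# A Safety-Gymnasium task id; swap in any environment OmniSafe supports.
env_id = 'SafetyPointGoal1-v0'

# Keep the run short for a smoke test; these keys follow OmniSafe's
# nested config convention and are illustrative, not exhaustive.
custom_cfgs = {
    'train_cfgs': {'total_steps': 2048},
}

# 'PPOLag' is PPO with a Lagrangian safety constraint, one of the
# constrained algorithms OmniSafe ships.
agent = omnisafe.Agent('PPOLag', env_id, custom_cfgs=custom_cfgs)
agent.learn()
```

The same `Agent` wrapper also exposes evaluation and rendering of the trained policy; see the project documentation for the full config schema.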
Stars: 1,077
Forks: 149
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 17, 2025
Commits (30d): 0
Dependencies: 13
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/PKU-Alignment/omnisafe"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
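The response can be consumed with only the Python standard library. Note the field names below (`stars`, `forks`, `license`) are assumed from the stats shown on this page; the actual response schema may differ.

```python
import json
from urllib.request import urlopen

# Endpoint from the curl example above.
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/PKU-Alignment/omnisafe")

def summarize(payload: str) -> str:
    """Render a one-line summary from the API's JSON payload."""
    data = json.loads(payload)
    return f"{data['stars']} stars, {data['forks']} forks, {data['license']}"

# Canned payload mirroring the stats on this page (hypothetical schema):
sample = '{"stars": 1077, "forks": 149, "license": "Apache-2.0"}'
print(summarize(sample))  # 1077 stars, 149 forks, Apache-2.0

# Live call, within the 100 requests/day no-key tier:
# with urlopen(API_URL) as resp:
#     print(summarize(resp.read()))
```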
Related frameworks
DLR-RM/stable-baselines3
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
google-deepmind/dm_control
Google DeepMind's software stack for physics-based simulation and Reinforcement Learning...
Denys88/rl_games
RL implementations
pytorch/rl
A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
yandexdataschool/Practical_RL
A course in reinforcement learning in the wild