PKU-Alignment/omnisafe

JMLR: OmniSafe is an infrastructural framework for accelerating SafeRL research.

Quality score: 57 / 100 (Established)

OmniSafe is an infrastructure designed to help researchers accelerate their work in safe reinforcement learning (SafeRL). It provides a comprehensive toolkit and benchmark suite for developing and testing algorithms that minimize risks and unsafe behaviors in AI systems, with the goal of producing robust, safety-aware models and policies. It is aimed at AI/ML researchers and engineers working on safety-critical applications.

1,077 stars. No commits in the last 6 months. Available on PyPI.

Use this if you are an AI researcher or developer working on reinforcement learning and need to ensure the safety and reliability of your trained agents.

Not ideal if you are a beginner looking for a simple, out-of-the-box solution for general reinforcement learning tasks without a strong focus on safety constraints.

safe-ai reinforcement-learning-research ai-safety machine-learning-engineering robotics-control
Stale (6 months)
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 22 / 25


Stars: 1,077
Forks: 149
Language: Python
License: Apache-2.0
Last pushed: Mar 17, 2025
Commits (30d): 0
Dependencies: 13

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/PKU-Alignment/omnisafe"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
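For scripted access, the same endpoint can be queried from Python. A minimal sketch using only the standard library; the URL scheme follows the curl example above, but the response fields (e.g. a `score` key) are assumptions, so inspect the actual JSON payload before relying on them:

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (keyless tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    url = quality_url("ml-frameworks", "PKU-Alignment", "omnisafe")
    print(url)
    # Uncomment to perform the actual request (requires network access):
    # report = fetch_quality("ml-frameworks", "PKU-Alignment", "omnisafe")
```

Passing a key (for the 1,000/day tier) would presumably go in a header or query parameter; the API documentation should say which.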