AlgTUDelft/AlwaysSafe

Code for the paper "AlwaysSafe: Reinforcement Learning Without Safety Constraint Violations During Training"

Quality score: 37 / 100 (Emerging)

This code helps reinforcement learning researchers and practitioners develop and test agents that learn a task without ever violating critical safety rules during training. Given a defined environment and a set of safety constraints, it produces a trained agent that adheres to those constraints throughout the learning process. It is aimed at researchers and developers building safe, reliable autonomous systems.
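The "safety constraints" here can be read in the constrained-MDP (CMDP) formalism that this line of work builds on (an assumption based on the paper's framing; symbols below are the standard CMDP notation, not taken from the repository):

$$\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t)\right] \quad \text{s.t.} \quad \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} C(s_t, a_t)\right] \le \hat{c}$$

where $R$ is the reward, $C$ the cost signal encoding the safety rule, and $\hat{c}$ the cost budget. The paper's distinguishing requirement is that the cost constraint holds throughout training, not only for the final converged policy.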

No commits in the last 6 months.

Use this if you are developing reinforcement learning agents for safety-critical applications and need to ensure that no safety violations occur during the agent's training phase.

Not ideal if you are looking for a general-purpose reinforcement learning framework where safety during training is not a primary concern, or if you require an off-the-shelf solution without custom environment integration.

Topics: safe-reinforcement-learning, autonomous-systems, AI-safety, robotics-control, optimal-control
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 17
Forks: 4
Language: Python
License: MIT
Last pushed: May 09, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AlgTUDelft/AlwaysSafe"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
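The endpoint path follows a `category/owner/repo` pattern. A minimal Python sketch of building and fetching that URL (only the base URL and this one path are taken from the card; the response schema is not documented here, so the live fetch is left commented out):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


url = quality_url("ml-frameworks", "AlgTUDelft", "AlwaysSafe")
print(url)

# Uncomment to fetch the live score; the JSON fields returned are
# not specified on this page, so the raw response is just printed:
# with urllib.request.urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```

Swap `quality_url`'s arguments to query a different repository within the same rate limits noted above.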