CodeMaxx/Safe-RL-Pacman

Reinforcement Learning Course Project - IIT Bombay Fall 2018

Quality score: 27 / 100 (Experimental)

This project helps reinforcement-learning researchers and developers build AI agents that prioritize safety during training and deployment. It takes an existing reinforcement learning environment and a safety rule (such as "Pacman must not enter a dead state") and produces a shielded version of that environment. Researchers and AI engineers can use this to ensure agents avoid undesirable outcomes without compromising learning performance.
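The shielding idea described above can be sketched as follows. Everything here is an illustrative assumption, not this repository's actual API: the class names, the toy Corridor environment, and the is_unsafe predicate are all hypothetical.

```python
# Hypothetical sketch of action shielding; the names and the toy
# environment are illustrative assumptions, not this repository's API.

class Corridor:
    """1-D toy world over positions 0..4; position 4 is a 'dead state'."""

    def actions(self, state):
        return [-1, +1]  # move left or move right

    def step(self, state, action):
        return max(0, min(4, state + action))


class ShieldedEnv:
    """Wrap an environment so unsafe actions are replaced before execution."""

    def __init__(self, env, is_unsafe):
        self.env = env
        self.is_unsafe = is_unsafe  # safety rule: predicate over (state, action)

    def step(self, state, action):
        # Shield: if the proposed action violates the safety rule,
        # substitute any action that does not.
        if self.is_unsafe(state, action):
            safe = [a for a in self.env.actions(state)
                    if not self.is_unsafe(state, a)]
            if safe:
                action = safe[0]
        return self.env.step(state, action)


# Safety rule: never step onto the dead state at position 4.
env = ShieldedEnv(Corridor(), is_unsafe=lambda s, a: s + a == 4)
print(env.step(3, +1))  # shield replaces +1 with -1, so the agent ends at 2
```

Because the shield intervenes before the environment transition, unsafe states are never visited, during learning or deployment, regardless of what action the policy proposes.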

No commits in the last 6 months.

Use this if you are developing reinforcement learning agents and need to guarantee they adhere to critical safety constraints from the start, preventing unsafe actions during both learning and deployment.

Not ideal if your only goal is to maximize an agent's performance score with no safety requirements, or if you are working in environments where safety is not a concern.

reinforcement-learning AI-safety autonomous-systems agent-development robotics-control
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 14 / 25


Stars: 9
Forks: 3
Language: Python
License: none
Last pushed: Nov 25, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/CodeMaxx/Safe-RL-Pacman"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
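The same endpoint can be called from Python. A minimal sketch that just assembles the documented URL; the helper name is an assumption, while the base URL and path segments are taken from the curl example above:

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-API URL shown in the curl example (hypothetical helper)."""
    # quote() percent-encodes any characters that are unsafe in a URL path segment.
    return "/".join([BASE, quote(category), quote(owner), quote(repo)])

print(quality_url("ml-frameworks", "CodeMaxx", "Safe-RL-Pacman"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/CodeMaxx/Safe-RL-Pacman
```

The resulting URL can then be fetched with any HTTP client, for example `urllib.request.urlopen` from the standard library.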