CodeMaxx/Safe-RL-Pacman
Reinforcement Learning Course Project - IIT Bombay Fall 2018
This project helps reinforcement learning researchers and developers build AI agents that prioritize safety during training and deployment. It takes an existing reinforcement learning environment and a safety rule (such as "Pacman must not enter a 'dead state'") and produces a shielded version of that environment. Researchers and AI engineers can use it to ensure agents avoid undesirable outcomes without compromising learning performance.
No commits in the last 6 months.
Use this if you are developing reinforcement learning agents and need to guarantee that they adhere to critical safety constraints from the start, preventing unsafe actions during both learning and deployment.
Not ideal if your goal is purely to maximize an agent's performance score, or if you are working in environments where safety is simply not a concern.
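The shielding idea described above can be sketched in a few lines: wrap the environment so that any action the shield flags as unsafe is replaced with a safe fallback before it executes. This is a minimal illustration, not the repo's actual implementation; `ToyEnv`, `is_unsafe`, and `fallback` are hypothetical names introduced here for the example.

```python
# Minimal sketch of a shielded environment (assumption: a Gym-style env
# exposing reset()/step(); `is_unsafe` and `fallback` are hypothetical
# callables, not names from this repository).
class ToyEnv:
    """1-D corridor: positions 0..4, where position 4 is an unsafe 'dead state'."""

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action is -1 or +1
        self.pos = max(0, min(4, self.pos + action))
        done = self.pos == 4
        reward = -10.0 if done else 1.0
        return self.pos, reward, done, {}


class ShieldedEnv:
    """Wraps an environment and overrides actions the shield deems unsafe."""

    def __init__(self, env, is_unsafe, fallback):
        self.env = env
        self.is_unsafe = is_unsafe  # (state, action) -> bool
        self.fallback = fallback    # state -> safe replacement action
        self.state = None

    def reset(self):
        self.state = self.env.reset()
        return self.state

    def step(self, action):
        # Substitute a safe action before the env executes an unsafe one.
        if self.is_unsafe(self.state, action):
            action = self.fallback(self.state)
        self.state, reward, done, info = self.env.step(action)
        return self.state, reward, done, info
```

Here the shield checks a one-step safety predicate; a real shield (as in the project's Pacman setting) would typically precompute which states and actions can lead to a dead state.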
Stars
9
Forks
3
Language
Python
License
—
Category
—
Last pushed
Nov 25, 2018
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/CodeMaxx/Safe-RL-Pacman"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
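The same endpoint can be queried from Python instead of curl. A minimal sketch using only the standard library; the response's JSON field names are not documented on this page, so the payload is returned as-is after decoding, and the `opener` parameter is an assumption added here so the call can be exercised without network access.

```python
import json
import urllib.request

# Endpoint copied from the curl example above.
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/CodeMaxx/Safe-RL-Pacman")


def fetch_quality(url=API_URL, opener=urllib.request.urlopen):
    """Fetch the repo-quality record as parsed JSON.

    `opener` is injectable (defaults to urllib.request.urlopen) so the
    function can be tested against a canned response.
    """
    with opener(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Without a key this is subject to the 100 requests/day limit noted above.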
Higher-rated alternatives
PetropoulakisPanagiotis/pacman-projects
Solutions to Pacman Projects 1 and 2 from the Berkeley AI course
zhiming-xu/CS188
Introduction to AI course assignment at Berkeley in spring 2019
abhinavcreed13/ai-capture-the-flag-pacman-contest
The course contest involves a multi-player capture-the-flag variant of Pacman, where agents...
aguunu/fishing-jigsaw
Compute optimal actions for a specific state of the Metin2 fishing jigsaw making use of...
iamjagdeesh/Artificial-Intelligence-Pac-Man
CSE 571 Artificial Intelligence