aerosta/rewardhackwatch

Runtime detector for reward hacking and misalignment in LLM agents (89.7% F1 on 5,391 trajectories).

27
/ 100
Experimental

This helps AI developers and researchers monitor their large language model (LLM) agents for unwanted behaviors. It takes an agent's reasoning traces and code outputs as input to identify instances where the agent tries to game its evaluation or exhibit misaligned behavior. The output is a risk assessment, score, and specific detections of these problematic actions.

Use this if you are developing or managing LLM agents and need to automatically detect when they attempt to cheat or manipulate their performance metrics.

Not ideal if you are looking for a general-purpose AI safety tool unrelated to LLM agent reward hacking or misalignment.

LLM agent development AI safety model evaluation agent alignment AI ethics
No Package No Dependents
Maintenance 10 / 25
Adoption 4 / 25
Maturity 13 / 25
Community 0 / 25

How are scores calculated?

Stars

7

Forks

Language

Python

License

Apache-2.0

Last pushed

Mar 10, 2026

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/aerosta/rewardhackwatch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.