aerosta/rewardhackwatch
Runtime detector for reward hacking and misalignment in LLM agents (89.7% F1 on 5,391 trajectories).
This helps AI developers and researchers monitor their large language model (LLM) agents for unwanted behaviors. It takes an agent's reasoning traces and code outputs as input and flags instances where the agent tries to game its evaluation or exhibits misaligned behavior. The output is a risk assessment, a score, and the specific detections that triggered it.
Use this if you are developing or managing LLM agents and need to automatically detect when they attempt to cheat or manipulate their performance metrics.
Not ideal if you are looking for a general-purpose AI safety tool unrelated to LLM agent reward hacking or misalignment.
Stars: 7
Forks: —
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/aerosta/rewardhackwatch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
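For programmatic use, the curl call above can be wrapped in a few lines of Python. This is a minimal sketch using only the standard library; the structure of the JSON response is not documented here, so the code simply decodes and returns the raw record, and the helper names are illustrative.

```python
# Minimal sketch of calling the quality API shown above.
# The endpoint URL comes from the curl example; response fields are undocumented,
# so we return the decoded JSON as-is rather than assume a schema.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("aerosta", "rewardhackwatch"))
```

With the free tier's 100 requests/day limit, cache responses locally rather than re-fetching on every check.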
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.