uw-nsl/TinyV
Your efficient and accurate answer verification system for RL training.
This project improves reinforcement learning (RL) training of Large Language Models (LLMs) for complex reasoning tasks. During training, it takes the verdicts of a rule-based answer verifier and corrects them, specifically recovering correct answers that the rules wrongly marked as incorrect (false negatives). The result is a more accurate reward signal, so the LLM learns more efficiently and performs better, which makes the tool valuable for AI researchers and engineers building sophisticated LLM applications.
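The idea can be illustrated with a minimal sketch. This is not TinyV's actual API; the function names and the toy numeric-equivalence check are assumptions, standing in for the rule-based verifier and the model-based fallback described above.

```python
# Hedged sketch of the fallback-verification idea (illustrative names,
# not the project's real API): a cheap rule-based check runs first, and
# a model-based verifier re-checks only the answers the rules rejected,
# recovering false negatives such as "1/2" vs. "0.5".
from fractions import Fraction


def rule_based_verify(answer: str, gold: str) -> bool:
    # Strict string match: fast but brittle; equivalent forms fail.
    return answer.strip() == gold.strip()


def model_verify(answer: str, gold: str) -> bool:
    # Stand-in for an LLM judge; here a toy numeric-equivalence check.
    try:
        return abs(float(Fraction(answer)) - float(Fraction(gold))) < 1e-9
    except (ValueError, ZeroDivisionError):
        return False


def reward(answer: str, gold: str) -> float:
    if rule_based_verify(answer, gold):
        return 1.0
    # Only escalate rejected answers to the expensive verifier.
    return 1.0 if model_verify(answer, gold) else 0.0
```

With this split, the expensive verifier is invoked only for the fraction of rollouts the rules reject, keeping per-step verification cost low.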
No commits in the last 6 months.
Use this if you are an AI researcher or engineer training LLMs with reinforcement learning and need a more accurate reward signal to avoid misclassifying correct answers as wrong.
Not ideal if you are not working with LLMs or do not use reinforcement learning for model training.
Stars: 41
Forks: 2
Language: Python
License: MIT
Category: llm-tools
Last pushed: Jun 23, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/uw-nsl/TinyV"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
langfengQ/verl-agent
verl-agent is an extension of veRL, designed for training LLM/VLM agents via RL. verl-agent is...
sotopia-lab/sotopia
Sotopia: an Open-ended Social Learning Environment (ICLR 2024 spotlight)
zhudotexe/redel
ReDel is a toolkit for researchers and developers to build, iterate on, and analyze recursive...
TIGER-AI-Lab/verl-tool
A version of verl to support diverse tool use
AMAP-ML/Tree-GRPO
[ICLR 2026] Tree Search for LLM Agent Reinforcement Learning