uw-nsl/TinyV

Your efficient and accurate answer verification system for RL training.

Quality score: 29 / 100 (Experimental)

This project improves the training of Large Language Models (LLMs) for complex reasoning tasks. During reinforcement learning (RL) training, it takes the output of a rule-based answer verification system and provides more accurate feedback, specifically by catching correct answers that were wrongly marked as incorrect (false negatives). The result is an LLM that learns more efficiently and performs better, making it valuable for AI researchers and engineers building sophisticated LLM applications.
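The description above implies a two-stage verification flow. A minimal sketch, assuming a simple fallback design (the function names and the toy secondary verifier here are hypothetical, not TinyV's actual code): a cheap rule-based check runs first, and a model-based verifier is consulted only when the rules say "wrong", rescuing false negatives at extra cost only where it matters.

```python
def rule_based_verify(prediction: str, reference: str) -> bool:
    # Strict string match: fast but prone to false negatives
    # (e.g. "0.5" vs "1/2" would be rejected despite being equivalent).
    return prediction.strip() == reference.strip()

def reward(prediction: str, reference: str, secondary_verify) -> float:
    """Return the RL reward: 1.0 for a verified-correct answer, else 0.0.

    `secondary_verify` stands in for a model-based checker (e.g. a small
    verifier LLM) that is only called when the rule-based check fails,
    so the extra cost is paid only on potential false negatives.
    """
    if rule_based_verify(prediction, reference):
        return 1.0
    if secondary_verify(prediction, reference):
        return 1.0  # rescued false negative
    return 0.0

# Toy secondary verifier that knows "0.5" and "1/2" are equivalent:
equiv = lambda p, r: {p, r} == {"0.5", "1/2"}
print(reward("1/2", "1/2", equiv))  # exact match -> 1.0
print(reward("0.5", "1/2", equiv))  # rescued by secondary verifier -> 1.0
print(reward("7", "1/2", equiv))    # genuinely wrong -> 0.0
```

The key design point is ordering: the expensive verifier never runs on answers the rules already accept, so the reward signal gets more accurate without paying the model-verification cost on every rollout.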

No commits in the last 6 months.

Use this if you are an AI researcher or engineer training LLMs with reinforcement learning and need a more accurate reward signal to avoid misclassifying correct answers as wrong.

Not ideal if you are not working with LLMs or do not use reinforcement learning for model training.

Tags: LLM-training, reinforcement-learning, AI-research, model-verification, language-model-development
Status: Stale (6 months), no package published, no dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 15 / 25
Community 5 / 25


Stars: 41
Forks: 2
Language: Python
License: MIT
Last pushed: Jun 23, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/uw-nsl/TinyV"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
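The same endpoint can be called from Python. A minimal sketch: only the URL shape comes from the curl example above, and the response schema is not documented here, so the result is treated as opaque JSON (the helper names are illustrative, not part of any official client).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    # e.g. collection="llm-tools", owner="uw-nsl", repo="TinyV"
    return f"{BASE}/{collection}/{owner}/{repo}"

def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    # No API key header: the free tier (100 requests/day) needs none.
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("llm-tools", "uw-nsl", "TinyV"))
# https://pt-edge.onrender.com/api/v1/quality/llm-tools/uw-nsl/TinyV
```

Calling `fetch_quality("llm-tools", "uw-nsl", "TinyV")` would perform the same request as the curl command.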