psunlpgroup/FoVer

This repository includes code and materials for the paper "Generalizable Process Reward Models via Formally Verified Training Data".

Score: 22 / 100 (Experimental)

This project helps AI researchers and developers improve the reasoning abilities of large language models (LLMs). It provides a framework that automatically generates high-quality training data for Process Reward Models (PRMs). Given formal logic problems, it produces precise, step-by-step error labels, enabling more efficient and accurate LLM fine-tuning.

No commits in the last 6 months.

Use this if you are developing or fine-tuning LLMs and need a more efficient, less costly way to generate accurate supervision for improving their logical and mathematical reasoning.

Not ideal if you are a general LLM user or a practitioner looking for a ready-to-use application, as this is a developer tool for model training.

Tags: LLM training, AI model development, reasoning AI, natural language processing, computational logic
Status: Stale (6 months), no package, no dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 15 / 25
Community 0 / 25


Stars: 11
Forks:
Language: Python
License:
Last pushed: Sep 30, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/psunlpgroup/FoVer"

Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
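The same endpoint can be queried from Python instead of curl. A minimal sketch, assuming only the URL shown above; the JSON response schema is not documented here, so the result is returned as a raw dict. The helper names (`quality_url`, `fetch_quality`) are illustrative, not part of the API.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For the repository on this page, `fetch_quality("psunlpgroup", "FoVer")` requests the same URL as the curl command.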