thinkwee/NOVER

[EMNLP-2025] R1-Zero on ANY TASK

Score: 24 / 100 (Experimental)

This project helps AI engineers and researchers improve how their language models reason and generate text, particularly for complex tasks beyond math or coding. You provide a standard dataset with prompts and expected answers, and the project trains your language model to produce better-reasoned responses without needing a separate verifier. The end result is a more capable language model that handles a wider variety of reasoning-intensive text-to-text tasks.

Use this if you need to train or fine-tune a language model to exhibit stronger reasoning capabilities across diverse text-based tasks, using only your existing supervised fine-tuning data.

Not ideal if you are looking for a pre-trained model to use directly, rather than a framework for training your own custom language models.

AI model training · natural language processing · language model fine-tuning · reasoning AI · text generation
No License · No Package · No Dependents
Maintenance 6 / 25
Adoption 7 / 25
Maturity 7 / 25
Community 4 / 25


Stars: 28
Forks: 1
Language: Python
License: none
Last pushed: Nov 09, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/thinkwee/NOVER"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
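The same request can be made from Python with only the standard library. This is a minimal sketch: the endpoint URL comes from the curl command above, but the JSON field names (`score`, `stars`, `last_pushed`) are assumptions rather than a documented schema, so the helper reads them defensively with `.get()`.

```python
import json
import urllib.request

# Endpoint copied from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/thinkwee/NOVER"

def fetch_quality(url: str = API_URL, timeout: float = 10.0) -> dict:
    """Fetch the quality JSON for a repo; raises urllib.error.URLError on network failure."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

def summarize(payload: dict) -> dict:
    """Pick out a few likely fields. The key names are guesses, so .get()
    returns None instead of raising if the real schema differs."""
    return {
        "score": payload.get("score"),
        "stars": payload.get("stars"),
        "last_pushed": payload.get("last_pushed"),
    }
```

A typical use would be `summarize(fetch_quality())`; keeping the fetch and the parsing separate makes the parsing testable without a network call.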