Trustworthy-ML-Lab/Training_Trustworthy_LRM_with_Refine

A new training framework for Trustworthy Large Reasoning Models

Score: 28 / 100 (Experimental)

This framework helps AI developers create Large Reasoning Models that are more reliable, faithful, and interpretable. It takes existing language models and training data as input, then applies a two-stage training process. The output is a refined model capable of more trustworthy and transparent reasoning, suitable for developers building advanced AI applications.

Use this if you are an AI developer or researcher focused on building Large Reasoning Models and need to systematically improve their trustworthiness, specifically in terms of reliability, faithfulness, and interpretability.

Not ideal if you are an end-user looking for a pre-built application or a non-developer seeking to improve model performance without engaging in model training or evaluation.

Tags: AI development · Large Language Models · model training · AI trustworthiness · reasoning AI

No license · No package · No dependents
Maintenance: 6 / 25
Adoption: 3 / 25
Maturity: 7 / 25
Community: 12 / 25


Stars: 4
Forks: 1
Language: Python
License: none
Last pushed: Oct 31, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trustworthy-ML-Lab/Training_Trustworthy_LRM_with_Refine"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
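For programmatic access, the curl command above can be wrapped in a small Python helper. This is a minimal sketch: the URL path comes straight from the endpoint shown on this page, but the JSON field names (`score`, `tier`) are assumptions for illustration, not a documented response schema.

```python
import json
from urllib.parse import quote

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

def summarize(payload: str) -> str:
    """Render a one-line summary from an API JSON payload.
    NOTE: the 'score' and 'tier' keys are hypothetical; check the
    actual response before relying on them."""
    data = json.loads(payload)
    return f"{data['score']} / 100 ({data['tier']})"

# Hand-written sample payload mirroring the numbers on this page:
sample = '{"score": 28, "tier": "Experimental"}'
print(quality_url("Trustworthy-ML-Lab", "Training_Trustworthy_LRM_with_Refine"))
print(summarize(sample))  # → 28 / 100 (Experimental)
```

Pair `quality_url` with any HTTP client (e.g. `urllib.request.urlopen`) to fetch the live data; the helper only builds the URL so it stays testable offline.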