Trustworthy-ML-Lab/Training_Trustworthy_LRM_with_Refine
A new training framework for Trustworthy Large Reasoning Models
This framework helps AI developers create Large Reasoning Models that are more reliable, faithful, and interpretable. It takes existing language models and training data as input, then applies a two-stage training process. The output is a refined model capable of more trustworthy and transparent reasoning, suitable for developers building advanced AI applications.
Use this if you are an AI developer or researcher focused on building Large Reasoning Models and need to systematically improve their trustworthiness, specifically in terms of reliability, faithfulness, and interpretability.
Not ideal if you are an end user looking for a pre-built application, or a non-developer who wants better model performance without doing any model training or evaluation.
Stars: 4
Forks: 1
Language: Python
License: —
Category:
Last pushed: Oct 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Trustworthy-ML-Lab/Training_Trustworthy_LRM_with_Refine"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
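The endpoint above can also be called programmatically. The sketch below builds the URL from an owner/repo pair and fetches the response with the standard library; the URL pattern is inferred from the single example above, and the shape of the JSON payload is an assumption, not documented here.

```python
import json
import urllib.request

# Base path inferred from the curl example above (assumption: the last two
# path segments are always <owner>/<repo>).
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Return the API endpoint URL for a repository's quality data."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (requires network access;
    the response schema is assumed to be a JSON object)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("Trustworthy-ML-Lab", "Training_Trustworthy_LRM_with_Refine"))
```

Within the free tier, no authentication header is needed; how an API key is passed for the higher limit is not shown here, so it is omitted.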
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
datamllab/awesome-fairness-in-ai
A curated list of awesome Fairness in AI resources