aman179102/trust-aware
A trust-aware, human-in-the-loop AI decision system that knows when not to trust model confidence.
This system helps organizations safely automate text analysis by identifying when an AI model may be wrong even when it reports high confidence. You provide text; the system either processes it automatically or flags it for human review, with an explanation either way. It's ideal for anyone managing content, customer interactions, or other data where AI mistakes could be costly.
Use this if you need to automate sentiment analysis or similar text classification tasks but want to ensure that ambiguous or risky inputs are always flagged for human review, preventing confident but incorrect AI decisions.
Not ideal if your text-analysis workflow requires full automation with no human oversight, or if you need to train a custom machine learning model rather than use a pre-trained one.
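The routing idea described above can be sketched in a few lines. This is a minimal illustration, not the repo's actual API: the function name, thresholds, and the probability-plus-margin rule are all assumptions. A prediction is auto-accepted only when the top-class probability clears a threshold and its margin over the runner-up class is wide enough; anything else is escalated to a human with a reason attached.

```python
# Hypothetical sketch of confidence-gated, human-in-the-loop routing.
# All names and thresholds are illustrative, not from the repository.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str        # predicted class
    automated: bool   # True = safe to auto-process, False = human review
    reason: str       # human-readable explanation

def route(probs: dict[str, float],
          threshold: float = 0.85,
          min_margin: float = 0.30) -> Decision:
    """Decide whether to trust one classifier output or escalate it."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    top_label, top_p = ranked[0]
    # Margin between the best and second-best class probabilities.
    margin = top_p - (ranked[1][1] if len(ranked) > 1 else 0.0)

    if top_p >= threshold and margin >= min_margin:
        return Decision(top_label, True,
                        f"confidence {top_p:.2f} and margin {margin:.2f} clear thresholds")
    return Decision(top_label, False,
                    f"confidence {top_p:.2f} or margin {margin:.2f} below thresholds")

# Confident, well-separated prediction -> automated
print(route({"positive": 0.95, "negative": 0.05}).automated)  # True
# Ambiguous prediction -> routed to a human
print(route({"positive": 0.55, "negative": 0.45}).automated)  # False
```

Gating on the margin as well as the raw probability is one common guard against the "confident but incorrect" failure mode the description mentions, since a model can assign a high top probability while two classes remain nearly tied under small input perturbations.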
Stars
9
Forks
3
Language
Python
License
—
Category
—
Last pushed
Feb 02, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aman179102/trust-aware"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations...
holistic-ai/holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment...
datamllab/awesome-fairness-in-ai
A curated list of awesome Fairness in AI resources