aman179102/trust-aware

A trust-aware, human-in-the-loop AI decision system that knows when not to trust model confidence.

Quality score: 32 / 100 (Emerging)

This system helps organizations safely automate text analysis by identifying when an AI model might be wrong, even if it seems confident. You provide text, and it determines if the AI can confidently process it or if a human needs to review it, along with an explanation. It's ideal for anyone managing content, customer interactions, or data where AI mistakes could be costly.

Use this if you need to automate sentiment analysis or similar text classification tasks but want to ensure that ambiguous or risky inputs are always flagged for human review, preventing confident but incorrect AI decisions.
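The core pattern described above, routing a prediction either to automation or to human review based on model confidence, can be sketched in a few lines. This is a hypothetical illustration, not the repository's actual API: the `route` function, its return values, and the 0.8 threshold are all assumptions.

```python
# Hypothetical sketch of confidence-gated routing. Assumes an upstream
# classifier that returns (label, confidence); names and the threshold
# are illustrative, not taken from the trust-aware codebase.
def route(label: str, confidence: float, threshold: float = 0.8) -> tuple[str, str]:
    """Decide whether a prediction is safe to automate.

    Returns ("auto", label) when confidence clears the threshold,
    otherwise ("human_review", label) so a person checks the result.
    """
    if confidence >= threshold:
        return ("auto", label)
    return ("human_review", label)
```

In practice a system like this would also attach an explanation for why an input was flagged (e.g., low confidence, ambiguous wording), as the description above suggests.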

Not ideal if your workflow requires full automation without any human oversight for text analysis, or if you need to train your own custom machine learning model rather than using a pre-trained one.

Tags: content-moderation, customer-service-automation, sentiment-analysis, data-quality-assurance, compliance-review

No license · no package · no dependents

Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 3 / 25
Community: 14 / 25


Stars: 9
Forks: 3
Language: Python
License: none
Last pushed: Feb 02, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aman179102/trust-aware"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
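The same endpoint can be queried from Python using only the standard library. A minimal sketch: the fetch helper mirrors the curl command above, while the response field name (`"score"`) in the parsing helper is an assumption, since the API's schema is not documented here.

```python
import json
from urllib.request import urlopen

# Endpoint taken verbatim from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aman179102/trust-aware"

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality report as a dict (no API key: 100 requests/day)."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

def parse_score(payload: str) -> int:
    """Extract the overall score from a JSON payload.

    The "score" field name is a guess at the schema, for illustration only.
    """
    return json.loads(payload)["score"]
```

With a free key (1,000 requests/day), you would presumably pass it as a header or query parameter; check the service's documentation for the exact mechanism.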