mourga/transformer-uncertainty

Code for evaluating uncertainty estimation methods for Transformer-based architectures in natural language understanding tasks.

Score: 24 / 100 (Experimental)

This project helps machine learning engineers and researchers assess the reliability of predictions from Transformer-based models. Given a trained Transformer and one or more natural language understanding datasets, it applies a range of uncertainty estimation techniques to quantify the model's confidence in its outputs. The results help you identify when the model is likely to be wrong, supporting more robust deployment decisions.
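To illustrate the kind of technique involved, here is a minimal Monte Carlo Dropout sketch for a Hugging Face Transformer classifier. The checkpoint name, input sentence, and number of forward passes are placeholder choices for the example, not code drawn from this repository.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint; swap in your own fine-tuned model.
MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

# MC Dropout: keep dropout layers active at inference time by staying in
# train mode, then run several stochastic forward passes.
model.train()

inputs = tokenizer("The plot was predictable but charming.", return_tensors="pt")

with torch.no_grad():
    probs = torch.stack(
        [torch.softmax(model(**inputs).logits, dim=-1) for _ in range(20)]
    )

mean_probs = probs.mean(dim=0)  # averaged class probabilities (the prediction)
std_probs = probs.std(dim=0)    # spread across passes (the uncertainty signal)
print("prediction:", mean_probs)
print("uncertainty:", std_probs)

A large spread across passes flags inputs where the model's prediction should be treated with caution.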

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher who needs to rigorously evaluate the trustworthiness and confidence levels of your Transformer models on natural language understanding tasks.

Not ideal if you are looking for a pre-packaged, production-ready solution for adding uncertainty estimation to a deployed NLP application, or if you are not comfortable with machine learning research workflows.

natural-language-understanding model-evaluation AI-safety predictive-confidence machine-learning-research
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 44
Forks:
Language: Python
License: MIT
Last pushed: Aug 16, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mourga/transformer-uncertainty"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
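For programmatic access from Python, a standard-library sketch is below. The response schema is not documented on this page, so the example simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

# Same endpoint as the curl example above; no API key needed
# within the free 100 requests/day tier.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/mourga/transformer-uncertainty")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Schema unknown here, so just pretty-print the full response.
print(json.dumps(data, indent=2))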