Kaleidophon/nlp-uncertainty-zoo

Model zoo for different kinds of uncertainty quantification methods used in Natural Language Processing, implemented in PyTorch.

Quality score: 34 / 100 (Emerging)

This project offers a collection of pre-built models specifically designed to quantify how confident a Natural Language Processing (NLP) model is in its predictions. It takes raw text data as input and outputs not only the NLP model's prediction (like a classification or a generated sequence) but also a score indicating the uncertainty of that prediction. Researchers and engineers working on advanced NLP applications would use this to build more reliable and transparent AI systems.
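
Uncertainty quantification methods of the kind collected here (for example, Monte Carlo dropout) typically derive an uncertainty score from the spread of multiple stochastic forward passes. Below is a minimal, self-contained sketch of that general idea in plain PyTorch; it is not this library's API, and the toy model, dimensions, and inputs are hypothetical.

import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for a real NLP model.
model = nn.Sequential(
    nn.Linear(768, 256),
    nn.ReLU(),
    nn.Dropout(p=0.1),
    nn.Linear(256, 3),  # e.g., 3 output classes
)

def mc_dropout_predict(model, x, n_samples=20):
    # Keep dropout active at inference time so each pass is stochastic.
    model.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    # Predictive entropy of the averaged distribution is the uncertainty score.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

x = torch.randn(4, 768)  # a batch of 4 pre-encoded inputs
mean_probs, uncertainty = mc_dropout_predict(model, x)
print(mean_probs.argmax(dim=-1), uncertainty)

Higher entropy flags inputs where the averaged prediction is spread across classes, which is the kind of score this zoo's models attach to their outputs.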

No commits in the last 6 months. Available on PyPI.

Use this if you need to understand or improve the reliability of your NLP model's predictions by quantifying the uncertainty associated with them.

Not ideal if you are looking for a simple, off-the-shelf NLP solution that doesn't require deep understanding or customization of uncertainty quantification methods.

Tags: Natural Language Processing, NLP Model Evaluation, AI Reliability, Predictive Confidence, Machine Learning Research
Badges: No License · Stale (6m)
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 17 / 25
Community: 9 / 25


Stars: 55
Forks: 4
Language: Python
License: None
Last pushed: May 05, 2023
Commits (30d): 0
Dependencies: 16

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Kaleidophon/nlp-uncertainty-zoo"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
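
The same endpoint can also be queried from Python; here is a minimal sketch using the requests library (the response is assumed to be JSON, and its exact schema is not documented here):

import requests

# Endpoint from the curl example above; no API key is needed
# for up to 100 requests per day.
url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/Kaleidophon/nlp-uncertainty-zoo"
)

response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors
data = response.json()       # assumed JSON payload
print(data)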