camelop/NLP-Robustness

OOD Generalization and Detection (ACL 2020)

Score: 31 / 100 (Emerging)

This project helps machine learning researchers and NLP practitioners evaluate and improve how well their language models generalize to data that differs from their training distribution. It takes existing NLP models, particularly those built on pretrained transformers such as BERT, and reports metrics on out-of-distribution (OOD) accuracy and on the models' ability to detect inputs that fall outside their training distribution. This is crucial for anyone deploying NLP models in dynamic, real-world environments.

No commits in the last 6 months.

Use this if you need to understand and enhance the reliability of your NLP models when faced with unexpected or novel text inputs.

Not ideal if you are looking for a general-purpose NLP library for common tasks like sentiment analysis or text summarization without a focus on out-of-distribution performance.

natural-language-processing machine-learning-research model-robustness out-of-distribution-detection nlp-evaluation
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 8 / 25
Community: 15 / 25


Stars: 59
Forks: 9
Language: Python
License: none
Last pushed: Apr 15, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/camelop/NLP-Robustness"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
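For programmatic access, the curl command above can also be reproduced in Python. This is a minimal sketch: the host and the `/quality/transformers/{owner}/{repo}` path layout are taken directly from the curl example, while the assumption that the endpoint returns JSON (and the `fetch_quality` helper name) are hypothetical.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a repository."""
    return f"{API_BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch the quality payload; assumes a JSON response body."""
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # Same endpoint as the curl example for this repository.
    print(quality_url("transformers", "camelop", "NLP-Robustness"))
```

Within the free tier (100 requests/day without a key), a small script like this is enough to poll the score periodically.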