camelop/NLP-Robustness
OOD Generalization and Detection (ACL 2020)
This project helps machine learning researchers and NLP practitioners evaluate and improve how well their language models generalize to data that differs from what they were trained on. It takes existing NLP models, particularly those built on pretrained transformers such as BERT, and reports both their accuracy under distribution shift and their ability to detect out-of-distribution (OOD) inputs. This is crucial for anyone deploying NLP models in dynamic, real-world environments; a minimal sketch of one such detection score follows the notes below.
No commits in the last 6 months.
Use this if you need to understand and enhance the reliability of your NLP models when faced with unexpected or novel text inputs.
Not ideal if you are looking for a general-purpose NLP library for common tasks like sentiment analysis or text summarization without a focus on out-of-distribution performance.
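For orientation, here is a minimal sketch of the kind of OOD detection signal such evaluations commonly measure: the maximum softmax probability (MSP) of a transformer classifier, where lower confidence suggests an input is out-of-distribution. This is illustrative only, not the repository's actual code; the model name and example texts are placeholders, and a real evaluation would use a classifier fine-tuned on the in-distribution task.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint; in practice, load a model fine-tuned on your task
# (loading the base checkpoint here gives a randomly initialized classifier head).
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def msp_score(texts):
    """Return the max softmax probability per input; lower values suggest OOD."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return torch.softmax(logits, dim=-1).max(dim=-1).values

# In-distribution inputs typically score higher than OOD ones.
print(msp_score(["The movie was great."]))
print(msp_score(["Colorless green ideas sleep furiously."]))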
Stars: 59
Forks: 9
Language: Python
License: —
Category:
Last pushed: Apr 15, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/camelop/NLP-Robustness"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
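The same endpoint can be queried from Python instead of curl. This is a minimal sketch using the URL from the example above; it assumes the endpoint returns JSON, which is not documented here.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/camelop/NLP-Robustness"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limits or server errors
print(resp.json())       # repo quality metrics, assumed to be JSON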
Higher-rated alternatives
VectorInstitute/odyssey
A toolkit for developing foundation models using Electronic Health Record (EHR) data.
ycq091044/BIOT
BIOT - A framework for pretraining biosignals at scale. Large EEG pre-trained models.
AntixK/PyTorch-Model-Compare
Compare neural networks by their feature similarity
woodRock/fishy-business
Machine Learning for Rapid Evaporative Ionization Mass Spectrometry for Marine Biomass Analysis...
soda-inria/carte
Repository for CARTE: Context-Aware Representation of Table Entries