JulesBelveze/nhelper
🧪 Behavioral testing of NLP models 🧪
This tool helps developers behaviorally test NLP models. Given a model and a set of text inputs, it generates perturbed versions of those inputs and observes how the model's predictions change. The output highlights potential weaknesses and unexpected reactions, helping developers confirm that a model is robust before deployment.
No commits in the last 6 months. Available on PyPI.
Use this if you are an NLP developer who needs to thoroughly check the reliability and fairness of your models against real-world data variations.
Not ideal if you are looking for general model evaluation metrics or performance benchmarks, as this focuses specifically on behavioral stress-testing.
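To illustrate the approach, here is a minimal, library-agnostic sketch of an invariance test driven by input perturbation. All names here (toy_model, perturb_typo, invariance_test) are hypothetical illustrations and do not reflect nhelper's actual API.

from typing import Callable, List

def perturb_typo(text: str) -> str:
    # Swap the first two characters to simulate a simple typo.
    return text if len(text) < 2 else text[1] + text[0] + text[2:]

def invariance_test(model: Callable[[str], str], inputs: List[str],
                    perturb: Callable[[str], str]) -> List[str]:
    # Flag inputs whose prediction changes under a label-preserving perturbation.
    failures = []
    for text in inputs:
        before, after = model(text), model(perturb(text))
        if before != after:
            failures.append(f"{text!r}: {before} -> {after}")
    return failures

# Toy keyword "model" standing in for a real NLP classifier.
def toy_model(text: str) -> str:
    return "positive" if "good" in text.lower() else "negative"

print(invariance_test(toy_model, ["This is good", "Good stuff"], perturb_typo))

In this sketch "Good stuff" fails: the typo breaks the keyword match and the prediction flips, which is exactly the kind of brittleness a behavioral test is meant to surface.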
Stars: 7
Forks: 2
Language: Python
License: MIT
Category: NLP
Last pushed: Apr 30, 2023
Commits (30d): 0
Dependencies: 8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/JulesBelveze/nhelper"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
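The same endpoint can also be queried from Python; a minimal sketch, assuming the endpoint returns a JSON body (requires the requests package):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/JulesBelveze/nhelper"
response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())  # assumption: JSON mirroring the fields shown above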
Higher-rated alternatives
natasha/ipymarkup
NER, syntax markup visualizations
neomatrix369/nlp_profiler
A simple NLP library that allows profiling datasets with one or more text columns. When given a...
thepushkarp/nalcos
Search Git commits in natural language
lyeoni/nlp-tutorial
A list of NLP (Natural Language Processing) tutorials
NirantK/NLP_Quickbook
NLP in Python with Deep Learning