JulesBelveze/nhelper

🧪 Behavioral testing of NLP models 🧪

Score: 42 / 100 (Emerging)

This tool helps developers behaviorally test their NLP models. Given a model and a set of text inputs, it generates perturbed versions of those inputs and checks how the model's predictions change. The output highlights potential weaknesses and unexpected reactions, helping developers confirm a model is robust before deployment.
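A minimal sketch of this perturb-and-compare loop in plain Python. It assumes nothing about nhelper's actual API: perturb, check_invariance, and the toy sentiment_model below are hypothetical stand-ins used only to illustrate the idea.

import string

def perturb(text: str) -> list[str]:
    """Generate simple surface-level variants of an input."""
    return [
        text.upper(),             # casing change
        text + "!!",              # trailing punctuation
        text.replace(" ", "  "),  # extra whitespace
    ]

def check_invariance(model, texts):
    """Flag inputs whose prediction changes under perturbation."""
    failures = []
    for text in texts:
        baseline = model(text)
        for variant in perturb(text):
            prediction = model(variant)
            if prediction != baseline:
                failures.append((text, variant, baseline, prediction))
    return failures

# Trivial keyword "classifier" standing in for a real model.
def sentiment_model(text: str) -> str:
    return "positive" if "good" in text else "negative"

for original, variant, expected, got in check_invariance(
    sentiment_model, ["this is good", "this is bad"]
):
    print(f"{original!r} -> {variant!r}: expected {expected}, got {got}")

Running this flags the upper-cased variant of "this is good": the toy model's case-sensitive keyword match flips the label, which is exactly the kind of behavioral weakness this style of testing surfaces.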

No commits in the last 6 months. Available on PyPI.

Use this if you are an NLP developer who needs to thoroughly check the reliability and fairness of your models against real-world data variations.

Not ideal if you are looking for general model evaluation metrics or performance benchmarks, as this focuses specifically on behavioral stress-testing.

Tags: NLP-development, model-testing, ML-ops, text-processing, quality-assurance
Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 25 / 25
Community: 13 / 25

The four sub-scores sum to the overall score: 0 + 4 + 25 + 13 = 42 / 100.

Stars: 7
Forks: 2
Language: Python
License: MIT
Last pushed: Apr 30, 2023
Commits (30d): 0
Dependencies: 8

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/JulesBelveze/nhelper"

Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
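The same request from Python, using only the standard library. That the endpoint returns JSON is an assumption based on the API description above; the response fields are not documented here, so this sketch simply pretty-prints whatever comes back.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/JulesBelveze/nhelper"

# Fetch the quality-card data and pretty-print the (assumed) JSON body.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))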