hlt-mt/pangolinn

Just as a pangolin hunts for bugs and catches them, the goal of this library is to help developers find bugs in their neural networks and newly created models.

Score: 30 / 100 (Emerging)

This library helps machine learning engineers and researchers ensure their neural network models work correctly before deployment. It takes a newly developed neural network model and a set of predefined tests, then checks if the model behaves as expected under various conditions, such as handling padded input or maintaining causality. The output is a report detailing whether the model passed or failed these crucial checks, helping developers catch bugs early.
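To illustrate the kind of check described above, here is a minimal, self-contained sketch of a padded-input test. It does not use pangolinn's actual API; the `toy_model` and `check_padding_invariance` names are hypothetical, and the model is a stand-in position-wise transform chosen so the property holds by construction.

```python
import numpy as np

def toy_model(x, lengths):
    # Hypothetical stand-in model: a position-wise transform, so each
    # output depends only on its own input and padding cannot leak in.
    return np.tanh(x * 2.0 + 1.0)

def check_padding_invariance(model, x, lengths, pad_len=3):
    """Compare outputs on the original batch vs. a zero-padded batch.

    A model handles padding correctly if the outputs at valid positions
    are identical regardless of how much padding follows them.
    """
    batch, seq, dim = x.shape
    padded = np.concatenate([x, np.zeros((batch, pad_len, dim))], axis=1)
    out_plain = model(x, lengths)
    out_padded = model(padded, lengths)
    # Only compare the valid (non-padded) positions of each sequence.
    for b, n in enumerate(lengths):
        if not np.allclose(out_plain[b, :n], out_padded[b, :n]):
            return False
    return True

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 5, 4))
print(check_padding_invariance(toy_model, x, lengths=[5, 3]))  # True
```

A failing check on a real model would typically point at a layer (e.g. unmasked attention or batch statistics) that lets padded positions influence valid ones.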

No commits in the last 6 months. Available on PyPI.

Use this if you are developing new neural network architectures and need to systematically verify their fundamental properties like padding invariance or causal behavior.
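A causality check can be sketched the same way: perturb the input after some timestep and assert that outputs up to that timestep are unchanged. Again, this is a generic illustration, not pangolinn's API; `causal_model` and `check_causality` are hypothetical names, and the cumulative-mean model is causal by construction.

```python
import numpy as np

def causal_model(x):
    # Hypothetical causal model: cumulative mean over time, so the
    # output at step t uses only inputs at steps <= t.
    csum = np.cumsum(x, axis=1)
    steps = np.arange(1, x.shape[1] + 1)[None, :, None]
    return csum / steps

def check_causality(model, x, step=2):
    """Perturb the input strictly after `step`; a causal model's
    outputs up to and including `step` must be unchanged."""
    perturbed = x.copy()
    perturbed[:, step + 1:] += 10.0  # change only future timesteps
    out_a = model(x)
    out_b = model(perturbed)
    return np.allclose(out_a[:, :step + 1], out_b[:, :step + 1])

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 6, 3))
print(check_causality(causal_model, x))  # True
```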

Not ideal if you are a non-developer seeking an automated solution for general machine learning model validation or performance benchmarking.

Tags: neural-network-development, machine-learning-engineering, model-validation, deep-learning-research, software-quality
Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 25 / 25
Community: 0 / 25


Stars: 13
Forks:
Language: Python
License: Apache-2.0
Last pushed: May 18, 2024
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hlt-mt/pangolinn"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.