Kaleidophon/deep-significance

Enabling easy statistical significance testing for deep neural networks.

Score: 38 / 100 (Emerging)

This tool helps machine learning practitioners confidently compare the performance of deep neural network models. By analyzing scores from multiple training runs, it determines whether one model truly performs better than another, rather than relying on single-score comparisons that can be misleading due to random chance. This is crucial for anyone developing or evaluating deep learning models in fields like NLP, computer vision, or reinforcement learning.

339 stars. No commits in the last 6 months.

Use this if you need rigorous statistical evidence that a new deep learning model or algorithm outperforms an existing one, especially when performance varies across training runs.

Not ideal if you are looking for tools to improve model training speed, explore new architectures, or visualize model internals, as its focus is solely on statistical comparison of trained models.
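For illustration, here is a minimal sketch of the kind of comparison the package supports, following the `aso` (Almost Stochastic Order) example from its README. The exact signature (including the `seed` keyword) is an assumption to verify against the current documentation.

# Minimal sketch of comparing two models with deep-significance's ASO test.
# Assumes `pip install deepsig`; the `aso` call follows the package README,
# but verify the signature against the current docs.
import numpy as np
from deepsig import aso

rng = np.random.default_rng(1234)

# Scores (e.g., accuracy) from five training runs per model.
my_model_scores = rng.normal(loc=0.85, scale=0.02, size=5)
baseline_scores = rng.normal(loc=0.82, scale=0.03, size=5)

eps_min = aso(my_model_scores, baseline_scores, seed=1234)

# eps_min close to 0 indicates the first model is stochastically dominant;
# values at or above 0.5 indicate no evidence of superiority.
print(f"eps_min = {eps_min:.3f}")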

Tags: deep-learning-evaluation, model-comparison, machine-learning-research, statistical-analysis, neural-network-performance
Status: Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 12 / 25

How are scores calculated? The overall score is the sum of the four 25-point subscores: Maintenance 0 + Adoption 10 + Maturity 16 + Community 12 = 38.

Stars: 339
Forks: 20
Language: Python
License: GPL-3.0
Category: ml-frameworks
Last pushed: Jul 01, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Kaleidophon/deep-significance"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
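For programmatic access, a hedged Python sketch of the same request follows. The endpoint is taken from the curl example above, but the shape of the JSON response is an assumption, so inspect it before relying on specific keys.

# Fetch the quality data from the API shown above; the response schema is
# not documented here, so print it to discover the available keys.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/Kaleidophon/deep-significance")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())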