NVIDIA/framework-reproducibility
Providing reproducibility in deep learning frameworks
When training deep learning models, small run-to-run variations in results can make it hard to confidently compare models or hyperparameter settings. This project helps scientists and machine learning engineers obtain consistent, bit-accurate results from deep learning model training, especially on GPUs. It provides documentation, patches, and tools so that running the same training setup twice yields exactly the same outcome.
434 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are a deep learning practitioner struggling with non-deterministic model training runs and need to ensure identical, reproducible results for rigorous scientific comparison or debugging.
Not ideal if you are looking for general code reproducibility across different computing environments rather than bit-accurate run-to-run consistency within deep learning frameworks.
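To make "bit-accurate" concrete, here is a minimal stdlib-only sketch (not this library's API; the training loop is a hypothetical stand-in for a real framework run). It hashes the exact IEEE-754 bit pattern of a run's final result so two runs can be compared bit-for-bit, which is the standard of reproducibility this project targets:

```python
import hashlib
import random
import struct

def tiny_training_run(seed: int, steps: int = 100) -> bytes:
    """Simulate a 'training run' as seeded pseudo-random weight updates.

    Hypothetical stand-in for a real framework training run.
    """
    rng = random.Random(seed)
    weight = 0.0
    for _ in range(steps):
        weight += rng.uniform(-1.0, 1.0) * 0.01
    # Serialize the final weight to its exact IEEE-754 double bit pattern,
    # so the comparison below is bitwise, not approximate.
    return struct.pack("<d", weight)

def run_digest(seed: int) -> str:
    """Hash the bit-exact result so two runs can be compared at a glance."""
    return hashlib.sha256(tiny_training_run(seed)).hexdigest()

# Two runs with the same seed produce identical digests.
assert run_digest(42) == run_digest(42)
# Different seeds diverge.
assert run_digest(42) != run_digest(43)
```

In real frameworks on GPUs, seeding alone is usually not sufficient; non-deterministic kernels and reduction orders can still change the bits between runs, which is the gap this project's documentation and patches address.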
Stars: 434
Forks: 38
Language: Python
License: Apache-2.0
Category:
Last pushed: May 13, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/NVIDIA/framework-reproducibility"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
Related frameworks
pymc-devs/pytensor
PyTensor allows you to define, optimize, and efficiently evaluate mathematical expressions...
arogozhnikov/einops
Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
lava-nc/lava-dl
Deep Learning library for Lava
tensorly/tensorly
TensorLy: Tensor Learning in Python.
tensorpack/tensorpack
A Neural Net Training Interface on TensorFlow, with focus on speed + flexibility