Azure/MS-AMP
Microsoft Automatic Mixed Precision Library
This library helps deep learning engineers train large language models more efficiently. It automatically lowers the precision of selected computations during training, cutting memory usage and increasing throughput. Researchers and practitioners working with deep neural networks, especially large language models, will find it useful for faster experimentation and deployment.
Use this if you are a deep learning engineer or researcher looking to significantly accelerate the training of large language models while minimizing computational resources.
Not ideal if you are not working with deep learning models or do not require specialized precision optimization for model training.
Stars: 634
Forks: 49
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Dec 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Azure/MS-AMP"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
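The endpoint can also be called from code. Below is a minimal Python sketch using only the standard library; the JSON field names (`stars`, `forks`, `language`) are illustrative assumptions, since the response schema is not documented here:

```python
import json
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-repo endpoint URL shown in the curl example above.
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("ml-frameworks", "Azure", "MS-AMP")

# Parse a response body; these field names are assumptions,
# not confirmed by the API's documentation.
sample = json.loads('{"stars": 634, "forks": 49, "language": "Python"}')
print(url)
print(sample["stars"])
```

Fetching the URL with `urllib.request.urlopen(url)` (or `requests.get(url)`) and passing the body to `json.loads` would complete the round trip.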
Higher-rated alternatives
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit...
mlcommons/inference
Reference implementations of MLPerf® inference benchmarks
mlcommons/training
Reference implementations of MLPerf® training benchmarks
datamade/usaddress
:us: a python library for parsing unstructured United States address strings into address components
GRAAL-Research/deepparse
Deepparse is a state-of-the-art library for parsing multinational street addresses using deep learning