FasterAI-Labs/fasterai
FasterAI: Prune and Distill your models with FastAI and PyTorch
This tool helps machine learning engineers make their neural networks smaller and faster. It takes an existing PyTorch-based model and applies various compression techniques, such as pruning and knowledge distillation. The output is a more efficient model that preserves most of its accuracy, ideal for deployment on edge devices or for reducing computational costs.
253 stars. Available on PyPI.
Use this if you need to deploy large neural networks on resource-constrained devices, reduce inference time, or lower the energy consumption of your AI models.
Not ideal if your models are not PyTorch-based, or if your focus is on building new models rather than compressing existing ones.
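To make the pruning idea above concrete, here is a minimal, dependency-free sketch of magnitude pruning, the core technique behind weight-pruning libraries like this one. It is illustrative only: fasterai's actual interface is built on fastai callbacks, and the function below is a hypothetical helper, not part of its API.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    weights:  flat list of floats (e.g. one layer's parameters)
    sparsity: fraction in [0, 1] of weights to remove
    """
    k = int(len(weights) * sparsity)  # number of weights to drop
    if k == 0:
        return list(weights)
    # Threshold = magnitude of the k-th smallest weight
    threshold = sorted(abs(w) for w in weights)[k - 1]
    # Keep large weights, zero the rest (they contribute least)
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.05, -0.9, 0.3, -0.02, 0.7, 0.1]
pruned = prune_by_magnitude(w, 0.5)
# → [0.0, -0.9, 0.3, 0.0, 0.7, 0.0]
```

Zeroed weights can then be stored sparsely or, with structured pruning, whole channels can be removed to get real speedups on hardware.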
Stars
253
Forks
19
Language
Jupyter Notebook
License
Apache-2.0
Category
Last pushed
Feb 06, 2026
Commits (30d)
0
Dependencies
3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/FasterAI-Labs/fasterai"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
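The same endpoint can be queried from Python instead of curl. The URL is taken from the curl command above; how the free key is passed is not documented here, so the `X-API-Key` header below is an assumption.

```python
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/FasterAI-Labs/fasterai")

def build_request(api_key=None):
    """Build a GET request for the quality endpoint.

    The X-API-Key header name is assumed, not documented.
    """
    headers = {"Accept": "application/json"}
    if api_key:
        headers["X-API-Key"] = api_key  # assumed header name
    return urllib.request.Request(URL, headers=headers)

req = build_request()
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
```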
Related frameworks
open-mmlab/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
Xilinx/brevitas
Brevitas: neural network quantization in PyTorch
google/qkeras
QKeras: a quantization deep learning library for Tensorflow Keras
fastmachinelearning/qonnx
QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization...