FasterAI-Labs/fasterai

FasterAI: Prune and Distill your models with FastAI and PyTorch

Score: 58/100 (Established)

This tool helps machine learning engineers optimize their neural networks to be smaller and faster. It takes an existing PyTorch-based model and applies compression techniques such as pruning, sparsification, and knowledge distillation. The output is a more efficient model that maintains its performance, ideal for deployment on edge devices or for reducing computational costs.
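To make the core idea concrete, here is a dependency-free sketch of unstructured magnitude pruning, one of the compression techniques described above. This is not fasterai's API; it is a toy illustration of the underlying principle: weights with the smallest magnitudes are assumed to contribute least and are zeroed out.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude.

    Toy illustration only -- fasterai operates on PyTorch tensors and
    integrates with the fastai training loop, not on plain lists.
    """
    n_prune = int(len(weights) * sparsity)
    # Rank weight indices by absolute value, smallest first.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    to_zero = set(order[:n_prune])
    return [0.0 if i in to_zero else w for i, w in enumerate(weights)]

pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.2], sparsity=0.5)
# The three smallest-magnitude weights are zeroed:
# [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In a real pipeline the surviving weights are then fine-tuned for a few epochs so the network recovers the accuracy lost to pruning.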

253 stars. Available on PyPI.

Use this if you need to deploy large neural networks on resource-constrained devices, reduce inference time, or lower the energy consumption of your AI models.

Not ideal if you are not working with PyTorch models, or if your primary concern is training new models from scratch rather than compressing existing ones.

Tags: model-optimization, edge-ai, deep-learning-deployment, computational-efficiency, neural-network-compression
Maintenance: 10/25
Adoption: 10/25
Maturity: 25/25
Community: 13/25


Stars: 253
Forks: 19
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Feb 06, 2026
Commits (30d): 0
Dependencies: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/FasterAI-Labs/fasterai"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.