LukasHedegaard/pytorch-benchmark

Easily benchmark PyTorch model FLOPs, latency, throughput, allocated GPU memory, and energy consumption

Score: 47/100 (Emerging)

This tool helps machine learning engineers and researchers compare the efficiency of PyTorch models. Given a model and a sample input, it measures key performance metrics: floating-point operations (FLOPs), inference speed (latency and throughput), allocated GPU memory, and energy consumption. This helps you estimate how a model will behave in a production environment or on resource-constrained devices.
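The library's own API is not shown on this page, so as a rough illustration of what the latency and throughput metrics mean, here is a minimal timing sketch in plain Python. The `work` function is a hypothetical stand-in for a model's forward pass, not part of pytorch-benchmark:

```python
import time
import statistics

def work(batch):
    # Stand-in for a model forward pass (hypothetical placeholder,
    # not part of pytorch-benchmark).
    return [x * 2 for x in batch]

def measure(fn, batch, num_runs=100):
    """Measure mean per-call latency (seconds) and derive throughput (samples/s)."""
    latencies = []
    for _ in range(num_runs):
        start = time.perf_counter()
        fn(batch)
        latencies.append(time.perf_counter() - start)
    mean_latency = statistics.mean(latencies)
    throughput = len(batch) / mean_latency
    return mean_latency, throughput

latency, throughput = measure(work, list(range(32)))
print(f"mean latency: {latency * 1e6:.1f} us, throughput: {throughput:.0f} samples/s")
```

Note the inverse relationship: for a fixed batch size, halving mean latency doubles throughput, which is why the two metrics are usually reported together.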

109 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you need to objectively compare multiple PyTorch models based on their computational cost and speed, or optimize an existing model for better performance.

Not ideal if you're looking for deep profiling tools that identify specific bottlenecks within your model's code, or if you are not working with PyTorch models.

machine-learning-engineering deep-learning-optimization model-performance resource-management
Status: Stale (no commits in 6 months)
Maintenance: 0/25
Adoption: 10/25
Maturity: 25/25
Community: 12/25


Stars: 109
Forks: 11
Language: Python
License: Apache-2.0
Last pushed: Aug 25, 2023
Commits (30d): 0
Dependencies: 8
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/LukasHedegaard/pytorch-benchmark"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
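The same endpoint can be consumed from Python with the standard library. The URL below is taken verbatim from the curl example above; the shape of the JSON response is not documented here, so this sketch just parses and returns it:

```python
import json
import urllib.request

# Endpoint copied verbatim from the curl example above.
API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/LukasHedegaard/pytorch-benchmark"
)

def fetch_quality(url=API_URL):
    # Anonymous access is rate-limited (100 requests/day per the note above).
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Usage (makes a live HTTP request):
#   print(json.dumps(fetch_quality(), indent=2))
```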