Lightning-AI/torchmetrics

Machine learning metrics for distributed, scalable PyTorch applications.

82 / 100 (Verified)

When training machine learning models, accurately evaluating their performance can be tricky, especially with large datasets or complex models. This project provides a comprehensive toolkit for calculating standard and custom evaluation metrics during the training of PyTorch models. It takes in predictions and ground truth values from your model and outputs key performance indicators like accuracy, precision, or recall. Machine learning engineers and researchers using PyTorch for model development will find this particularly useful.

2,418 stars. Used by 78 other packages. Actively maintained with 14 commits in the last 30 days. Available on PyPI.

Use this if you are a machine learning engineer or researcher developing PyTorch models and need a robust, standardized way to calculate and track performance metrics, especially in distributed training environments.

Not ideal if you are not working with PyTorch models or primarily need to evaluate models in other machine learning frameworks.

machine-learning-engineering model-evaluation deep-learning pytorch-development distributed-training
Maintenance: 17 / 25
Adoption: 15 / 25
Maturity: 25 / 25
Community: 25 / 25

How are scores calculated?
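The overall score appears to be a simple sum of the four category scores, each capped at 25. A quick plain-Python check of that assumption:

```python
# Category scores as shown on the card (each out of 25).
scores = {"Maintenance": 17, "Adoption": 15, "Maturity": 25, "Community": 25}

total = sum(scores.values())
print(total)  # 82, matching the 82 / 100 overall score
```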

Stars: 2,418
Forks: 474
Language: Python
License: Apache-2.0
Last pushed: Mar 09, 2026
Commits (30d): 14
Dependencies: 4
Reverse dependents: 78

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Lightning-AI/torchmetrics"

Open to everyone at 100 requests/day with no key needed. Get a free key for 1,000 requests/day.