Lightning-AI/torchmetrics
Machine learning metrics for distributed, scalable PyTorch applications.
When training machine learning models, evaluating performance accurately can be tricky, especially with large datasets, complex models, or distributed setups. This project provides a comprehensive toolkit of standard and custom evaluation metrics for PyTorch models: it takes predictions and ground-truth values from your model and returns key performance indicators such as accuracy, precision, or recall. Machine learning engineers and researchers developing models in PyTorch will find it particularly useful.
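Metrics in torchmetrics follow an update/compute pattern: state is accumulated batch by batch with `update()`, and the final value is derived once with `compute()`. A minimal plain-Python sketch of that pattern for accuracy (an illustration of the idea, not the library's actual classes):

```python
class SketchAccuracy:
    """Toy illustration of the update/compute pattern (plain Python, no torch)."""

    def __init__(self):
        self.correct = 0  # running count of correct predictions
        self.total = 0    # running count of all predictions seen

    def update(self, preds, targets):
        # Accumulate batch statistics instead of storing every prediction.
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        # Derive the metric from the accumulated state.
        return self.correct / self.total


acc = SketchAccuracy()
acc.update([0, 1, 1], [0, 1, 0])  # batch 1: 2 of 3 correct
acc.update([1, 0], [1, 0])        # batch 2: 2 of 2 correct
print(acc.compute())              # 4 correct out of 5 -> 0.8
```

Accumulating counts rather than raw predictions keeps memory constant and makes the state trivial to synchronize across processes, which is the core design torchmetrics builds on.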
2,418 stars. Used by 78 other packages. Actively maintained with 14 commits in the last 30 days. Available on PyPI.
Use this if you are a machine learning engineer or researcher developing PyTorch models and need a robust, standardized way to calculate and track performance metrics, especially in distributed training environments.
Not ideal if you are not working with PyTorch models or primarily need to evaluate models in other machine learning frameworks.
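The distributed angle matters because naively averaging each worker's locally computed accuracy is biased when workers process different numbers of samples; reducing the raw counts across workers first, then computing once, gives the exact global value. A small plain-Python sketch of the difference (worker states here are hypothetical numbers, not library output):

```python
# Per-worker metric state as (correct, total); worker batch sizes are uneven.
workers = [(9, 10), (1, 2)]  # worker 0: 90% on 10 samples; worker 1: 50% on 2

# Naive: average the per-worker accuracies -> overweights the small worker.
naive = sum(c / t for c, t in workers) / len(workers)

# Correct: reduce (sum) the raw state across workers, then compute once.
correct = sum(c for c, _ in workers)
total = sum(t for _, t in workers)
exact = correct / total

print(naive)  # 0.7
print(exact)  # 10/12, about 0.833
```

Synchronizing metric state across processes (rather than per-process results) is the behavior torchmetrics automates in distributed training.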
Stars
2,418
Forks
474
Language
Python
License
Apache-2.0
Category
ML frameworks
Last pushed
Mar 09, 2026
Commits (30d)
14
Dependencies
4
Reverse dependents
78
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Lightning-AI/torchmetrics"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
pytorch/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
keras-team/keras
Deep Learning for humans
Lightning-AI/pytorch-lightning
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
lanpa/tensorboardX
tensorboard for pytorch (and chainer, mxnet, numpy, ...)
rwth-i6/returnn
The RWTH extensible training framework for universal recurrent neural networks