BARL-SSL/reptrix

Library that provides metrics to assess representation quality

Score: 31 / 100 (Emerging)

This library helps machine learning researchers and engineers evaluate how well their deep learning models learn meaningful features from images. You provide features extracted from your image data with a pretrained model, and it returns numerical scores such as alpha, RankMe, and LiDAR. These scores indicate the quality, capacity, and separability of the learned representations, especially for models trained with self-supervised learning.
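To make concrete what one of these metrics measures: RankMe estimates the effective rank of a feature matrix from the entropy of its normalized singular values, so richer (higher-rank) representations score higher and collapsed ones score near 1. The snippet below is a standalone NumPy sketch of that published definition, not reptrix's own API; the function name `rankme` and the epsilon value are our choices.

```python
import numpy as np

def rankme(features: np.ndarray, eps: float = 1e-7) -> float:
    """Effective rank of an (n, d) feature matrix, computed as the
    exponential of the entropy of its normalized singular values
    (the RankMe metric). Ranges from ~1 (collapsed) to min(n, d)."""
    s = np.linalg.svd(features, compute_uv=False)
    p = s / (s.sum() + eps) + eps          # normalized singular values
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
# Full-rank random features: effective rank approaches min(n, d) = 64.
rich = rng.normal(size=(512, 64))
# Rank-1 ("collapsed") features: effective rank stays near 1.
collapsed = np.outer(rng.normal(size=512), rng.normal(size=64))
print(rankme(rich), rankme(collapsed))
```

The contrast between the two scores is the point of such metrics: they flag representation collapse in self-supervised models without needing labels or a downstream task.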

No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning practitioner working with computer vision models, particularly those trained with self-supervised learning, and need to quantitatively assess the quality and interpretability of your learned representations.

Not ideal if your primary goal is to train models from scratch or if you are not working with deep learning models and learned representations in computer vision.

computer-vision deep-learning representation-learning model-evaluation self-supervised-learning
Stale (6m)
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 0 / 25

Stars: 24
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 05, 2025
Commits (30d): 0
Dependencies: 6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/BARL-SSL/reptrix"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.