mjalali/embedding-comparison

[ICML 2025] Official implementation of the SPEC method for interpretable embedding comparison. Paper: "Towards an Explainable Comparison and Alignment of Feature Embeddings".

Quality score: 33 / 100 (Emerging)

This tool helps researchers and data scientists understand how different AI models, like DINOv2 or CLIP, interpret and group similar images or data points. It takes pre-computed feature embeddings from two models and a corresponding dataset of images, then identifies specific clusters of data that one model groups together differently than the other. The output includes visual plots showing these differences and sample images for each identified cluster, providing a clear explanation of where and how two models diverge in their understanding.
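The idea in that paragraph can be pictured with a minimal spectral sketch: take two precomputed embedding matrices for the same samples, build a kernel (similarity) matrix for each, and inspect the top eigenvector of their difference; its largest-magnitude entries mark the samples that one model groups together much more tightly than the other. This is only an illustration of the general approach, not the repository's actual API: the names `gaussian_kernel` and `differing_clusters` and the `sigma`/`top_k` parameters are hypothetical.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def differing_clusters(emb_a, emb_b, top_k=5, sigma=1.0):
    """Indices of the samples that most drive the difference between
    the two embeddings' similarity structures (one 'difference cluster')."""
    Ka = gaussian_kernel(emb_a, sigma)
    Kb = gaussian_kernel(emb_b, sigma)
    # Eigenvectors of the kernel difference highlight groupings that
    # one embedding encodes more strongly than the other.
    vals, vecs = np.linalg.eigh(Ka - Kb)      # eigenvalues in ascending order
    top = vecs[:, np.argmax(vals)]            # eigenvector of largest eigenvalue
    return np.argsort(-np.abs(top))[:top_k]   # samples driving the difference
```

For example, if model A cleanly separates two groups of images while model B collapses them into one, the returned indices concentrate on the samples whose grouping the two models disagree about, which the tool then visualizes with plots and sample images.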

No commits in the last 6 months.

Use this if you need to explain why two different AI models produce varying results on the same dataset, beyond just looking at numerical accuracy scores.

Not ideal if you're looking for a simple pass/fail metric for model performance or if your primary goal is to train a new embedding model from scratch.

Tags: AI-model-comparison · image-recognition · feature-analysis · model-interpretability · computer-vision
Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 15 / 25
Community: 10 / 25


Stars: 15
Forks: 2
Language: Python
License: MIT
Last pushed: Oct 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/mjalali/embedding-comparison"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.