mjalali/embedding-comparison
[ICML 2025] Official implementation of the SPEC method for interpretable embedding comparison. Paper: Towards an Explainable Comparison and Alignment of Feature Embeddings
This tool helps researchers and data scientists understand how different AI models, such as DINOv2 or CLIP, interpret and group similar images or data points. It takes pre-computed feature embeddings from two models, together with the corresponding image dataset, and identifies clusters of samples that one model groups together differently from the other. The output includes visual plots of these differences and sample images for each identified cluster, giving a concrete picture of where and how the two models diverge in their understanding.
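The core idea of comparing two embeddings via their sample-clustering behavior can be sketched with a kernel-difference eigendecomposition. This is an illustrative approximation, not the repository's actual SPEC implementation: the kernel choice, bandwidth, and cluster extraction here are assumptions, and the function names are hypothetical.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_diff_clusters(emb_a, emb_b, top_k=2, sigma=1.0):
    # Rows of emb_a and emb_b are the SAME samples embedded by two models.
    Ka = gaussian_kernel(emb_a, sigma)
    Kb = gaussian_kernel(emb_b, sigma)
    # Top eigenvectors of the kernel difference highlight groups of samples
    # that model A ties together more tightly than model B does.
    vals, vecs = np.linalg.eigh(Ka - Kb)
    order = np.argsort(vals)[::-1]  # largest eigenvalues first
    return vals[order[:top_k]], vecs[:, order[:top_k]]

# Toy usage: two random "models" embedding the same 8 samples
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(8, 16))   # e.g. one model's features
emb_b = rng.normal(size=(8, 32))   # e.g. another model's features
vals, vecs = kernel_diff_clusters(emb_a, emb_b)
```

Samples with large entries in a top eigenvector form a candidate cluster where the two embeddings disagree; the repository additionally renders these as plots and image grids.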
No commits in the last 6 months.
Use this if you need to explain why two different AI models produce varying results on the same dataset, beyond just looking at numerical accuracy scores.
Not ideal if you're looking for a simple pass/fail metric for model performance or if your primary goal is to train a new embedding model from scratch.
Stars
15
Forks
2
Language
Python
License
MIT
Category
Last pushed
Oct 02, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/mjalali/embedding-comparison"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
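For scripted access, the same endpoint from the curl example can be called from Python. The URL pattern below is taken directly from that example; the response schema is not documented here, so inspect the payload before relying on any field names.

```python
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Build the endpoint path shown in the curl example above
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("embeddings", "mjalali", "embedding-comparison")
# Uncomment to fetch (schema undocumented here; inspect before parsing):
# with urllib.request.urlopen(url) as resp:
#     print(resp.read().decode())
```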
Higher-rated alternatives
cosmosgl/graph
GPU-accelerated force graph layout and rendering
Clay-foundation/model
The Clay Foundation Model - An open source AI model and interface for Earth
nomic-ai/nomic
Nomic Developer API SDK
omoindrot/tensorflow-triplet-loss
Implementation of triplet loss in TensorFlow
sashakolpakov/dire-jax
DImensionality REduction in JAX