ills-montreal/emir

Paper: "When is an Embedding Model More Promising than Another?" (NeurIPS 2024)

Score: 13/100 (Experimental)

This project helps machine learning researchers and practitioners compare embedding models to determine which is most suitable for a given task. Given two embedding models, it outputs a measure of how much useful information one captures that the other does not, supporting model selection in areas such as natural language processing and molecular modeling.
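
The repository implements the information-theoretic estimator from the paper; its exact API is not shown on this page, so as a rough, self-contained illustration of the underlying idea (how predictable one model's embeddings are from another's), the sketch below fits a linear ridge probe from model A's embeddings to model B's and reports held-out explained variance. The synthetic data, dimensions, and the ridge proxy are all assumptions for illustration, not the repository's actual method.

# Illustrative sketch only: a crude linear proxy for "how much of model B's
# embedding space is recoverable from model A's". The emir repository uses
# its own estimator; everything below is assumed for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for embeddings of the same 1,000 inputs under two models
# (hypothetical shapes; real embeddings would come from the models).
emb_a = rng.normal(size=(1000, 384))
shared = emb_a @ rng.normal(size=(384, 256))
emb_b = 0.5 * shared + rng.normal(size=(1000, 256))

a_train, a_test, b_train, b_test = train_test_split(
    emb_a, emb_b, test_size=0.2, random_state=0)

# Fit a linear map A -> B; a high held-out R^2 suggests model A's
# embeddings carry most of what model B encodes (under a linear probe).
probe = Ridge(alpha=1.0).fit(a_train, b_train)
print("A -> B linear probe R^2:", r2_score(b_test, probe.predict(a_test)))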

No commits in the last 6 months.

Use this if you are working with machine learning models and need to quantitatively assess whether one embedding model provides significantly more relevant information than another for your particular application.

Not ideal if you are looking for a tool to train new embedding models or improve the performance of a single embedding model rather than comparing two existing ones.

machine-learning-research embedding-model-evaluation natural-language-processing molecular-modeling model-comparison
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0/25
Adoption: 5/25
Maturity: 8/25
Community: 0/25


Stars: 13
Forks:
Language: Jupyter Notebook
License: None
Last pushed: Nov 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/ills-montreal/emir"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
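
A minimal Python equivalent of the curl call above, assuming only the URL shown; the response schema is not documented on this page, so the sketch just pretty-prints whatever JSON the endpoint returns.

# Fetch the same quality data from Python instead of curl.
import json
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/embeddings/ills-montreal/emir"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting)
print(json.dumps(resp.json(), indent=2))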