semvec/embedstresstest
Stress Testing Embedding Models
This tool evaluates whether embedding models capture the meaning of text rather than merely matching similar words. You supply a list of text examples (such as software component descriptions), and the tool returns an accuracy score indicating whether the model grasps the underlying semantics. It's aimed at anyone evaluating text-embedding models, such as AI product managers or data scientists measuring model performance.
No commits in the last 6 months.
Use this if you need to reliably assess whether your text-embedding model captures the true semantic meaning of descriptions, rather than being fooled by similar-sounding but different concepts.
Not ideal if you're looking for an absolute similarity score between texts, as this benchmark focuses on relative semantic understanding rather than raw numerical comparisons.
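The description above suggests a relative-comparison test: the model passes a case when a true paraphrase of an anchor text scores higher than a lexically similar distractor, and accuracy is the fraction of cases passed. A minimal sketch of that idea, using toy vectors in place of real model embeddings (the actual benchmark's case format and scoring are assumptions):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def passes_stress_case(anchor, paraphrase, distractor):
    # The model "understands" the case if the true paraphrase scores
    # higher than the lexically similar but semantically different distractor.
    return cosine(anchor, paraphrase) > cosine(anchor, distractor)

# Toy vectors standing in for embeddings a real model would produce.
anchor = np.array([1.0, 0.2, 0.0])
paraphrase = np.array([0.9, 0.3, 0.1])
distractor = np.array([0.1, 1.0, 0.5])

cases = [(anchor, paraphrase, distractor)]
accuracy = sum(passes_stress_case(*c) for c in cases) / len(cases)
print(accuracy)
```

Note that only the ordering of similarities matters here, which is why the benchmark measures relative semantic understanding rather than absolute similarity scores.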
Stars: 11
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Oct 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/semvec/embedstresstest"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
embeddings-benchmark/mteb
MTEB: Massive Text Embedding Benchmark
harmonydata/harmony
The Harmony Python library: a research tool for psychologists to harmonise data and...
yannvgn/laserembeddings
LASER multilingual sentence embeddings as a pip package
embeddings-benchmark/results
Data for the MTEB leaderboard
Hironsan/awesome-embedding-models
A curated list of awesome embedding models tutorials, projects and communities.