semvec/embedstresstest

Stress Testing Embedding Models

26 / 100 · Experimental

This tool helps evaluate how well AI models truly understand the meaning of text rather than just recognizing similar words. You provide a list of text examples (such as software component descriptions) and the tool returns an accuracy score indicating how well the model grasps the underlying meaning. It's designed for anyone working with AI models that process and interpret text, such as AI product managers or data scientists evaluating model performance.

No commits in the last 6 months.

Use this if you need to reliably assess whether your text-embedding model understands the true semantic meaning of descriptions, rather than being fooled by similar-sounding but semantically different concepts.

Not ideal if you're looking for an absolute similarity score between texts, as this benchmark focuses on relative semantic understanding rather than raw numerical comparisons.
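
As a concrete illustration of that relative framing, here is a minimal sketch of the kind of check such a benchmark runs, assuming the sentence-transformers library; the model name, the example texts, and the pass criterion are illustrative assumptions, not this project's actual harness.

# Sketch of a relative semantic check: the model passes a case when the
# paraphrase (same meaning, different wording) scores closer to the query
# than the distractor (shared wording, different meaning). The model name
# and example texts below are illustrative, not taken from this project.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

cases = [
    {
        "query": "A library that caches HTTP responses to reduce backend load.",
        "paraphrase": "Stores previously fetched web responses so the server is hit less often.",
        "distractor": "A library that logs HTTP responses to measure backend load.",
    },
]

passed = 0
for case in cases:
    q, p, d = model.encode([case["query"], case["paraphrase"], case["distractor"]])
    # Only the relative ordering of the two similarities matters, not their raw values.
    if util.cos_sim(q, p) > util.cos_sim(q, d):
        passed += 1

print(f"accuracy: {passed / len(cases):.2f}")

Accuracy here is simply the fraction of cases where the ordering comes out right, which mirrors the relative scoring the description emphasizes over absolute similarity values.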

AI model evaluation · natural language processing · text embeddings · semantic search · model quality assurance
No License · Stale 6m · No Package · No Dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 7 / 25
Community 12 / 25
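
(The four pillar scores above add up to the headline figure: 2 + 5 + 7 + 12 = 26 out of 100.)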

How are scores calculated?

Stars

11

Forks

2

Language

Python

License

None

Last pushed

Oct 14, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/semvec/embedstresstest"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
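
The same request from Python, assuming only the requests package; the response schema isn't documented on this page, so the sketch simply pretty-prints whatever JSON the endpoint returns.

# Fetch the quality data for this tool and pretty-print the JSON response.
import json
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/embeddings/semvec/embedstresstest"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))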