plasticityai/magnitude
A fast, efficient universal vector embedding utility package.
This project helps machine learning practitioners efficiently work with vector embeddings, which are numerical representations of words, phrases, or other data points. It takes pre-trained embedding models as input and allows for fast retrieval and processing of these vectors. Data scientists, machine learning engineers, and NLP researchers can use this to quickly integrate embeddings into their models.
1,656 stars. No commits in the last 6 months.
Use this if you are building machine learning models, especially those involving natural language, and need a fast, memory-efficient way to use large pre-trained vector embedding models.
Skip it if your work does not involve vector embeddings or machine learning models; this is a developer-focused tool.
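To illustrate the core idea behind a memory-efficient embedding utility like this one, here is a minimal sketch of an on-disk embedding lookup using NumPy's memory mapping. The toy vocabulary, vectors, and helper names (`query`, `similarity`) are invented for the example; this is a conceptual sketch, not Magnitude's actual API.

```python
import numpy as np

# Hypothetical toy vocabulary and 4-dimensional embedding matrix; real
# pre-trained models (GloVe, word2vec, fastText) are far larger.
vocab = {"cat": 0, "dog": 1, "car": 2}
embeddings = np.array([
    [0.9, 0.1, 0.0, 0.2],
    [0.8, 0.2, 0.1, 0.3],
    [0.0, 0.9, 0.8, 0.1],
], dtype=np.float32)

# Persist to disk, then memory-map it: queries read only the rows they
# touch instead of loading the whole matrix into RAM, which is the general
# technique a fast on-disk embedding store relies on.
np.save("embeddings.npy", embeddings)
mapped = np.load("embeddings.npy", mmap_mode="r")

def query(word: str) -> np.ndarray:
    """Return the embedding vector for a word, read lazily from disk."""
    return np.asarray(mapped[vocab[word]])

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two words' vectors."""
    va, vb = query(a), query(b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("cat", "dog") > similarity("cat", "car"))  # → True
```

With memory mapping, a multi-gigabyte embedding file can be queried with near-constant memory overhead, since the operating system pages in only the vectors actually accessed.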
Stars: 1,656
Forks: 121
Language: Python
License: MIT
Category:
Last pushed: Aug 03, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/plasticityai/magnitude"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
- embeddings-benchmark/mteb: MTEB: Massive Text Embedding Benchmark
- harmonydata/harmony: The Harmony Python library: a research tool for psychologists to harmonise data and...
- yannvgn/laserembeddings: LASER multilingual sentence embeddings as a pip package
- embeddings-benchmark/results: Data for the MTEB leaderboard
- Hironsan/awesome-embedding-models: A curated list of awesome embedding models tutorials, projects and communities.