jxmorris12/cde
Code for training and evaluating Contextual Document Embedding (CDE) models
This project provides code for training and evaluating contextual document embedding (CDE) models, which convert large collections of text, such as reports or articles, into numerical representations (embeddings) that capture their meaning. Unlike standard embedding models that encode each document in isolation, CDE conditions each document's embedding on a sample of the surrounding corpus, producing context-aware vectors that improve search and retrieval within your specific domain. It is aimed at data scientists and machine learning engineers building search, recommendation, or information retrieval systems.
202 stars. No commits in the last 6 months.
Use this if you need accurate, context-aware embeddings over a large text corpus to power search or recommendation engines.
Not ideal if you need only a basic text embedding model without contextual conditioning, or if you prefer an off-the-shelf pre-trained solution without custom corpus sampling.
Stars
202
Forks
11
Language
Python
License
MIT
Category
Last pushed
May 14, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/jxmorris12/cde"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
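For programmatic access beyond a one-off curl, the same endpoint can be queried from Python. A minimal sketch follows; note that the JSON field names (`repo`, `stars`, `forks`) are assumptions for illustration, since the response schema is not documented here:

```python
import json
import urllib.request

# Endpoint shown on this page; no API key needed up to 100 requests/day.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/embeddings/jxmorris12/cde"

def fetch_record(url: str = API_URL) -> dict:
    """Fetch the repo-quality record as a dict (live network call)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize(record: dict) -> str:
    """Build a one-line summary; field names here are illustrative guesses."""
    return f"{record.get('repo', 'unknown')}: {record.get('stars', '?')} stars"

# Offline demonstration with a mocked payload (real field names may differ):
sample = {"repo": "jxmorris12/cde", "stars": 202, "forks": 11}
print(summarize(sample))  # prints "jxmorris12/cde: 202 stars"
```

Parsing is kept separate from fetching so the summary logic can be tested without hitting the rate-limited endpoint.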
Higher-rated alternatives
MilaNLProc/contextualized-topic-models
A python package to run contextualized topic modeling. CTMs combine contextualized embeddings...
vinid/cade
Compass-aligned Distributional Embeddings. Align embeddings from different corpora
spcl/ncc
Neural Code Comprehension: A Learnable Representation of Code Semantics
criteo-research/CausE
Code for the RecSys 2018 paper entitled Causal Embeddings for Recommendation.
vintasoftware/entity-embed
PyTorch library for transforming entities like companies, products, etc. into vectors to support...