csinva/interpretable-embeddings
Interpretable text embeddings by asking LLMs yes/no questions (NeurIPS 2024)
This tool converts complex text data into a clear yes/no profile. You supply text examples and a list of domain-relevant yes/no questions; the output is a simple table showing whether each question applies to each piece of text. Marketers, researchers, or anyone analyzing text content can use this to get actionable insights.
No commits in the last 6 months.
Use this if you need to quickly and transparently understand the key characteristics or themes present within a collection of text documents.
Not ideal if you only need dense numerical embeddings for machine learning models and don't care about human interpretability.
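The idea above can be sketched in a few lines: each text gets one binary entry per question, so the embedding dimensions are directly readable. This is a minimal illustration, not the repo's implementation; the real project queries an LLM for each (text, question) pair, while the `toy_answerer` below is a hypothetical keyword-based stand-in.

```python
# Sketch of yes/no question-answering embeddings (hypothetical stand-in
# for the LLM: a trivial keyword-based answerer).

QUESTIONS = [
    "Does the text mention a price or cost?",
    "Is the text written in the first person?",
    "Does the text express a complaint?",
]

def toy_answerer(text: str, question: str) -> bool:
    """Placeholder for an LLM yes/no call; keyword heuristics only."""
    keywords = {
        "price": ["$", "price", "cost"],
        "first person": ["i ", "my ", "we "],
        "complaint": ["broke", "refund", "disappointed"],
    }
    for key, words in keywords.items():
        if key in question.lower():
            return any(w in text.lower() for w in words)
    return False

def embed(text: str, questions=QUESTIONS, answerer=toy_answerer):
    """Return a binary vector: one yes/no answer per question."""
    return [int(answerer(text, q)) for q in questions]

if __name__ == "__main__":
    review = "I was disappointed by the price and want a refund."
    print(embed(review))  # one 0/1 entry per question
```

Because every dimension corresponds to a named question, you can inspect the resulting table directly instead of probing an opaque vector space.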
Stars
46
Forks
2
Language
Python
License
—
Category
—
Last pushed
Nov 15, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/csinva/interpretable-embeddings"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ContextualAI/gritlm
Generative Representational Instruction Tuning
xlang-ai/instructor-embedding
[ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings
liuqidong07/LLMEmb
[AAAI'25 Oral] The official implementation code of LLMEmb
hpcaitech/CachedEmbedding
A memory efficient DLRM training solution using ColossalAI
ritesh-modi/embedding-hallucinations
This repo shows how foundational model hallucinates and how we can fix such hallucinations using...