ritesh-modi/embedding-hallucinations
This repo shows how foundation models hallucinate and how such hallucinations can be fixed by fine-tuning them
If you work in natural language processing, this project helps you assess and improve how well your AI models understand human language. It takes your text data, processes it, and produces refined AI models that interpret meaning more accurately, reducing cases where a model 'hallucinates' or misreads context. It is aimed at AI/ML practitioners and data scientists building or deploying language-based AI systems.
No commits in the last 6 months.
Use this if you are developing AI applications that rely on understanding text, and you need to ensure your models accurately capture semantic meaning and avoid misinterpretations.
Not ideal if you are looking for a plug-and-play solution for general text similarity without needing to fine-tune models or deeply analyze embedding performance.
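Evaluating whether an embedding model "accurately captures semantic meaning" typically comes down to comparing vector similarities: related texts should score higher than unrelated ones. The repo's own evaluation code is not shown here; the snippet below is a minimal stdlib-only sketch of the cosine-similarity check such a comparison rests on, with toy vectors standing in for real model embeddings.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); in [-1, 1] for real vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings (hypothetical values): a well fine-tuned model should place
# a query closer to a relevant sentence than to an unrelated one. A model
# that "hallucinates" the relationship would invert this ordering.
query = [0.9, 0.1, 0.2]
relevant = [0.8, 0.2, 0.1]
unrelated = [0.1, 0.9, 0.8]

print(cosine_similarity(query, relevant) > cosine_similarity(query, unrelated))
```

In practice the vectors would come from an embedding model rather than being hard-coded, but the ranking comparison is the same.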
Stars
9
Forks
8
Language
Python
License
MIT
Last pushed
Apr 19, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/ritesh-modi/embedding-hallucinations"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
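For programmatic access from Python, the curl call above maps to a plain HTTP GET. A minimal stdlib sketch, assuming the endpoint returns JSON (the response schema is not documented here):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL; the path segments mirror the curl example above.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Unauthenticated requests are limited to 100/day per the note above.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

url = quality_url("embeddings", "ritesh-modi", "embedding-hallucinations")
print(url)
```

Calling `fetch_quality(...)` performs the actual request; how an API key would be passed (header vs. query parameter) is not specified on this page, so that part is omitted.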
Higher-rated alternatives
ContextualAI/gritlm
Generative Representational Instruction Tuning
xlang-ai/instructor-embedding
[ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings
liuqidong07/LLMEmb
[AAAI'25 Oral] The official implementation code of LLMEmb
hpcaitech/CachedEmbedding
A memory efficient DLRM training solution using ColossalAI
ritesh-modi/fine-tuning-embeddings-template
This repo is a template to fine-tune embedding models using sentencetransformers based on...