ritesh-modi/embedding-hallucinations

This repo shows how foundation models hallucinate and how such hallucinations can be fixed by fine-tuning them.

Score: 39 / 100 (Emerging)

This project helps you assess and improve how well your NLP models understand human language. It evaluates embedding models against your text data and fine-tunes them to interpret meaning more accurately, reducing cases where a model 'hallucinates' or misreads context. It is aimed at AI/ML practitioners and data scientists building or deploying language-based systems.
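For context, embedding hallucination often surfaces as a lexically similar but semantically wrong candidate scoring almost as high as the correct one. Below is a minimal sketch of that check, assuming the sentence-transformers library and a hypothetical base checkpoint; the repo's own scripts may differ.

from sentence_transformers import SentenceTransformer, util

# Hypothetical base checkpoint; the repo's own code may use a different model.
model = SentenceTransformer("all-MiniLM-L6-v2")

query = "How do I reset my account password?"
candidates = [
    "Steps to recover a forgotten password",  # correct match
    "How to reset a factory-default router",  # lexically close, semantically wrong
]

# Encode everything and compare the query against each candidate.
embeddings = model.encode([query] + candidates, convert_to_tensor=True)
scores = util.cos_sim(embeddings[0], embeddings[1:])[0]

# A base model may score the wrong candidate nearly as high as the right one;
# fine-tuning on domain-specific pairs is the fix this repo demonstrates.
for text, score in zip(candidates, scores):
    print(f"{score.item():.3f}  {text}")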

No commits in the last 6 months.

Use this if you are developing AI applications that rely on understanding text, and you need to ensure your models accurately capture semantic meaning and avoid misinterpretations.

Not ideal if you are looking for a plug-and-play solution for general text similarity without needing to fine-tune models or deeply analyze embedding performance.

Topics: natural-language-processing, ai-model-evaluation, semantic-search, text-understanding, model-fine-tuning
Status: Stale (6 months) · No package published · No known dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 15 / 25
Community 17 / 25


Stars: 9
Forks: 8
Language: Python
License: MIT
Last pushed: Apr 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/ritesh-modi/embedding-hallucinations"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
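If you prefer calling the endpoint from code, here is a minimal Python sketch using requests. The response schema is not documented on this card, so the payload is printed whole rather than assuming field names.

import requests

# Same endpoint as the curl command above.
url = ("https://pt-edge.onrender.com/api/v1/quality/embeddings/"
       "ritesh-modi/embedding-hallucinations")

resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Inspect the full JSON payload to discover the actual schema.
print(resp.json())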