oumi-ai/halloumi-demo
Try out HallOumi, a state-of-the-art claim verification model, in a simple UI!
This project helps you quickly check whether the claims in a piece of text, human-written or AI-generated, are accurate and supported by evidence. You provide the text containing the claims and a context to verify them against; the tool then highlights each sentence and explains, with citations, whether it is true, false, or unverified. Content creators, fact-checkers, and quality-assurance specialists can use it to ensure accuracy.
No commits in the last 6 months.
Use this if you need to quickly verify the accuracy of individual sentences in reports, articles, or AI-generated content against provided source material.
Not ideal if you need to verify claims without any provided context or if you are looking for a tool to generate new factual content.
Stars
42
Forks
2
Language
TypeScript
License
Apache-2.0
Category
Last pushed
Apr 02, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/oumi-ai/halloumi-demo"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
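If you prefer calling the endpoint from code instead of curl, here is a minimal TypeScript sketch (assumes Node 18+, where fetch is built in). The response schema is not documented on this page, so the JSON is treated as untyped and printed as-is; this is an illustration, not an official client.

// Minimal sketch: fetch this project's quality data from the public endpoint.
// Assumes Node 18+ (global fetch). The response schema is undocumented here,
// so the body is treated as an untyped value and pretty-printed.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/llm-tools/oumi-ai/halloumi-demo";

async function fetchQualityData(): Promise<void> {
  const res = await fetch(url);
  if (!res.ok) {
    // The keyless tier allows 100 requests/day; a 429 likely means the limit was hit.
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  const data: unknown = await res.json();
  console.log(JSON.stringify(data, null, 2));
}

fetchQualityData().catch((err) => {
  console.error(err);
});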
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
An attack that induces hallucinations in LLMs
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
NishilBalar/Awesome-LVLM-Hallucination
Up-to-date curated list of state-of-the-art work on Large Vision-Language Model (LVLM) hallucinations...
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...