georgeguimaraes/hallmark

Hallucination detection for Elixir, powered by Vectara's HHEM model

Score: 26 / 100 (Experimental)

This tool helps Elixir developers check that text generated by Large Language Models (LLMs) stays faithful to its source. You provide the original text (the premise) and the LLM-generated text (the hypothesis), and it returns either a score from 0 (hallucinated) to 1 (consistent) or a simple 'consistent' / 'hallucinated' label. It's designed for developers building LLM-backed applications who need to verify content accuracy.
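A minimal usage sketch of that premise/hypothesis flow. The function names Hallmark.score/2 and Hallmark.classify/2 are assumptions based on the description above, not confirmed API; check the repo's README for the actual interface.

# Hypothetical API: Hallmark.score/2 and Hallmark.classify/2 are
# assumed names, not confirmed against the library.
premise = "The Eiffel Tower was completed in 1889 and stands in Paris."
hypothesis = "The Eiffel Tower, completed in 1889, is located in Berlin."

# A consistency score: near 0.0 = hallucinated, near 1.0 = consistent.
score = Hallmark.score(premise, hypothesis)
# => 0.07 (illustrative value only)

# Or ask for a label instead of a raw score.
Hallmark.classify(premise, hypothesis)
# => :hallucinated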

Use this if you are an Elixir developer building an application that uses an LLM and you need to automatically detect if the LLM's output deviates from the provided source material.

Not ideal if you need to check factual accuracy against real-world knowledge rather than consistency with a given premise, or if you are not working with Elixir.

Tags: LLM development, Elixir programming, content moderation, text generation quality, information integrity
No package · No dependents
Maintenance 10 / 25
Adoption 5 / 25
Maturity 11 / 25
Community 0 / 25


Stars: 13
Forks:
Language: Elixir
License: MIT
Last pushed: Mar 04, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/georgeguimaraes/hallmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
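If you'd rather call the endpoint from Elixir than shell out to curl, here is a minimal sketch using the Req HTTP client (a common community library, not something this page prescribes). The response schema isn't documented here, so the snippet just decodes and inspects whatever JSON comes back.

# Fetch the same quality data from Elixir with Req.
Mix.install([{:req, "~> 0.5"}])

url =
  "https://pt-edge.onrender.com/api/v1/quality/embeddings/georgeguimaraes/hallmark"

# Req decodes JSON bodies automatically; inspect the result to see
# which fields the endpoint actually returns.
response = Req.get!(url)
IO.inspect(response.status, label: "status")
IO.inspect(response.body, label: "quality data")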