ufal/factgenie
Lightweight self-hosted span annotation tool
This tool helps researchers and content-quality specialists identify and annotate errors in text generated by large language models (LLMs) or human writers. You load text outputs from these sources, and the tool provides a web-based interface for marking specific error spans (semantic, factual, lexical) within those texts. Its primary users are researchers evaluating LLM performance and quality-assurance teams reviewing generated content.
Available on PyPI.
Use this if you need a self-hosted platform to systematically annotate errors in generated text, either through human crowdworkers or by leveraging other LLMs for evaluation.
Not ideal if you need help with generating the initial text outputs, launching a crowdsourcing campaign (like on Prolific), or running your LLM evaluators.
Stars: 39
Forks: 8
Language: Python
License: MIT
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Dependencies: 24
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ufal/factgenie"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
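A minimal sketch of calling the endpoint above from Python, using only the standard library. The URL path segments (`nlp`, `ufal`, `factgenie`) follow the pattern shown in the curl example; the response schema is not documented here, so the sketch just prints whatever JSON comes back, and how an API key would be attached is not specified on this page.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, org: str, repo: str) -> str:
    # Build the endpoint URL from the repo slug, mirroring the curl example.
    return f"{API_BASE}/{category}/{org}/{repo}"

def fetch_quality(category: str, org: str, repo: str) -> dict:
    # Unauthenticated requests are limited to 100/day; a free key raises
    # that to 1,000/day (the page does not say how the key is sent).
    with urllib.request.urlopen(quality_url(category, org, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_quality("nlp", "ufal", "factgenie"), indent=2))
```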
Related tools
duanyu/LabelFast
An open-source tool for automatic NLP annotation in the Chinese-language world; leave simple samples to LabelFast.
onesuper/HuggingFace-Datasets-Text-Quality-Analysis
Retrieves parquet files from Hugging Face, identifies and quantifies junky data, duplication,...
gabyarte/event-extraction-small-corpus
Event extraction on a legal domain's small corpus using Large Language Models. Specifically, on...