davidheineman/thresh
🌾 Universal, customizable and deployable fine-grained evaluation for text generation.
This tool helps researchers and annotators create and manage detailed feedback for text generation projects. You input source text and generated text, then use a customizable interface to highlight specific spans and answer structured questions about them. The output is fine-grained annotation data, useful for evaluating and improving text generation models. It's designed for anyone who needs to systematically assess the quality of AI-generated text.
No commits in the last 6 months.
Use this if you need to thoroughly analyze and categorize specific issues or qualities within AI-generated text, going beyond simple scores to understand 'why' something is good or bad.
Not ideal if you only need a quick, high-level quality score or if your annotation task doesn't involve detailed span selection and recursive questioning on text.
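The "fine-grained annotation data" is easiest to picture with an example. Below is a hypothetical sketch in Python of what one annotated record might look like; every field name is illustrative, not thresh's actual output schema.

# Hypothetical sketch of one fine-grained annotation record.
# All field names here are illustrative, NOT thresh's actual schema.
annotation = {
    "source": "The cat sat on the mat.",
    "generation": "A feline was seated atop the rug.",
    "spans": [
        {
            "start": 2,   # character offsets into the generation
            "end": 8,
            "category": "lexical_substitution",
            # structured follow-up questions answered for this span
            "questions": {"preserves_meaning": "yes", "severity": "minor"},
        }
    ],
}
print(annotation["spans"][0]["category"])

The point of this shape is the pairing of a highlighted span with structured answers about it, which is what distinguishes fine-grained evaluation from a single overall score.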
Stars: 24
Forks: 5
Language: Vue
License: Apache-2.0
Category: NLP
Last pushed: Oct 26, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/davidheineman/thresh"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
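For programmatic use, here is a minimal sketch of the same call from Python, assuming only that the endpoint returns JSON; the response fields are not documented here, so the code prints whatever comes back rather than assuming a schema.

import json
import urllib.request

# Same public endpoint as the curl example above (100 requests/day, no key).
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/davidheineman/thresh"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Pretty-print whatever fields the API returns.
print(json.dumps(data, indent=2))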
Higher-rated alternatives
philenius/ngx-annotate-text
This Angular component library is perfect for tasks like visualizing named entity recognition,...
davidjurgens/potato
potato: the portable annotation tool
jiesutd/YEDDA
YEDDA: A Lightweight Collaborative Text Span Annotation Tool. Code for ACL 2018 Best Demo Paper...
synyi/poplar
A web-based annotation tool for natural language processing (NLP)
webanno/webanno
🆕 Work continues on INCEpTION 👉 https://github.com/inception-project/inception 👈 -- ⚠️ The...