davidheineman/thresh

🌾 Universal, customizable and deployable fine-grained evaluation for text generation.

Score: 37 / 100 (Emerging)

This tool helps researchers and annotators create and manage detailed feedback for text generation projects. You input source text and generated text, then use a customizable interface to highlight specific spans and answer structured questions about them. The output is fine-grained annotation data, useful for evaluating and improving text generation models. It is designed for anyone who needs to systematically assess the quality of AI-generated text.

No commits in the last 6 months.

Use this if you need to thoroughly analyze and categorize specific issues or qualities within AI-generated text, going beyond simple scores to understand 'why' something is good or bad.

Not ideal if you only need a quick, high-level quality score or if your annotation task doesn't involve detailed span selection and recursive questioning on text.

text-generation-evaluation NLP-research data-annotation human-in-the-loop content-quality-analysis
Status: Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 24
Forks: 5
Language: Vue
License: Apache-2.0
Last pushed: Oct 26, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/davidheineman/thresh"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.