DataScienceUIBK/HintEval
HintEval: A Comprehensive Framework for Hint Generation and Evaluation for Questions
This project helps educators, content creators, and researchers develop and evaluate hints for questions without directly giving away the answer. You provide a question and its correct answer, and the tool generates multiple subtle clues. It also lets you assess how effective those hints are, measuring aspects such as clarity, relevance, and whether they accidentally reveal too much.
Available on PyPI.
Use this if you need to create and rigorously test multiple variations of hints for quizzes, educational content, or interactive systems, ensuring they are helpful without being direct.
Not ideal if you are looking for a tool that directly provides answers to questions or generates full explanations rather than subtle clues.
Stars
36
Forks
3
Language
Python
License
Apache-2.0
Category
Last pushed
Feb 26, 2026
Commits (30d)
0
Dependencies
23
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/DataScienceUIBK/HintEval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
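The same endpoint can be called from Python instead of curl. A minimal sketch, using only the standard library; the response fields (`repo`, `stars`, `license`) are assumptions about the JSON payload, not a documented schema:

```python
import json
from urllib.request import urlopen  # stdlib, no extra dependencies

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{API_BASE}/{owner}/{repo}"

def summarize(payload: dict) -> str:
    """One-line summary from a payload; field names are hypothetical."""
    return (f"{payload.get('repo', '?')}: "
            f"{payload.get('stars', '?')} stars, "
            f"{payload.get('license', '?')}")

if __name__ == "__main__":
    url = quality_url("DataScienceUIBK", "HintEval")
    print(url)
    # Live fetch (no key needed, 100 requests/day); uncomment to try:
    # with urlopen(url) as resp:
    #     print(summarize(json.load(resp)))
    # Offline demo with a sample payload mirroring the listing above:
    sample = {"repo": "DataScienceUIBK/HintEval",
              "stars": 36, "license": "Apache-2.0"}
    print(summarize(sample))
```

With a free key the limit rises to 1,000 requests/day; the key would be sent alongside the request, though the exact header name is not documented here.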
Related tools
google/langfun
OO for LLMs
tanaos/artifex
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
preligens-lab/textnoisr
Adding random noise to a text dataset, and controlling very accurately the quality of the result
vulnerability-lookup/VulnTrain
A tool to generate datasets and models based on vulnerabilities descriptions from @Vulnerability-Lookup.
masakhane-io/masakhane-mt
Machine Translation for Africa