krisstallenberg/evaluating-annotations
This repository contains code for annotating textual data with LLMs and for computing several measures of Inter-Annotator Agreement (IAA).
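The repository's notebooks are not reproduced here, but as a rough illustration of the IAA side, pairwise agreement between an LLM annotator and a human annotator could be computed with scikit-learn's cohen_kappa_score. The label lists below are made-up examples, not data from the repository, and this is only a sketch of one common agreement measure rather than the repo's actual code.

from sklearn.metrics import cohen_kappa_score

# Hypothetical labels: one list per annotator, aligned item by item.
# These example values are illustrative only, not taken from the repository.
human_labels = ["pos", "neg", "pos", "neu", "pos", "neg"]
llm_labels   = ["pos", "neg", "neu", "neu", "pos", "pos"]

# Cohen's kappa corrects raw percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(human_labels, llm_labels)
print(f"Cohen's kappa: {kappa:.3f}")

For more than two annotators or for missing annotations, a chance-corrected measure such as Krippendorff's alpha would typically be used instead.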
No commits in the last 6 months.
Stars: 2
Forks: —
Language: Jupyter Notebook
License: GPL-3.0
Category: —
Last pushed: Apr 27, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/krisstallenberg/evaluating-annotations"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
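For script access, the same endpoint can be called from Python. This sketch assumes the API returns JSON and that unauthenticated requests are allowed within the rate limits above; the response fields are not documented here, so the payload is simply printed.

import requests

# Endpoint copied from the curl example above; the JSON response schema is assumed.
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/krisstallenberg/evaluating-annotations"

response = requests.get(URL, timeout=10)
response.raise_for_status()  # fail loudly on rate limiting or server errors

data = response.json()
print(data)  # field names are undocumented here, so just dump the payload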
Higher-rated alternatives
microsoft/NeMoEval
A Benchmark Tool for Natural Language-based Network Management
FudanSELab/ClassEval
Benchmark ClassEval for class-level code generation.
apartresearch/specificityplus
👩‍💻 Code for the ACL paper "Detecting Edit Failures in LLMs: An Improved Specificity Benchmark"
claws-lab/XLingEval
Code and Resources for the paper, "Better to Ask in English: Cross-Lingual Evaluation of Large...
HICAI-ZJU/SciKnowEval
SciKnowEval: Evaluating Multi-level Scientific Knowledge of Large Language Models