ni9elf/automated-scoring

Official code for "Automated Scoring for Reading Comprehension via In-context BERT Tuning" (AIED 2022)

Quality score: 27 / 100 (Experimental)

This project automatically scores student responses to reading comprehension questions. You provide student answer texts and, optionally, demographic data, and it outputs an estimated score that approximates human raters. It is aimed at educators, assessment developers, and researchers who need to grade large volumes of open-ended text responses efficiently.

No commits in the last 6 months.

Use this if you need to automatically score short-answer text responses from students on reading comprehension tasks and have a dataset of previously human-scored examples.

Not ideal if you are looking for a general-purpose text analysis tool or need to score responses for subjects other than reading comprehension without extensive retraining.

educational-assessment reading-comprehension automated-grading student-evaluation text-scoring
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 14 / 25


Stars: 13
Forks: 3
Language: Python
License: none
Last pushed: May 23, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ni9elf/automated-scoring"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
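The same endpoint can be called from a script. A minimal Python sketch using only the standard library; the response schema is not documented here, so the helper simply returns the parsed JSON body, and the `category`/`repo` path segments mirror the curl example above:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality-report URL for a repo, e.g. 'ni9elf/automated-scoring'."""
    return f"{BASE}/{category}/{repo}"


def fetch_report(category: str, repo: str) -> dict:
    """Fetch and parse a quality report (anonymous tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Calling `fetch_report("ml-frameworks", "ni9elf/automated-scoring")` performs one anonymous request, equivalent to the curl command above.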