ymcui/NLP-Review-Scorer

Score your NLP paper review

22 / 100 · Experimental

This tool helps researchers who have submitted Natural Language Processing papers to conferences or journals quickly gauge the likely outcome of their submission. You provide the text of a peer review you've received, and it outputs a predicted recommendation score (e.g., on a 1–5 scale) along with the reviewer's confidence. It is aimed at academics awaiting or interpreting peer-review feedback.
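The repository itself ships as a Jupyter Notebook, and its real model is ML-based; the snippet below is only a minimal, purely hypothetical sketch of the input/output contract described above (review text in, recommendation and confidence out). The function name, keyword lists, and heuristic are all assumptions for illustration, not the repo's actual method.

```python
# Illustrative sketch only: maps review text to (recommendation, confidence).
# The keyword heuristic is NOT the repository's actual model; all names here
# are hypothetical.

POSITIVE = {"novel", "strong", "clear", "solid", "convincing", "well-written"}
NEGATIVE = {"weak", "unclear", "limited", "incremental", "unconvincing"}

def score_review(review: str) -> tuple[float, float]:
    """Return a (recommendation, confidence) pair for a review's text.

    Recommendation is on a 1-5 scale; confidence in [0, 1] grows with the
    number of sentiment-bearing words found.
    """
    words = [w.strip(".,;") for w in review.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    hits = pos + neg
    if hits == 0:
        return 3.0, 0.0                        # neutral midpoint, no signal
    recommendation = 1.0 + 4.0 * pos / hits    # 1 (all negative) .. 5 (all positive)
    confidence = min(1.0, hits / 10)           # more signal words, more confidence
    return round(recommendation, 1), confidence
```

For example, a review reading "The method is novel and the results are convincing" would score at the top of the scale under this toy heuristic, with low confidence because only two signal words were found.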

No commits in the last 6 months.

Use this if you are an NLP researcher and want a quick, unofficial way to estimate the recommendation score and reviewer confidence from a peer review text.

Not ideal if you need definitive scores for official rebuttal preparation, as this is a 'toy' model and not a substitute for carefully reading and responding to your reviews.

academic-publishing peer-review NLP-research paper-submission research-evaluation
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 24
Forks:
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Jul 17, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ymcui/NLP-Review-Scorer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
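The same request can be made from Python using only the standard library. The endpoint URL (including the `nlp` path segment) is copied from the curl example above; the shape of the JSON response is an assumption and may differ.

```python
# Fetch quality-score data from the public API shown above.
# No API key is needed at the free tier (100 requests/day).
import json
import urllib.request

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository.

    The base URL and 'nlp' category segment are taken verbatim from the
    curl example; other categories, if any, are not documented here.
    """
    return f"https://pt-edge.onrender.com/api/v1/quality/nlp/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and parse the JSON body (response schema assumed)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(quality_url("ymcui", "NLP-Review-Scorer"))
```

`fetch_quality` performs a live network request, so you may want to wrap it with a timeout or retry logic in production use.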