rajaswa/indic-syntax-evaluation

Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in Indic Languages

Score: 32 / 100 (Emerging)

This project provides a specialized benchmark dataset, "Vyākarana," for evaluating how well large language models understand the grammatical structure (syntax) of Indic languages such as Hindi and Tamil. Given a language model as input, it produces detailed scores on tasks such as part-of-speech tagging, grammatical case identification, and subject-verb agreement. Researchers and developers working on natural language processing for Indic languages can use it to gauge and improve their models' syntactic comprehension.

No commits in the last 6 months.

Use this if you are a researcher or developer who needs to rigorously test the syntactic capabilities of multilingual or Indic language models.

Not ideal if you are looking for an off-the-shelf tool for general text analysis or translation in Indic languages, rather than model evaluation.

Tags: Indic Languages, Natural Language Processing, Language Model Evaluation, Computational Linguistics, Syntactic Analysis
Badges: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 10 / 25

How are scores calculated?
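The overall rating appears to be the sum of the four component scores listed above; whether the site applies any further weighting is not documented here, so treat this as an observation, not the official formula:

```python
# Component scores as shown on this page; summing them reproduces
# the overall 32 / 100 rating (assumed, not confirmed, to be the formula).
components = {"Maintenance": 0, "Adoption": 6, "Maturity": 16, "Community": 10}

overall = sum(components.values())
print(f"{overall} / 100")  # 32 / 100
```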

Stars: 15
Forks: 2
Language: Jupyter Notebook
License: MIT
Last pushed: Feb 28, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rajaswa/indic-syntax-evaluation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
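For scripted use, the curl command above can be reproduced in a few lines of Python. Only the base endpoint and repo path come from this page; the `quality_url` helper name and the idea of appending an API key as a query parameter are assumptions for illustration (check the API docs for the real key mechanism):

```python
from urllib.parse import quote

# Base endpoint taken from the curl example on this page.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo_slug: str, api_key: str = "") -> str:
    """Build the quality-score URL for a GitHub repo slug like 'owner/name'."""
    url = f"{BASE_URL}/{quote(repo_slug, safe='/')}"
    if api_key:
        # Hypothetical parameter name; the real key mechanism may differ.
        url += f"?api_key={api_key}"
    return url

print(quality_url("rajaswa/indic-syntax-evaluation"))
```

Fetching that URL with any HTTP client should return the same data the curl command does, subject to the 100-requests/day limit without a key.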