RussianNLP/TAPE
TAPE benchmark
This project helps researchers and developers evaluate how well Russian-language models understand text, especially in challenging or unexpected situations. It takes a Russian text-understanding model and a dataset of Russian text, then outputs detailed performance reports showing how the model handles different linguistic variations and specific subsets of the data. NLU researchers and data scientists working with Russian-language AI can use it to test their models rigorously.
No commits in the last 6 months.
Use this if you need to systematically evaluate the robustness and nuanced performance of your Russian Natural Language Understanding (NLU) models, particularly for few-shot learning scenarios.
Not ideal if you are working with languages other than Russian or if your primary goal is general NLU model training rather than specialized evaluation.
Stars: 8
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Mar 27, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/RussianNLP/TAPE"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
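The same endpoint can also be called from a script. Below is a minimal Python sketch using only the standard library; `quality_url` is a hypothetical helper (not part of the API), and the response schema is not documented here, so the example just prints the raw JSON body.

```python
"""Fetch repo quality data from the API shown above (a sketch)."""
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository (hypothetical helper)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


if __name__ == "__main__":
    url = quality_url("nlp", "RussianNLP", "TAPE")
    # Free tier: 100 requests/day without a key.
    with urlopen(url) as resp:
        data = json.load(resp)  # assumes the endpoint returns JSON
    print(json.dumps(data, indent=2))
```

With an API key (1,000 requests/day), you would presumably pass it as a header or query parameter; the listing does not show the exact mechanism, so that part is left out.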
Higher-rated alternatives
giacbrd/ShallowLearn
An experiment about re-implementing supervised learning models based on shallow neural network...
javedsha/text-classification
Machine Learning and NLP: Text Classification using python, scikit-learn and NLTK
Wluper/edm
Python package for understanding the difficulty of text classification datasets. (in CoNLL 2018)
fendouai/Awesome-Text-Classification
Awesome-Text-Classification: projects, papers, tutorials.
chicago-justice-project/article-tagging
Natural Language Processing of Chicago news articles