avacaondata/nlpboost

Python library for automatic training, optimization and comparison of Transformer models on most NLP tasks.

Score: 43 / 100 (Emerging)

This tool helps NLP practitioners efficiently train and compare multiple Transformer models across various datasets. You provide your text data and desired models, and it automates the hyperparameter tuning and evaluation process. The output includes performance metrics and comparison plots, helping data scientists and NLP researchers quickly identify the best model for their specific natural language processing tasks.

No commits in the last 6 months. Available on PyPI.

Use this if you need to rapidly experiment with different Transformer models and datasets for NLP tasks like classification or named entity recognition, without spending extensive time on manual coding and configuration.

Not ideal if you prefer to hand-code every aspect of your model training and hyperparameter search, or if your NLP task falls outside common categories like classification, NER, or seq2seq.

natural-language-processing machine-learning-engineering text-classification model-comparison hyperparameter-tuning
Status: Stale (6 months)
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 25 / 25
Community: 12 / 25


Stars: 20
Forks: 3
Language: Python
License: MIT
Last pushed: May 06, 2023
Commits (30d): 0
Dependencies: 31

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/avacaondata/nlpboost"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.