charles9n/bert-sklearn

a sklearn wrapper for Google's BERT model

Score: 58 / 100 (Established)

This tool helps data scientists and machine learning engineers streamline the process of fine-tuning large language models for text-based tasks. It takes raw text or text pairs and their corresponding labels as input, allowing you to train powerful models for classification, regression, or sequence labeling. The output is a trained model capable of making predictions on new text data, which can then be saved and reused.
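The workflow described above follows scikit-learn's estimator contract (fit on texts and labels, then predict on new texts), which is the interface the wrapper exposes. As a minimal runnable sketch of that contract, here is a toy majority-class classifier standing in for the BERT-backed estimator; `MajorityClassifier` is a hypothetical placeholder, not part of the library, since the real model needs pretrained weights to run:

```python
# Toy stand-in illustrating the scikit-learn fit/predict contract that
# bert-sklearn's estimators follow. MajorityClassifier is a hypothetical
# placeholder: it just predicts the most frequent training label.
from collections import Counter

class MajorityClassifier:
    def fit(self, X, y):
        # X: list of raw text strings, y: their labels
        self.majority_ = Counter(y).most_common(1)[0][0]
        return self  # scikit-learn convention: fit returns self

    def predict(self, X):
        # one prediction per input text
        return [self.majority_ for _ in X]

texts = ["great movie", "terrible plot", "loved it", "awful acting"]
labels = ["pos", "neg", "pos", "pos"]

model = MajorityClassifier().fit(texts, labels)
print(model.predict(["what a film"]))  # → ['pos']
```

With the actual library, the same fit/predict calls apply, but the estimator fine-tunes a pretrained transformer under the hood, so training takes real compute rather than a frequency count.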

301 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to quickly adapt advanced language models like BERT, SciBERT, or BioBERT for specific text understanding tasks without deep expertise in model architecture.

Not ideal if you are looking for a no-code solution or if your primary focus is on traditional machine learning models rather than deep learning for natural language.

Tags: text-classification, named-entity-recognition, sentiment-analysis, natural-language-processing, biomedical-text-mining
Stale for 6 months
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 23 / 25


Stars: 301
Forks: 70
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Oct 26, 2022
Commits (30d): 0
Dependencies: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/charles9n/bert-sklearn"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.