deepmancer/roberta-adapter-fine-tuning

Fine-tuning a RoBERTa model for sentiment analysis on the IMDB movie reviews dataset using the Adapter method and PyTorch Transformers

Quality score: 12 / 100 (Experimental)

This project helps data scientists efficiently fine-tune large language models for specific tasks. It takes raw text, such as movie reviews, and outputs sentiment classifications (positive or negative) without retraining the entire model, making it useful for anyone who needs to adapt powerful pre-trained models to new natural language processing datasets.

No commits in the last 6 months.

Use this if you need to perform sentiment analysis or other text classification on your own datasets using powerful pre-trained models, but want to minimize computational resources and training time.

Not ideal if you need to build a sentiment analysis model from scratch without leveraging existing large language models or if your dataset is extremely small.
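The Adapter method referenced in the project description inserts small trainable bottleneck layers into an otherwise frozen pre-trained model, so only a fraction of the parameters are updated during fine-tuning. A minimal sketch of one such bottleneck adapter block in plain PyTorch (the layer sizes and the stand-in backbone are illustrative assumptions, not code from this repository):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, with a residual connection."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # compress to a small bottleneck
        self.up = nn.Linear(bottleneck, hidden_size)    # project back to model width
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's representation intact
        return x + self.up(self.act(self.down(x)))

# Freeze a stand-in backbone layer and train only the adapter.
backbone = nn.Linear(768, 768)  # placeholder for one RoBERTa sub-layer
adapter = BottleneckAdapter()
for p in backbone.parameters():
    p.requires_grad = False

trainable = sum(p.numel() for p in adapter.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in backbone.parameters())
print(trainable, frozen)  # the adapter is far smaller than the layer it wraps
```

In a full setup, one adapter is inserted after each Transformer sub-layer and a small classification head is trained on top; the roughly 125M frozen RoBERTa parameters stay untouched, which is where the savings in compute and training time come from.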

sentiment-analysis text-classification natural-language-processing machine-learning data-science
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 7
Forks: (not listed)
Language: Jupyter Notebook
License: none
Last pushed: Feb 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/deepmancer/roberta-adapter-fine-tuning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.