IhabBendidi/sentiment_embeddings

A scientific benchmark and comparison of the performance of sentiment analysis models in NLP on small to medium datasets

Quality score: 36 / 100 (Emerging)

This project helps data scientists and NLP researchers compare how well different sentiment analysis models perform on smaller datasets. It takes raw text data, processes it, and then evaluates models like BERT, LSTM, and TextBlob to show their accuracy in classifying sentiment. The output provides insights into which models are most effective for specific sentiment analysis tasks.

No commits in the last 6 months.

Use this if you need to choose the best sentiment analysis model for your text data and want to see a clear comparison of various models' performance.

Not ideal if you're looking for a production-ready sentiment analysis API or a tool to analyze extremely large, streaming datasets.

Topics: natural-language-processing, data-science-research, sentiment-analysis, model-benchmarking, text-classification
Flags: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 15 / 25
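The overall score appears to be the sum of the four 0-25 subscores shown above (0 + 5 + 16 + 15 = 36). A minimal sketch under that assumption; the actual scoring formula is not documented here:

```python
# Apparent scoring formula: the 0-100 quality score looks like the
# sum of four 0-25 subscores. This is inferred from the numbers on
# the card, not a documented formula.
subscores = {
    "Maintenance": 0,
    "Adoption": 5,
    "Maturity": 16,
    "Community": 15,
}

total = sum(subscores.values())
print(f"Quality score: {total} / 100")  # 36 / 100, matching the card
```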


Stars: 13
Forks: 4
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 14, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/IhabBendidi/sentiment_embeddings"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
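The curl command above can also be reproduced in Python. The sketch below builds the endpoint URL and parses a hypothetical sample payload; the real response fields (`score`, `tier`, `subscores`) are assumptions for illustration, since the API schema is not documented here:

```python
import json
# from urllib.request import urlopen  # uncomment for a live request

# Endpoint URL from the curl example: language namespace, owner, repo.
BASE = "https://pt-edge.onrender.com/api/v1/quality"
url = f"{BASE}/nlp/IhabBendidi/sentiment_embeddings"

# Hypothetical response shape -- field names are assumptions,
# filled with the values shown on this card.
sample = json.loads("""{
  "score": 36,
  "tier": "Emerging",
  "subscores": {"maintenance": 0, "adoption": 5,
                "maturity": 16, "community": 15}
}""")

print(sample["tier"], sample["score"])
# Live call (100 requests/day without a key):
#   data = json.load(urlopen(url))
```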