AlekseyKorshuk/optimum-transformers

Accelerated NLP pipelines for fast inference on CPU and GPU. Built with Transformers, Optimum and ONNX Runtime.

Score: 45 / 100 (Emerging)

This project helps data scientists and ML engineers get faster results from their Natural Language Processing (NLP) models. You provide text and specify an NLP task (such as sentiment analysis or question answering), and it quickly returns the analyzed output. It's designed for anyone deploying or running NLP models who needs them to perform faster.

126 stars. No commits in the last 6 months. Available on PyPI.

Use this if you need to run common NLP tasks like sentiment analysis, named entity recognition, or question answering on large volumes of text data and want significantly faster processing speeds.

Not ideal if you are developing new deep learning models from scratch or working primarily in a Colab notebook, where the performance benefits may not be apparent.

Natural Language Processing · Text Analytics · Machine Learning · Deployment · Data Science · AI Inference
Stale: 6 months
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 10 / 25


Stars: 126
Forks: 8
Language: Python
License: GPL-3.0
Last pushed: Apr 06, 2022
Commits (30d): 0
Dependencies: 11

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AlekseyKorshuk/optimum-transformers"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.