AlekseyKorshuk/optimum-transformers
Accelerated NLP pipelines for fast inference on CPU and GPU. Built with Transformers, Optimum and ONNX Runtime.
This project speeds up inference for Natural Language Processing (NLP) models. You provide text and specify an NLP task (such as sentiment analysis or question answering), and it returns the analyzed output quickly. It is aimed at data scientists and ML engineers who deploy or run NLP models and need faster performance.
126 stars. No commits in the last 6 months. Available on PyPI.
Use this if you need to run common NLP tasks like sentiment analysis, named entity recognition, or question answering on large volumes of text data and want significantly faster processing speeds.
Not ideal if you are developing new deep learning models from scratch, or if you work primarily in a Colab notebook, where the performance benefits may not be noticeable.
Stars: 126
Forks: 8
Language: Python
License: GPL-3.0
Category:
Last pushed: Apr 06, 2022
Commits (30d): 0
Dependencies: 11
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AlekseyKorshuk/optimum-transformers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
openvinotoolkit/nncf
Neural Network Compression Framework for enhanced OpenVINO™ inference
huggingface/optimum
🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers...
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
huggingface/optimum-intel
🤗 Optimum Intel: Accelerate inference with Intel optimization tools
eole-nlp/eole
Open language modeling toolkit based on PyTorch