optimum and optimum-transformers

optimum-transformers builds specialized NLP inference pipelines on top of optimum's optimization framework, making the two complements rather than competitors: optimum-transformers pulls in Optimum as a dependency, leveraging its hardware optimization tools to deliver pre-built use cases.

                    optimum           optimum-transformers
Overall score       77 (Verified)     45 (Emerging)
Maintenance         13/25             0/25
Adoption            15/25             10/25
Maturity            25/25             25/25
Community           24/25             10/25
Stars               3,325             126
Forks               624               8
Downloads:
Commits (30d)       1                 0
Language            Python            Python
License             Apache-2.0        GPL-3.0
Risk flags          No risk flags     Stale 6m

About optimum

huggingface/optimum

🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization tools

This tool helps machine learning engineers and researchers accelerate the performance of their large language, image, and sentence models. It takes your existing AI models built with popular frameworks like Hugging Face Transformers or Diffusers and optimizes them for faster execution and training on specialized hardware. The output is a more efficient model that runs quicker and uses fewer resources.

AI model optimization, machine learning, deployment, deep learning, training, natural language processing, computer vision

About optimum-transformers

AlekseyKorshuk/optimum-transformers

Accelerated NLP pipelines for fast inference on CPU and GPU. Built with Transformers, Optimum and ONNX Runtime.

This project helps data scientists and ML engineers get faster results from their Natural Language Processing (NLP) models. You provide text and specify an NLP task (like sentiment analysis or question answering), and it quickly returns the analyzed output. It's designed for anyone deploying or running NLP models who needs them to run faster.

Natural Language Processing, Text Analytics, Machine Learning, Deployment, Data Science, AI, Inference

Scores updated daily from GitHub, PyPI, and npm data.