optimum and optimum-habana

Optimum-habana is a hardware-specific plugin for Optimum that enables accelerated Transformer training on Habana Gaudi processors. The two libraries are complements rather than competitors: you use optimum-habana alongside Optimum when you want to target HPU hardware specifically.

                    optimum          optimum-habana
Score               77 (Verified)    61 (Established)
Maintenance         13/25            10/25
Adoption            15/25            10/25
Maturity            25/25            16/25
Community           24/25            25/25
Stars               3,325            207
Forks               624              270
Downloads:
Commits (30d)       1                0
Language            Python           Python
License             Apache-2.0       Apache-2.0
Risk flags          None             No Package, No Dependents

About optimum

huggingface/optimum

🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization tools

This tool helps machine learning engineers and researchers accelerate their large language, image, and sentence models. It takes existing AI models built with popular frameworks like Hugging Face Transformers or Diffusers and optimizes them for faster execution and training on specialized hardware. The output is a more efficient model that runs quicker and uses fewer resources.

Tags: AI model optimization, machine learning deployment, deep learning training, natural language processing, computer vision

About optimum-habana

huggingface/optimum-habana

Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)

This project helps machine learning engineers accelerate the training and inference of large language models and diffusion models, such as those from the Hugging Face Transformers and Diffusers libraries. It takes existing model code and configuration and delivers significantly faster computation by leveraging Intel Gaudi AI Accelerators. It is aimed at machine learning practitioners and researchers working with large-scale models who need to optimize performance on this specific hardware.

Tags: AI-accelerators, large-language-models, image-generation, deep-learning-optimization, ML-infrastructure
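A hedged configuration sketch of how optimum-habana plugs into the usual Transformers workflow: `GaudiTrainingArguments` mirrors `transformers.TrainingArguments` but adds HPU-specific switches. The GaudiConfig name and all values here are illustrative, and actually running this requires a Gaudi machine with the Habana software stack installed.

```python
# Illustrative optimum-habana training configuration (requires Gaudi hardware)
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

training_args = GaudiTrainingArguments(
    output_dir="./results",
    use_habana=True,       # run on Gaudi HPU instead of CPU/GPU
    use_lazy_mode=True,    # lazy graph mode, usually faster on HPU
    gaudi_config_name="Habana/distilbert-base-uncased",  # illustrative config
    per_device_train_batch_size=8,
)

# GaudiTrainer is then used exactly like transformers.Trainer:
# trainer = GaudiTrainer(model=model, args=training_args, train_dataset=ds)
# trainer.train()
```

Because the classes mirror their Transformers counterparts, porting an existing training script to Gaudi is mostly a matter of swapping `Trainer`/`TrainingArguments` for their Gaudi equivalents.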

Scores updated daily from GitHub, PyPI, and npm data.