optimum-habana and optimum-graphcore

These are ecosystem siblings: both are specialized hardware-acceleration libraries that extend Hugging Face Transformers to a proprietary AI accelerator (Habana Gaudi HPUs and Graphcore IPUs, respectively), following the same `optimum-*` naming pattern for their platforms.

                    optimum-habana             optimum-graphcore
Overall score       61 (Established)           46 (Emerging)
Maintenance         10/25                      0/25
Adoption            10/25                      9/25
Maturity            16/25                      16/25
Community           25/25                      21/25
Stars               207                        87
Forks               270                        33
Downloads           n/a                        n/a
Commits (30d)       0                          0
Language            Python                     Python
License             Apache-2.0                 Apache-2.0
Notes               No package, no dependents  Stale 6m; no package, no dependents

About optimum-habana

huggingface/optimum-habana

Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)

This project helps machine learning engineers accelerate the training and inference of large language models and diffusion models, such as those from the Hugging Face Transformers and Diffusers libraries. It takes existing model code and configuration and runs them significantly faster by leveraging Intel Gaudi AI accelerators. It is aimed at practitioners and researchers working with large-scale models who need to optimize performance for this specific hardware.

AI-accelerators large-language-models image-generation deep-learning-optimization ML-infrastructure

About optimum-graphcore

huggingface/optimum-graphcore

Blazing fast training of 🤗 Transformers on Graphcore IPUs

This project helps machine learning engineers and researchers accelerate the training and fine-tuning of large language models and other AI models. It provides tools to run popular Hugging Face Transformers models efficiently on Graphcore Intelligence Processing Units (IPUs), which are specialized AI processors. You supply existing Transformer models and datasets, and it produces faster-trained or fine-tuned models ready for deployment.

deep-learning natural-language-processing computer-vision model-training AI-acceleration

Scores updated daily from GitHub, PyPI, and npm data.