huggingface/optimum-intel
🤗 Optimum Intel: Accelerate inference with Intel optimization tools
Optimum Intel is a tool for developers working with AI models on Intel hardware. It takes large language models (LLMs) and other deep learning models from libraries such as Transformers or Diffusers, optimizes them with Intel's OpenVINO toolkit, and prepares them for faster deployment. Developers use it to make their AI applications run more efficiently on Intel CPUs, GPUs, and other accelerators.
548 stars. Actively maintained with 14 commits in the last 30 days.
Use this if you are a developer looking to accelerate the performance of your AI models, especially large language models and other deep learning models, when running them on Intel hardware.
Not ideal if you are an end-user without programming knowledge, if you are not working with deep learning models, or if your deployment environment does not use Intel processors or accelerators.
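A minimal sketch of the typical workflow described above: export a Transformers model to OpenVINO with optimum-intel and run inference. The model id "gpt2" is only an illustrative example; any supported causal language model should work.

from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "gpt2"  # example model id, substitute your own
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch checkpoint to an OpenVINO model on the fly
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("Intel hardware can accelerate", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))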
Stars: 548
Forks: 205
Language: Jupyter Notebook
License: Apache-2.0
Category: transformers
Last pushed: Mar 13, 2026
Commits (30d): 14
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/huggingface/optimum-intel"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
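A minimal sketch of the same request in Python, using the endpoint shown above. No key is needed for the free tier; the response schema is not documented here, so the sketch simply prints the returned JSON.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/huggingface/optimum-intel"
response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())  # schema not documented here, so just inspect the payload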
Related projects
openvinotoolkit/nncf
Neural Network Compression Framework for enhanced OpenVINO™ inference
huggingface/optimum
🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers...
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
eole-nlp/eole
Open language modeling toolkit based on PyTorch
huggingface/optimum-habana
Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)