huggingface/optimum-intel

🤗 Optimum Intel: Accelerate inference with Intel optimization tools

Score: 68/100 (Established)

This is a tool for developers working with AI models on Intel hardware. It takes large language models (LLMs) and other deep learning models from libraries such as Transformers or Diffusers, optimizes them with Intel's OpenVINO toolkit, and prepares them for faster deployment. Developers use it to make their AI applications run more efficiently on Intel CPUs, GPUs, and other accelerators.
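For example, here is a minimal sketch of the typical workflow: load a Transformers checkpoint, convert it to OpenVINO on the fly, and run inference. The model ID and prompt are illustrative placeholders; it assumes optimum-intel is installed with OpenVINO support (pip install optimum[openvino]).

from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "gpt2"  # illustrative; any supported causal LM checkpoint

# export=True converts the PyTorch weights to OpenVINO IR at load time
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Intel hardware can", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The exported model can be persisted with save_pretrained and reloaded later with from_pretrained, avoiding a re-conversion on every run.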

548 stars. Actively maintained with 14 commits in the last 30 days.

Use this if you are a developer looking to accelerate your AI models, especially large language models and other deep learning models, on Intel hardware.

Not ideal if you are an end-user without programming knowledge, if you are not working with deep learning models, or if your deployment environment does not use Intel processors or accelerators.

Tags: AI-model-deployment, deep-learning-optimization, machine-learning-engineering, model-quantization, edge-AI
No package published · No dependents
Maintenance 17 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars          548
Forks          205
Language       Jupyter Notebook
License        Apache-2.0
Last pushed    Mar 13, 2026
Commits (30d)  14

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/huggingface/optimum-intel"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
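A minimal Python sketch of the same request, assuming the endpoint returns JSON; the response schema is not documented in this listing, so the raw body is printed as-is:

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/huggingface/optimum-intel")

# No key is needed for up to 100 requests/day; how a key would be
# passed (header or query parameter) is not shown in this listing.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))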