intel/intel-extension-for-pytorch
A Python package that extends the official PyTorch to deliver better performance on Intel platforms
This tool helps AI engineers and machine learning practitioners accelerate their PyTorch models, especially large language models (LLMs), on Intel CPU and GPU hardware. It applies Intel-specific optimizations to existing PyTorch code so models run faster and more efficiently, producing a more performant model without significant code changes.
2,018 stars. Used by 1 other package. 1 commit in the last 30 days. Available on PyPI.
Use this if you need to optimize the speed and efficiency of your PyTorch-based AI models, particularly large language models, when running them on Intel CPUs or GPUs.
Not ideal if you are starting new AI development, as most of its features are now integrated directly into PyTorch, or if you are not using Intel hardware, as active development has ceased.
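The "without significant code changes" claim above usually amounts to one extra call. A minimal sketch of the typical inference workflow (the `ipex.optimize` call is the extension's documented entry point; the try/except fallback and the toy model here are illustrative assumptions, not part of this listing):

```python
import torch
import torch.nn as nn

# A tiny stand-in model for a real workload.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2)).eval()

try:
    # Optional dependency: if the extension is absent, fall back to stock PyTorch.
    import intel_extension_for_pytorch as ipex
    model = ipex.optimize(model)  # apply Intel-specific inference optimizations
except ImportError:
    pass

with torch.no_grad():
    out = model(torch.randn(1, 4))
print(tuple(out.shape))  # → (1, 2)
```

The same pattern extends to training by also passing the optimizer to `ipex.optimize`; for users on recent PyTorch releases, much of this is available natively via `torch.compile`.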
Stars
2,018
Forks
314
Language
Python
License
Apache-2.0
Category
ml-frameworks
Last pushed
Mar 13, 2026
Commits (30d)
1
Dependencies
3
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/intel/intel-extension-for-pytorch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
pytorch/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
keras-team/keras
Deep Learning for humans
Lightning-AI/torchmetrics
Machine learning metrics for distributed, scalable PyTorch applications.
Lightning-AI/pytorch-lightning
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
lanpa/tensorboardX
tensorboard for pytorch (and chainer, mxnet, numpy, ...)