ai-hub-models and ai-reference-models
About ai-hub-models
qualcomm/ai-hub-models
Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory, etc.) and ready to deploy on Qualcomm® devices.
This project provides pre-optimized machine learning models for computer vision tasks that run efficiently on Qualcomm-powered devices such as smartphones, automotive platforms, and IoT hardware. It takes an existing model and optimizes it for specific Qualcomm chipsets and runtimes, producing a high-performance, ready-to-deploy artifact. It is aimed at AI application developers and embedded systems engineers who want to integrate AI capabilities directly into edge devices.
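As a rough sketch of that optimize-and-deploy flow: the repository is distributed as the `qai-hub-models` package on PyPI, and each model ships a per-model `export` entry point that compiles and profiles it through Qualcomm AI Hub. The model name and target device below are illustrative examples, and the step requires an AI Hub account and API token.

```shell
# Install the package (some models need extras, e.g. qai-hub-models[yolov8-det])
pip install qai-hub-models

# One-time setup: register your Qualcomm AI Hub API token
# (obtained from the AI Hub web console; placeholder shown here)
qai-hub configure --api_token YOUR_API_TOKEN

# Export an optimized, ready-to-deploy model for a specific target device.
# Model name and device are illustrative; each supported model has its own
# qai_hub_models.models.<name>.export module.
python -m qai_hub_models.models.mobilenet_v2.export --device "Samsung Galaxy S23"
```

The export step submits compile and profile jobs to Qualcomm's cloud service, so it needs network access and valid credentials rather than local Qualcomm hardware.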
About ai-reference-models
intel/ai-reference-models
Intel® AI Reference Models contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Intel® Data Center GPUs.
Provides curated sample scripts and tutorials for popular models (ResNet, BERT, Vision Transformer, etc.) across TensorFlow and PyTorch frameworks, with optimizations via Intel Extension plugins and support for multiple precision formats (Int8, BFloat16, FP32). Includes containerized environments and Jupyter notebooks for reproducible deployment, alongside best practices for leveraging Intel's upstream framework contributions. Supports both CPU inference/training on Xeon Scalable processors and GPU workloads on Intel Data Center GPUs.
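The Intel Extension optimization pattern used throughout those sample scripts can be sketched as follows; a minimal example, assuming `torch`, `torchvision`, and `intel-extension-for-pytorch` are installed on an Intel CPU (the ResNet-50 choice is illustrative):

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex  # Intel Extension for PyTorch plugin

# Illustrative model; the reference repo covers ResNet, BERT, ViT, and others
model = models.resnet50(weights=None).eval()

# Apply Intel's operator/graph optimizations for BFloat16 inference on Xeon;
# dtype=torch.float32 or Int8 quantization paths are the other precision options
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under CPU autocast so compute happens in BFloat16
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(torch.randn(1, 3, 224, 224))

print(output.shape)
```

BFloat16 gives the largest speedup on Xeon generations with AMX/AVX-512 BF16 support; on older hardware the same script falls back to FP32-equivalent behavior.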