MegEngine/MegCC
MegCC is a deep learning model compiler with an ultra-lightweight runtime, high performance, and easy portability.
MegCC helps embedded systems developers deploy deep learning models efficiently. It takes a trained deep learning model and compiles it into highly optimized, tiny code tailored for specific hardware. The result is a much smaller, faster, and more memory-efficient model inference solution, ideal for resource-constrained devices like mobile phones, IoT devices, or specialized embedded systems.
484 stars. No commits in the last 6 months.
Use this if you need to deploy deep learning models on resource-limited hardware and require an extremely small runtime footprint, high performance, and low memory usage.
Not ideal if you are working with large-scale server-side deployments or cloud-based AI systems where hardware resource constraints are not a primary concern.
Stars: 484
Forks: 57
Language: C++
License: Apache-2.0
Category: ml-frameworks
Last pushed: Oct 23, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MegEngine/MegCC"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
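The same endpoint can be queried from a script instead of curl. A minimal sketch, assuming the URL pattern shown above (`/api/v1/quality/<category>/<owner>/<repo>`) and that the endpoint returns JSON; the response schema is not documented here:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-API URL for a repo in a given category.

    `category` and `repo` follow the path pattern seen in the curl
    example above; no other categories are assumed to exist.
    """
    return f"{BASE}/{category}/{repo}"

url = quality_url("ml-frameworks", "MegEngine/MegCC")
print(url)

# Uncomment to actually issue the request (needs network access).
# The JSON structure below is an assumption, not documented behavior:
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```

Without a key this counts against the shared 100-requests/day limit, so cache responses rather than polling.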
Higher-rated alternatives
iree-org/iree
A retargetable MLIR-based machine learning compiler and runtime toolkit.
brucefan1983/GPUMD
Graphics Processing Units Molecular Dynamics
uxlfoundation/oneDAL
oneAPI Data Analytics Library (oneDAL)
rapidsai/cuml
cuML - RAPIDS Machine Learning Library
NVIDIA/cutlass
CUDA Templates and Python DSLs for High-Performance Linear Algebra