tvm-cn and triton-cn
TVM is a compiler framework for optimizing ML models across heterogeneous hardware, while Triton is a Python-like language and compiler for writing custom GPU compute kernels. Both target deep learning performance, so they often appear together in an ML deployment toolchain.
About tvm-cn
hyperai/tvm-cn
TVM Documentation in Chinese Simplified / TVM 中文文档
This project offers the official Apache TVM documentation translated into Simplified Chinese. It helps machine learning engineers understand how to optimize and run deep learning computations efficiently across diverse hardware such as CPUs, GPUs, and ARM processors. It consolidates scattered English TVM resources into a centralized, systematic Chinese-language learning guide for those building AI applications.
About triton-cn
hyperai/triton-cn
Triton Documentation in Chinese Simplified / Triton 中文文档
This project provides the official documentation for Triton, a programming language and compiler for deep neural network (DNN) computation kernels, translated into Simplified Chinese. It turns the original English Triton documentation into a comprehensive, easy-to-understand Chinese version. Its primary users are Chinese-speaking deep learning developers and researchers who need to efficiently write and run custom DNN kernels on modern GPU hardware.