indri-voice/vit.triton
ViT inference in Triton, because why not?
This project is an implementation of the Vision Transformer (ViT) built from Triton kernels, aimed at deep learning researchers and GPU programmers. You feed image data into the ViT and get the processed features out; beyond being functional, it serves as an educational resource for understanding GPU-optimized model architectures. It is best suited to people who want to learn advanced GPU programming or to integrate hand-optimized custom models into their deep learning pipelines.
No commits in the last 6 months.
Use this if you are a deep learning engineer or researcher interested in understanding and implementing highly optimized neural network components using GPU kernels.
Not ideal if you are looking for a plug-and-play machine learning library for immediate application development without diving into GPU programming details.
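To give a sense of what "a ViT built from Triton kernels" looks like in practice, here is a minimal, illustrative sketch of the kind of elementwise kernel a ViT MLP block needs: a tanh-approximated GELU activation. None of the names below come from this repository; this is a generic Triton example under that assumption, not the project's actual API.

import torch
import triton
import triton.language as tl


@triton.jit
def gelu_kernel(x_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the flattened tensor.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    # tanh-approximated GELU, as used in many ViT MLP blocks;
    # tanh(z) is computed as 1 - 2 / (exp(2z) + 1) to stay within basic tl primitives.
    z = 0.7978845608 * (x + 0.044715 * x * x * x)
    t = 1.0 - 2.0 / (tl.exp(2.0 * z) + 1.0)
    tl.store(out_ptr + offsets, 0.5 * x * (1.0 + t), mask=mask)


def gelu(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical host-side wrapper: launch a 1D grid over the flattened tensor.
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    gelu_kernel[grid](x, out, n, BLOCK_SIZE=1024)
    return out


if __name__ == "__main__":
    tokens = torch.randn(197, 3072, device="cuda")  # ViT-Base MLP hidden width
    print(gelu(tokens).shape)

A full ViT in this style strings many such kernels together (patch embedding, attention, layer norm, the MLP above), which is where the educational value of a project like this lies.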
Stars: 36
Forks: 3
Language: Python
License: —
Category: ml-frameworks
Last pushed: May 31, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/indri-voice/vit.triton"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
triton-inference-server/server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
gpu-mode/Triton-Puzzles
Puzzles for learning Triton
hailo-ai/hailo_model_zoo
The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment
open-mmlab/mmdeploy
OpenMMLab Model Deployment Framework
hyperai/tvm-cn
TVM documentation in Simplified Chinese / TVM 中文文档