indri-voice/vit.triton

ViT inference in Triton, because why not?

Score: 23 / 100 (Experimental)

This project offers a specialized implementation of the Vision Transformer (ViT) model, built with Triton kernels, aimed at deep learning researchers and GPU programmers. It lets you feed image data into a ViT model and get the processed features out, serving as a functional, educational resource for understanding GPU-optimized model architectures. Its primary audience is anyone looking to learn advanced GPU programming or to integrate highly optimized custom models into a deep learning pipeline.
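For orientation, the front of any ViT forward pass is the patch-embedding step sketched below. This is a minimal NumPy illustration of the computation, not the repo's actual Triton kernels; the function name, patch size, and embedding width here are illustrative assumptions.

```python
import numpy as np

def patch_embed(image, patch=16, dim=64, rng=np.random.default_rng(0)):
    """Split an image into non-overlapping patches and linearly project
    each one to a `dim`-wide token. Illustrative only: the repo fuses
    steps like this into Triton kernels rather than using NumPy."""
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0
    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (num_patches, p*p*C)
    patches = (image.reshape(H // patch, patch, W // patch, patch, C)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(-1, patch * patch * C))
    # Random projection standing in for the learned embedding matrix.
    W_proj = rng.standard_normal((patch * patch * C, dim)) / np.sqrt(patch * patch * C)
    return patches @ W_proj  # (num_patches, dim) token embeddings

tokens = patch_embed(np.zeros((224, 224, 3)))
print(tokens.shape)  # (196, 64): 14 x 14 patches, 64-dim tokens
```

Everything after this step (attention blocks, MLPs, layer norms) operates on these tokens, which is where custom GPU kernels pay off.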

No commits in the last 6 months.

Use this if you are a deep learning engineer or researcher interested in understanding and implementing highly optimized neural network components using GPU kernels.

Not ideal if you are looking for a plug-and-play machine learning library for immediate application development without diving into GPU programming details.

Tags: GPU programming · deep learning optimization · vision transformers · AI model deployment · high-performance computing
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 8 / 25


Stars: 36
Forks: 3
Language: Python
License: none
Last pushed: May 31, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/indri-voice/vit.triton"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
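The curl request above can also be issued from Python using only the standard library. A sketch under stated assumptions: the endpoint path is copied from the example above, but the shape of the JSON response is unknown here, so it is returned as an untyped dict.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def fetch_quality(collection, repo):
    """Fetch the quality record for `collection`/`repo` as a dict.
    Response schema is not documented here, so no fields are assumed."""
    url = f"{BASE}/{collection}/{repo}"
    with urllib.request.urlopen(url) as resp:  # network call
        return json.load(resp)

# URL construction only; no request is made in this sketch.
url = f"{BASE}/ml-frameworks/indri-voice/vit.triton"
print(url)
```

Calling `fetch_quality("ml-frameworks", "indri-voice/vit.triton")` would hit the same endpoint as the curl example, subject to the same 100-requests/day limit.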