toyaix/TritonLLM

LLM inference via Triton (flexible & modular): focused on kernel optimization using CUBIN binaries, starting from the gpt-oss model

Score: 38 / 100 (Emerging)

This project helps AI developers and researchers significantly speed up large language model (LLM) inference. It takes pre-trained LLMs, specifically the gpt-oss models, and optimizes how quickly they generate text. The result is faster text generation, especially when serving many queries at once, making it well suited to those building or deploying LLM-powered applications.

Use this if you are a developer or researcher looking to optimize the performance and reduce latency of your LLM applications, particularly when deploying gpt-oss models on NVIDIA GPUs for high-throughput scenarios.

Not ideal if you are an end-user without programming experience, or if you are working with non-gpt-oss models or hardware other than NVIDIA GPUs.
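
For a sense of the kind of GPU code this project optimizes, below is a minimal Triton kernel: a generic vector-add sketch for illustration only, not code taken from TritonLLM. It assumes an NVIDIA GPU with torch and triton installed.

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one BLOCK_SIZE-wide slice of the inputs.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the last, possibly partial, block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    # Launch one kernel instance per block of 1024 elements.
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

Real inference kernels (e.g. fused attention or matmul) follow the same pattern, just with more work fused into each launch.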

Tags: LLM deployment, AI model optimization, GPU acceleration, Deep learning inference, Natural Language Processing
No package · No dependents
Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 15 / 25
Community: 4 / 25


Stars: 76
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/toyaix/TritonLLM"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
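
The same data can be fetched from Python; here is a minimal sketch using the requests library. The response is JSON, but its field names are not documented on this page, so the example prints the whole payload rather than assume a schema.

import requests

# Public quality endpoint for this repository (no API key required,
# subject to the 100 requests/day limit noted above).
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/toyaix/TritonLLM"

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raise on HTTP errors (rate limiting, 404, ...)

# Field names in the payload are unspecified here, so just dump the JSON.
print(resp.json())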