Bruce-Lee-LY/cutlass_gemm

Multiple GEMM operators are constructed with cutlass to support LLM inference.

Quality score: 32 / 100 (Emerging)

This project optimizes the core mathematical operation (general matrix multiplication, or GEMM) essential for running large language models (LLMs). It takes matrices A, B, and C as input and computes the output matrix D as their product plus an addend (D = A × B + C) with improved computational efficiency on NVIDIA GPUs. It is aimed at AI/ML engineers and researchers who deploy and optimize LLMs.

No commits in the last 6 months.

Use this if you are an AI/ML engineer or researcher looking to speed up the inference performance of large language models by optimizing matrix multiplication operations on NVIDIA GPUs.

Not ideal if you are not working directly with the low-level optimization of LLM inference or do not have access to NVIDIA GPUs.

Tags: LLM-inference, AI-model-optimization, GPU-acceleration, deep-learning-deployment
Status: Stale (6 months), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 8 / 25


Stars: 19
Forks: 2
Language: C++
License: BSD-3-Clause
Last pushed: Aug 03, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Bruce-Lee-LY/cutlass_gemm"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.