Bruce-Lee-LY/cutlass_gemm
Multiple GEMM operators built with CUTLASS to support LLM inference.
This project optimizes the core matrix-multiplication operations essential for running large language models (LLMs). Given matrices A, B, and C, it computes D = A × B + C with improved computational efficiency on NVIDIA GPUs. It is aimed at AI/ML engineers and researchers deploying and optimizing LLMs.
No commits in the last 6 months.
Use this if you are an AI/ML engineer or researcher looking to speed up the inference performance of large language models by optimizing matrix multiplication operations on NVIDIA GPUs.
Not ideal if you are not working directly with the low-level optimization of LLM inference or do not have access to NVIDIA GPUs.
Stars: 19
Forks: 2
Language: C++
License: BSD-3-Clause
Category:
Last pushed: Aug 03, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Bruce-Lee-LY/cutlass_gemm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
quic/efficient-transformers
This library enables users to seamlessly port pretrained models and checkpoints on the...
ManuelSLemos/RabbitLLM
Run 70B+ LLMs on a single 4GB GPU — no quantization required.
alpa-projects/alpa
Training and serving large-scale neural networks with auto parallelization.
arm-education/Advanced-AI-Hardware-Software-Co-Design
Hands-on course materials for ML engineers to master extreme model quantization and on-device...
IST-DASLab/marlin
FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batchsizes...