fattorib/fusedswiglu
Fused SwiGLU Triton kernels
This project provides fused Triton kernels for the SwiGLU activation, helping deep learning engineers accelerate transformer training and inference. Given the layer weights and an input tensor, the kernels compute the gated activation in a single fused pass rather than as several separate operations, which speeds up the computation. The primary users are machine learning practitioners and researchers working on large language models.
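For readers new to the operation: SwiGLU is the gated feed-forward activation SiLU(x @ W_gate) * (x @ W_up), and fusing it means computing the result in one kernel instead of materializing each intermediate tensor. The sketch below is a minimal unfused PyTorch reference of that computation, not this repo's API; the function and weight names are illustrative.

import torch
import torch.nn.functional as F

def swiglu_reference(x, w_gate, w_up):
    # SwiGLU(x) = SiLU(x @ W_gate) * (x @ W_up).
    # This unfused version materializes both projections and the
    # elementwise product as separate tensors; a fused Triton kernel
    # computes the same result in one pass to cut memory traffic.
    return F.silu(x @ w_gate) * (x @ w_up)

x = torch.randn(4, 512)
w_gate = torch.randn(512, 2048)  # gate projection (illustrative sizes)
w_up = torch.randn(512, 2048)    # up projection
out = swiglu_reference(x, w_gate, w_up)  # shape (4, 2048)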
No commits in the last 6 months.
Use this if you are building or fine-tuning transformer-based models and need to optimize the performance of SwiGLU computations, especially on GPUs.
Not ideal if you are not working with transformer architectures, or if your framework's built-in SwiGLU implementation already meets your performance needs.
Stars: 12
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Jan 25, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/fattorib/fusedswiglu"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
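The same endpoint can also be scripted; here is a minimal sketch using Python's requests library, assuming only that the endpoint returns JSON (the response schema is not documented here):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/fattorib/fusedswiglu"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()  # schema is an assumption; inspect the keys before relying on them
print(data)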
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
This is the corresponding code for the book Transformers in Action