jha-lab/acceltran
[TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers
AccelTran helps hardware designers evaluate custom accelerator designs for transformer models. Given descriptions of a transformer model and an accelerator architecture, it simulates the accelerator's performance, reporting area, power consumption, and how efficiently each hardware component is utilized. It is aimed at hardware architects and researchers designing specialized chips for AI workloads.
No commits in the last 6 months.
Use this if you are designing custom hardware accelerators for transformer-based AI models and need to simulate and evaluate their performance and resource usage.
Not ideal if you are a software developer looking for a library to train or deploy transformer models, as this tool focuses on hardware architecture simulation.
Stars
58
Forks
10
Language
Python
License
BSD-3-Clause
Category
Last pushed
Nov 22, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/jha-lab/acceltran"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
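The curl command above can also be reproduced from Python. This is a minimal sketch assuming the endpoint returns JSON; the `X-API-Key` header name used for the optional key is an assumption, not documented here:

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Construct the per-repo quality endpoint URL shown in the curl example."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str, api_key=None) -> dict:
    """Fetch and decode the JSON response (makes a network call)."""
    req = urllib.request.Request(build_quality_url(category, owner, repo))
    if api_key:
        # Hypothetical header name; check the API docs for the real one.
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Matches the curl URL above:
print(build_quality_url("transformers", "jha-lab", "acceltran"))
```

Without a key this stays within the 100 requests/day anonymous limit; passing a free key raises that to 1,000/day.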
Higher-rated alternatives
transformerlab/transformerlab-app
The open source research environment for AI researchers to seamlessly train, evaluate, and scale...
naru-project/naru
Neural Relation Understanding: neural cardinality estimators for tabular data
neurocard/neurocard
State-of-the-art neural cardinality estimators for join queries
danielzuegner/code-transformer
Implementation of the paper "Language-agnostic representation learning of source code from...
salesforce/CodeTF
CodeTF: One-stop Transformer Library for State-of-the-art Code LLM