CodingPlatelets/transformer_MM

Accelerator for LLM Based on Chisel3

Score: 33 / 100 (Emerging)
This project provides hardware designs for accelerating large language model (LLM) computations. It takes LLM tensor data through matrix-operation and attention units and returns results faster than general-purpose hardware, enabling quicker training and inference. Hardware engineers and researchers building custom AI accelerators would use it to design more efficient LLM chips.

Use this if you are designing custom hardware (like an ASIC or FPGA) for large language models and need highly optimized arithmetic units and memory controllers.

Not ideal if you are a software developer looking for a Python library or an end-user running LLMs on standard GPUs or CPUs.
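For context on the workload such an accelerator targets: the core of a transformer layer is matrix multiplication feeding scaled dot-product attention. A minimal pure-Python sketch of that arithmetic (illustrative only; function names and shapes are assumptions, not code from this repository):

```python
import math

def matmul(a, b):
    """Naive matrix multiply: (n x k) times (k x m) -> (n x m).
    This inner multiply-accumulate loop is the workload that a
    systolic array or MAC grid on an FPGA/ASIC parallelizes."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i][p] * b[p][j]
            out[i][j] = s
    return out

def softmax(row):
    # Numerically stable softmax over one row of attention scores.
    mx = max(row)
    exps = [math.exp(x - mx) for x in row]
    total = sum(exps)
    return [e / total for e in exps]

def attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(q[0])
    kt = [list(col) for col in zip(*k)]  # transpose K
    scores = matmul(q, kt)
    scaled = [[s / math.sqrt(d) for s in row] for row in scores]
    weights = [softmax(row) for row in scaled]
    return matmul(weights, v)
```

Each attention call is three large matrix multiplies plus a row-wise softmax; the hardware win comes from streaming these through fixed-function multiply-accumulate units rather than general-purpose cores.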

Tags: AI-accelerator-design, LLM-hardware-engineering, neural-network-computation, custom-chip-design, hardware-optimization

No package. No dependents.
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 6 / 25


Stars: 12
Forks: 1
Language: Scala
License: LGPL-3.0
Last pushed: Dec 15, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/CodingPlatelets/transformer_MM"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.