Tiramisu-Compiler/tiramisu
A polyhedral compiler for expressing fast and portable data parallel algorithms
Tiramisu helps developers write high-performance algorithms for linear algebra, deep learning, image processing, and machine learning. You write C++ code that describes a data-parallel algorithm and how it should be optimized, and Tiramisu generates highly optimized, portable code targeting a range of hardware, including multicore CPUs, GPUs, FPGAs, and distributed systems.
957 stars. No commits in the last 6 months.
Use this if you are a developer looking for a way to express data-parallel algorithms once and generate highly optimized, platform-specific code for multiple hardware architectures.
Not ideal if you are an end-user looking for a ready-to-use application, or if you are not comfortable writing C++ code and dealing with compiler-level optimizations.
Stars: 957
Forks: 137
Language: C++
License: MIT
Category:
Last pushed: Nov 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Tiramisu-Compiler/tiramisu"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
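The same endpoint can be called programmatically. A minimal sketch in Python using only the standard library; the `X-Api-Key` header name is an assumption for illustration, so check the service's own docs for the actual keyed-auth mechanism:

```python
import urllib.request

# Endpoint from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"
REPO = "ml-frameworks/Tiramisu-Compiler/tiramisu"

# Build the request; the X-Api-Key header is hypothetical --
# the free key may be passed differently in practice.
req = urllib.request.Request(
    f"{BASE}/{REPO}",
    headers={"X-Api-Key": "YOUR_KEY"},
)

print(req.full_url)
# To actually fetch the JSON:
#   with urllib.request.urlopen(req) as resp:
#       data = resp.read()
```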
Higher-rated alternatives
apache/tvm
Open Machine Learning Compiler Framework
uxlfoundation/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
Tencent/ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
OpenMined/TenSEAL
A library for doing homomorphic encryption operations on tensors
iree-org/iree-turbine
IREE's PyTorch Frontend, based on Torch Dynamo.