Frikallo/axiom
High-performance C++ tensor library with NumPy/PyTorch-like API
Axiom is a C++ library for high-performance numerical computing in tasks like machine learning, scientific simulation, and data analysis. It provides NumPy/PyTorch-style tensor operations on large datasets that compile down to fast native code, so the resulting applications run at full C++ speed. It's aimed at C++ developers who know Python's NumPy or PyTorch and want similar ease of use without giving up native performance.
Use this if you are a C++ developer building applications that require high-speed tensor computations, especially on Apple Silicon where it offers zero-copy CPU-GPU memory transfers and Metal GPU acceleration.
Not ideal if you are not a C++ developer, or if your numerical workloads are already well served inside Python by NumPy or PyTorch and don't need a native C++ performance boost.
Stars
102
Forks
2
Language
C++
License
MIT
Category
ml-frameworks
Last pushed
Mar 06, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Frikallo/axiom"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
iree-org/iree
A retargetable MLIR-based machine learning compiler and runtime toolkit.
brucefan1983/GPUMD
Graphics Processing Units Molecular Dynamics
uxlfoundation/oneDAL
oneAPI Data Analytics Library (oneDAL)
rapidsai/cuml
cuML - RAPIDS Machine Learning Library
NVIDIA/cutlass
CUDA Templates and Python DSLs for High-Performance Linear Algebra