SandAI-org/MagiCompiler

A plug-and-play compiler that delivers free-lunch optimizations for both inference and training.

Score: 49 / 100 (Emerging)

This tool helps machine learning engineers and researchers accelerate the training and deployment of large AI models, particularly Transformer-based ones. It takes your existing model code and, by optimizing how the model uses computational resources, produces a significantly faster version, improving both training throughput and inference latency across a range of applications.


Use this if you are working with large AI models and need to reduce their training time or speed up their response time during operation, especially for multi-modality tasks or memory-constrained environments.

Not ideal if you are working with small models or non-Transformer architectures, or if you do not have control over the underlying Python and PyTorch environment.

deep-learning-optimization large-language-models model-training ai-inference resource-management
No package · No dependents
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 13 / 25
Community: 13 / 25


Stars: 234
Forks: 17
Language: Python
License: Apache-2.0
Last pushed: Mar 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/SandAI-org/MagiCompiler"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
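The curl call above can also be made from Python. The endpoint URL is taken from this page; the helper names (`quality_url`, `fetch_quality`) are illustrative, and any JSON field names in the response are assumptions to verify against the actual API output.

```python
# Sketch of querying the quality API from Python instead of curl.
# Only the base URL is documented on this page; inspect the real
# JSON response before relying on specific field names.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/llm-tools/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record and decode the JSON body."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the same URL used in the curl example above.
    print(quality_url("SandAI-org", "MagiCompiler"))
```

Within the free tier (100 requests/day without a key), no authentication header is needed; how a key is supplied for the 1,000/day tier is not documented here.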