CTO92/PyFlame

A Python deep learning framework with lazy evaluation, automatic differentiation, and a PyTorch-like API. Features include neural network modules, data loading, training utilities, model serving, and integrations with MLflow, W&B, ONNX, and Jupyter.

Overall score: 40 / 100 (Emerging)

This is a specialized deep learning framework for organizations building or porting AI models that need to run on Cerebras Wafer-Scale Engine (WSE) hardware. It takes your neural network models, defined using a familiar PyTorch-like interface, and compiles them into highly optimized code for the WSE. This tool is designed for AI infrastructure engineers and machine learning practitioners focused on maximizing performance on specialized AI accelerators.

Use this if you are developing high-performance AI models specifically for deployment on Cerebras WSE hardware and need a framework designed for that architecture.

Not ideal if you are working with standard GPU-based deep learning or if your organization doesn't have access to Cerebras WSE hardware and its proprietary SDK.

Tags: AI-accelerator-development, deep-learning-engineering, hardware-aware-ML, custom-AI-tooling
No package published · No dependents
Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 13 / 25
Community: 8 / 25


Stars: 76
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: Jan 29, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/CTO92/PyFlame"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
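The curl command above can also be wrapped in a small Python helper. This is a minimal sketch: only the URL pattern (`/api/v1/quality/{category}/{owner}/{repo}`) comes from the example above; the JSON response shape and the `Authorization: Bearer` header used to send an API key are assumptions, not documented here.

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"


def fetch_quality(category: str, owner: str, repo: str, api_key=None) -> dict:
    """Fetch the quality data and parse it as JSON.

    The Bearer-token header for the optional API key is a guess at the
    auth scheme; the service's actual key mechanism is not shown here.
    """
    req = Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # hypothetical
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the endpoint from the curl example above.
    print(quality_url("ml-frameworks", "CTO92", "PyFlame"))
```

At the free tier no key is required, so `fetch_quality("ml-frameworks", "CTO92", "PyFlame")` should work as-is within the 100-requests/day limit.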