kreasof-ai/OpenFormer

A hackable library for running and fine-tuning modern transformer models on commodity and alternative GPUs, powered by tinygrad.

Quality score: 39 / 100 (Emerging)

This is a library designed for machine learning engineers and researchers who want to train and run large language models (LLMs) without NVIDIA GPUs. It takes publicly available transformer architectures and lets you fine-tune them on your own data or use them for text generation on a variety of consumer-grade GPUs, including AMD, Intel, and Apple Silicon. The output is a fine-tuned LLM or generated text.

Use this if you are a machine learning engineer or researcher looking to experiment with, train, or fine-tune large language models efficiently on non-NVIDIA GPUs, especially for tasks requiring custom training rather than rapid text generation.

Not ideal if your primary need is ultra-fast, real-time text generation (inference) with large language models: its current inference speed is slower than PyTorch-based implementations.
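To make the "fine-tune with your own data" workflow concrete, here is a minimal gradient-descent training loop in plain Python. This is not OpenFormer's API (this listing does not document it) and the model is a toy linear fit rather than a transformer; it only sketches the train-on-your-own-data loop the library targets, where tinygrad would supply the tensors, autograd, and GPU backends.

```python
def train_linear(data, lr=0.1, epochs=500):
    """Fit y = w*x + b to (x, y) pairs by minimizing mean squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y      # prediction error
            grad_w += 2 * err * x / n  # d(MSE)/dw
            grad_b += 2 * err / n      # d(MSE)/db
        w -= lr * grad_w               # gradient-descent update
        b -= lr * grad_b
    return w, b

# Toy dataset generated from y = 2x + 1
pairs = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w, b = train_linear(pairs)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

In OpenFormer the same shape of loop would run over transformer weights, with tinygrad dispatching the kernels to whichever GPU backend is available.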

Tags: large-language-models, machine-learning-engineering, model-fine-tuning, AI-research, GPU-optimization
No package · No dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 15 / 25
Community 7 / 25


Stars: 28
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Feb 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kreasof-ai/OpenFormer"

Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
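The curl command above can also be issued from Python with the standard library. The request helper below uses only the URL shown in this listing; the response field names in the offline parsing example ("score", "grade") are assumptions, so check the actual JSON the endpoint returns before relying on them.

```python
import json
from urllib.request import Request, urlopen

API = "https://pt-edge.onrender.com/api/v1/quality/transformers/kreasof-ai/OpenFormer"

def fetch_quality(url=API, timeout=10):
    """GET the quality endpoint and decode its JSON body."""
    req = Request(url, headers={"Accept": "application/json"})
    with urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Offline illustration of parsing an assumed response shape
# (hypothetical keys; the real payload may differ):
sample = '{"score": 39, "grade": "Emerging", "maintenance": 10}'
data = json.loads(sample)
print(data["score"], data["grade"])  # → 39 Emerging
```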