rentruewang/koila

Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code.

Quality score: 61 / 100 — Established

This project helps PyTorch users avoid the dreaded `CUDA error: out of memory`. By wrapping model inputs in lazy tensors, it defers evaluation and automatically splits work into batches that fit in available GPU memory. Machine learning practitioners who would rather not hand-tune batch sizes for every GPU would use this.

1,829 stars. Actively maintained with 28 commits in the last 30 days.

Use this if you want to run larger models or larger effective batch sizes than your GPU memory would normally allow, without manually tuning batch sizes or restructuring your training loop.

Not ideal if you need fine-grained control over every aspect of your PyTorch tensor operations, since lazy wrapping hides some of the execution details from you.
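The idea behind the one-line fix — defer execution and shrink batches until each step fits in memory — can be illustrated with a small, self-contained sketch. Note that this is not koila's actual implementation or API; the memory check and function names below are illustrative stand-ins:

```python
# Toy illustration of lazy batch splitting: instead of running the whole
# batch at once (and risking OOM), recursively halve it until each chunk
# passes a memory-fit check, then run the step on each chunk.
# NOT koila's real code -- `fits_in_memory` stands in for a GPU memory probe.

def run_in_chunks(batch, run_step, fits_in_memory):
    """Split `batch` in half until `fits_in_memory(len(chunk))` is true,
    then apply `run_step` to each chunk and collect the results."""
    if len(batch) <= 1 or fits_in_memory(len(batch)):
        return [run_step(batch)]
    mid = len(batch) // 2
    return (run_in_chunks(batch[:mid], run_step, fits_in_memory)
            + run_in_chunks(batch[mid:], run_step, fits_in_memory))

# Simulated memory limit: at most 4 samples per step.
results = run_in_chunks(
    list(range(10)),
    run_step=lambda chunk: sum(chunk),      # stand-in for a forward pass
    fits_in_memory=lambda n: n <= 4,        # stand-in for a GPU memory check
)
print(results)  # [1, 9, 11, 24] -- chunk sums; total 45, same as one big pass
```

The per-chunk results can then be combined (e.g. summed or averaged) so the outcome matches what a single large batch would have produced, just without exceeding the memory limit.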

Tags: machine-learning-engineering · data-science · algorithm-development · model-scalability · explainable-AI
Packaging: no package published, no dependents
Maintenance: 20 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 1,829
Forks: 64
Language: Python
License: MIT
Last pushed: Jan 18, 2026
Commits (30d): 28

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rentruewang/koila"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
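The same endpoint can be called from Python using only the standard library. A minimal sketch — the URL pattern is taken from the curl example above, but the response schema is not documented here, so the actual fetch is left commented out:

```python
import urllib.request  # stdlib HTTP client; used only in the commented fetch

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repo.

    Mirrors the curl example: /api/v1/quality/<collection>/<owner>/<name>.
    """
    return f"{API_BASE}/{collection}/{repo}"

url = quality_url("ml-frameworks", "rentruewang/koila")
print(url)
# -> https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rentruewang/koila

# Uncomment to fetch (no key needed for up to 100 requests/day):
# import json
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```

Keeping the URL construction in a small helper makes it easy to query other repos in the same collection without repeating the base path.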