SamerMakni/cuda-selector

A simple tool to select the optimal CUDA device based on custom criteria.

Score: 29 / 100 (Experimental)

This tool helps developers and machine learning engineers manage compute resources efficiently by automatically selecting the best available CUDA (or MPS) device. You provide criteria such as free memory, power draw, temperature, or utilization, and it outputs the identifier of the optimal device(s) for your computational tasks. This is ideal for anyone running data processing, simulation, or AI/ML workloads on systems with multiple GPUs.
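The core idea can be sketched as a weighted scoring over per-device metrics. This is a minimal illustration of the technique, not cuda-selector's actual API: the function name and metric fields are hypothetical, and the stats are mocked where a real implementation would query NVML (e.g. via pynvml).

```python
# Hypothetical sketch of criteria-based device selection.
# Device metrics are mocked; in practice they would come from NVML.

def pick_best_device(devices, weights=None):
    """Return the index of the device with the best weighted score.

    `devices` is a list of dicts of metrics normalized to [0, 1],
    where higher is better; `weights` maps metric names to importance.
    """
    weights = weights or {"free_mem": 1.0}

    def score(dev):
        return sum(w * dev.get(metric, 0.0) for metric, w in weights.items())

    return max(range(len(devices)), key=lambda i: score(devices[i]))

# Mocked metrics for a two-GPU system (hypothetical values).
gpus = [
    {"free_mem": 0.2, "coolness": 0.9},  # busy but cool
    {"free_mem": 0.8, "coolness": 0.4},  # mostly idle but warmer
]

# Weight free memory more heavily than temperature.
best = pick_best_device(gpus, weights={"free_mem": 1.0, "coolness": 0.5})
print(f"cuda:{best}")  # -> cuda:1
```

With these weights, device 1 scores 0.8 + 0.5 * 0.4 = 1.0 against device 0's 0.2 + 0.5 * 0.9 = 0.65, so it is selected.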

No commits in the last 6 months. Available on PyPI.

Use this if you need to programmatically choose the best GPU for your tasks based on its current performance metrics, ensuring your applications run optimally without manual device selection.

Not ideal if you only have one GPU or if your applications do not require dynamic GPU selection based on real-time metrics.

GPU-management resource-optimization deep-learning-infrastructure compute-scheduling MLOps
No License · Stale (6m) · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 17 / 25
Community 8 / 25

How are scores calculated?
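Judging purely from the numbers on this page, the overall score appears to be the sum of the four category scores, each out of 25. This is an inference from the displayed values, not an official formula:

```python
# Category scores as shown on this page (each out of 25).
categories = {"Maintenance": 0, "Adoption": 4, "Maturity": 17, "Community": 8}

# Assumed formula: overall score is the plain sum, out of 100.
total = sum(categories.values())
print(f"{total} / 100")  # -> 29 / 100
```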

Stars: 8
Forks: 1
Language: Python
License: None
Last pushed: Mar 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SamerMakni/cuda-selector"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
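For scripted use, the endpoint URL can be built from its parts. The category/owner/repo path scheme is inferred from the single example above, so treat it as an assumption:

```python
# Base of the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL; path layout inferred from one example."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "SamerMakni", "cuda-selector")
print(url)
```

The result matches the curl example shown above; the response format is not documented here, so no parsing is assumed.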