clearml/clearml-fractional-gpu

ClearML Fractional GPU - Run multiple containers on the same GPU with driver-level memory limitation ✨ and compute time-slicing

Score: 44 / 100 · Emerging

This project helps AI developers and researchers efficiently share powerful GPUs among multiple users or workloads. It takes existing AI models or training jobs, packaged as Docker containers, and runs them concurrently on the same GPU, with driver-level memory limits and compute time-slicing so that no single job monopolizes the hardware. The result is a more cost-effective, better-utilized GPU infrastructure for AI development.

Use this if you need to run multiple AI workloads or experiments simultaneously on a single GPU, ensuring each container-based job gets a fair share of GPU memory and compute time.

Not ideal if your AI workloads cannot be containerized, or if you exclusively use GPUs statically partitioned with NVIDIA MIG and need no dynamic adjustment.
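To make the container-based workflow above concrete, a typical invocation pulls a prebuilt image with a baked-in GPU memory cap and runs it like any other Docker container. The image tag and flags below are illustrative, based on the naming pattern in the project's README; verify the current tag list and required flags against the repository before use.

```shell
# Illustrative sketch: run a container limited to a slice of GPU 0's memory.
# The tag "u22-cu12.3-8gb" (Ubuntu 22.04, CUDA 12.3, 8 GB cap) follows the
# repo's apparent naming convention and may differ from the published tags.
docker run -it --gpus 0 --ipc=host --pid=host \
    clearml/fractional-gpu:u22-cu12.3-8gb \
    nvidia-smi
# Inside the container, nvidia-smi should report only the capped memory slice.
```

Because the limit is enforced at the driver level inside the container, several such containers can share one physical GPU without any one of them exhausting its memory.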

Tags: AI-development · MLOps · resource-management · GPU-utilization · deep-learning-infrastructure
No package published · No dependents

Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 9 / 25

How are scores calculated?
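One observation about the numbers on this page: the overall 44 / 100 equals the sum of the four subscores, each out of 25. This is an inference from the figures shown here, not documented behavior of the scoring site:

```python
# The four subscores displayed above, each on a 0-25 scale.
subscores = {
    "Maintenance": 10,
    "Adoption": 9,
    "Maturity": 16,
    "Community": 9,
}

# Four subscores out of 25 each give an overall score out of 100.
total = sum(subscores.values())
print(total)  # 44, matching the "44 / 100" badge above
```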

Stars: 90
Forks: 6
Language: (not listed)
License: (not listed)
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/clearml/clearml-fractional-gpu"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
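The same endpoint can be called from Python instead of curl. The URL shape is taken from the curl example above; the structure of the JSON response is not shown on this page, so inspect the actual payload before relying on any field names:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a project, mirroring the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("generative-ai", "clearml", "clearml-fractional-gpu")
print(url)

# Uncomment to perform the request (no API key needed, 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```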