llcuda/llcuda
CUDA 12-first inference backend for Unsloth on Kaggle, optimized for small GGUF models (1B-5B parameters) on dual Tesla T4 GPUs (15 GB each, SM 7.5)
This project helps data scientists, machine learning engineers, and researchers run small to medium-sized language models (1B-5B parameters) efficiently on Kaggle's dual Tesla T4 GPU environment. Given a pre-trained or fine-tuned model in GGUF format, it provides a fast inference engine that outputs generated text. Use it when you need optimized performance and GPU resource allocation for such models on Kaggle.
Use this if you need to run small GGUF language models (1B-5B parameters) quickly and efficiently on Kaggle's dual Tesla T4 GPUs, especially if you want to dedicate one GPU to visualization while the other handles model inference.
Not ideal if you are working with very large language models (e.g., >70B parameters) or if you are not operating within a Kaggle dual Tesla T4 GPU environment.
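The one-GPU-for-inference, one-for-visualization split described above can be sketched with standard CUDA device masking. llcuda's own configuration API is not shown on this page, so the snippet below is a generic approach, not the project's documented interface:

```python
# Sketch: reserve GPU 0 for model inference and leave GPU 1 free for
# visualization. CUDA_VISIBLE_DEVICES must be set before any CUDA-aware
# library creates a context (i.e., before importing torch, llama-cpp, etc.).
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # this process sees only GPU 0

# A second process or notebook kernel can instead set:
# os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # visualization on GPU 1
```

Because the variable is read at CUDA context creation, setting it mid-session after a framework has already initialized the GPU has no effect.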
Stars: 8
Forks: 1
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Feb 01, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/llcuda/llcuda"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
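From a notebook, the same endpoint can be queried with Python's standard library. The response schema is not documented on this page, so the actual fetch is left as a comment and only the URL construction is shown; the `quality_url` helper is illustrative, not part of any published client:

```python
# Build the per-repository quality endpoint URL (path segments taken from
# the curl example above: /quality/<ecosystem>/<owner>/<repo>).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Assemble the quality-API URL for a single repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "llcuda", "llcuda")
print(url)

# To actually fetch (requires network access; fields are undocumented here):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```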
Higher-rated alternatives
quic/efficient-transformers
This library empowers users to seamlessly port pretrained models and checkpoints on the...
ManuelSLemos/RabbitLLM
Run 70B+ LLMs on a single 4GB GPU — no quantization required.
alpa-projects/alpa
Training and serving large-scale neural networks with auto parallelization.
arm-education/Advanced-AI-Hardware-Software-Co-Design
Hands-on course materials for ML engineers to master extreme model quantization and on-device...
IST-DASLab/marlin
FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batchsizes...