GeeeekExplorer/transformers-patch

Patches for HuggingFace Transformers to save memory

Score: 35 / 100 (Emerging)

This helps AI engineers and researchers reduce the memory footprint of large language models built on HuggingFace Transformers. By simply importing a patch, you can load and run larger models or process longer sequences with the same GPU resources, avoiding out-of-memory errors. It works on existing Transformers models, letting them operate more efficiently within your available hardware.

No commits in the last 6 months.

Use this if you are an AI engineer or researcher experiencing GPU memory limitations when running or fine-tuning large language models from HuggingFace Transformers.

Not ideal if you are not working with HuggingFace Transformers models or if your primary bottleneck is not GPU memory.

Tags: Large Language Models · GPU Optimization · AI Engineering · Deep Learning · Infrastructure · Model Deployment
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 15 / 25
Community: 11 / 25


Stars: 35
Forks: 4
Language: Python
License: MIT
Last pushed: Jun 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/GeeeekExplorer/transformers-patch"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
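The curl command above can also be issued from Python. A minimal sketch using only the standard library; the endpoint URL is taken from the curl example, while the shape of the JSON response is not documented here, so the function simply returns the parsed payload:

```python
import json
import urllib.request

# Endpoint copied from the curl example above; no API key is
# required for up to 100 requests/day.
API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/GeeeekExplorer/transformers-patch"
)

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the quality report and parse it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Note that calling `fetch_quality()` performs a live HTTP request to the pt-edge service, so it counts against the daily request quota.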