kyegomez/SingLoRA

This repository provides a minimal, single-file implementation of SingLoRA (Single Matrix Low-Rank Adaptation) as described in the paper "SingLoRA: Low Rank Adaptation Using a Single Matrix" by Bensaïd et al.

Quality score: 47 / 100 (Emerging)

This tool helps machine learning engineers efficiently fine-tune large language models for specific tasks without retraining the entire model. You provide a pre-trained transformer model (such as DistilBERT or LLaMA) and specify which layers to adapt, and it outputs a modified model with significantly fewer trainable parameters. This is ideal for developers who need to derive specialized models from general-purpose ones for custom applications.
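The core idea from the paper is that, instead of LoRA's two low-rank factors B and A, a single trainable matrix A produces the update W0 + (alpha / r) * A Aᵀ. The sketch below is a minimal, hypothetical PyTorch illustration of that idea for a square linear layer; the class name, constructor arguments, and initialization are illustrative assumptions, not the repository's actual API (the paper also handles rectangular layers by truncating A, which is omitted here).

```python
import torch
import torch.nn as nn

class SingLoRALinear(nn.Module):
    """Hypothetical sketch: adapt a frozen square linear layer with a
    single low-rank matrix A, i.e. W = W0 + (alpha / r) * A @ A.T."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        assert base.in_features == base.out_features, "square-layer sketch only"
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        d = base.in_features
        # The single trainable matrix (LoRA would need two: B and A)
        self.A = nn.Parameter(torch.randn(d, r) * 0.01)
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.scale * (self.A @ self.A.T)  # symmetric, rank <= r
        return self.base(x) + x @ delta  # delta is symmetric, no transpose needed

layer = SingLoRALinear(nn.Linear(64, 64), r=8)
out = layer(torch.randn(2, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
# Only A (64 x 8 = 512 values) is trainable; a comparable LoRA layer
# would train twice as many adapter parameters (B and A).
```

Note the trainable-parameter count is half that of a rank-r LoRA adapter on the same layer, which is the storage saving the description above refers to.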

Available on PyPI.

Use this if you need to adapt a large pre-trained language model to a new dataset or task, but want to minimize computational resources and storage requirements during fine-tuning.

Not ideal if you are building a model from scratch, require full control over every parameter during training, or are not working with transformer-based architectures.

large-language-models model-fine-tuning natural-language-processing resource-optimization machine-learning-engineering
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 24 / 25
Community: 5 / 25


Stars: 44
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 09, 2026
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/SingLoRA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
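The same endpoint can be called from Python. A minimal sketch, assuming only the URL pattern visible in the curl command above (the response schema is not documented here, so the actual request is left commented out):

```python
import urllib.request
import json

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-score API URL following the pattern shown above."""
    return f"https://pt-edge.onrender.com/api/v1/quality/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "kyegomez", "SingLoRA")
print(url)

# Uncomment to perform the request (no key needed, up to 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```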