kyegomez/SingLoRA
This repository provides a minimal, single-file implementation of SingLoRA (Single Matrix Low-Rank Adaptation) as described in the paper "SingLoRA: Low Rank Adaptation Using a Single Matrix" by Bensaïd et al.
This tool helps machine learning engineers efficiently fine-tune large language models for specific tasks without retraining the entire model. You provide a pre-trained transformer model (such as DistilBERT or LLaMA) and specify which layers to adapt, and it outputs a modified model with significantly fewer trainable parameters. This is ideal for developers building custom applications that need to specialize a general-purpose model.
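To illustrate the idea, here is a minimal, hypothetical sketch of a SingLoRA-style adapted linear layer in PyTorch. It is not this repository's API: the class name, constructor parameters, and initialization scale are illustrative assumptions. The core of the method, per the paper, is that the frozen base weight W is augmented by a symmetric low-rank update built from a single trainable matrix A (W' = W + (alpha / r) · A Aᵀ), rather than LoRA's two matrices B and A. This sketch covers only square weight matrices and omits the paper's training-time ramp function u(t).

```python
import torch
import torch.nn as nn

class SingLoRALinear(nn.Module):
    """Hypothetical sketch of a SingLoRA-adapted linear layer (square case).

    The frozen base weight W is augmented by a symmetric low-rank update
    built from a single trainable matrix A:
        W' = W + (alpha / r) * A @ A.T
    The paper handles non-square W by truncating A @ A.T; that case is
    omitted here for brevity.
    """

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        d_out, d_in = base.weight.shape
        assert d_out == d_in, "this minimal sketch covers square weights only"
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pre-trained weights
        self.rank, self.alpha = rank, alpha
        # A single trainable matrix A, in contrast to LoRA's pair (B, A).
        # The 0.01 init scale is an illustrative choice, not the paper's.
        self.A = nn.Parameter(torch.randn(d_out, rank) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Symmetric low-rank update applied alongside the frozen base layer.
        delta = (self.alpha / self.rank) * (self.A @ self.A.T)
        return self.base(x) + x @ delta.T

base = nn.Linear(64, 64)
layer = SingLoRALinear(base, rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 64 * 4 = 256 trainable parameters vs 64 * 64 = 4096 in W
```

The parameter saving is the point: for a d × d weight, the adapter trains d·r values instead of d², and only A needs to be stored per task.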
Available on PyPI.
Use this if you need to adapt a large pre-trained language model to a new dataset or task, but want to minimize computational resources and storage requirements during fine-tuning.
Not ideal if you are building a model from scratch, require full control over every parameter during training, or are not working with transformer-based architectures.
Stars: 44
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Mar 09, 2026
Commits (30d): 0
Dependencies: 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/SingLoRA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
hassancs91/SimplerLLM
Simplify interactions with Large Language Models
tylerelyt/LLM-Workshop
🌟 Learn Large Language Model development through hands-on projects and real-world implementations
avilum/minrlm
Token-efficient Recursive Language Model. 3.6x fewer tokens than vanilla LLMs. Data never enters...
NetEase-Media/grps_trtllm
Higher performance OpenAI LLM service than vLLM serve: A pure C++ high-performance OpenAI LLM...
parvbhullar/superpilot
LLMs based multi-model framework for building AI apps.