fshnkarimi/Fine-tuning-an-LLM-using-LoRA
📚 Text Classification with LoRA (Low-Rank Adaptation) of Language Models - Efficiently fine-tune large language models for text classification tasks using the Stanford Sentiment Treebank (SST-2) dataset and the LoRA technique.
This repository helps machine learning engineers efficiently customize large language models for specific text classification needs, such as sentiment analysis. You provide a general-purpose language model and your task-specific text data; the fine-tuning pipeline outputs a more accurate, specialized model that can categorize new text. It is aimed at ML practitioners who work with language models and need to adapt them without extensive computational resources.
No commits in the last 6 months.
Use this if you need to fine-tune a large language model for a text classification task, such as sentiment analysis, and want to do so efficiently with limited computational resources.
Not ideal if you are not a machine learning practitioner or if your task doesn't involve adapting large language models for text classification.
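LoRA's core idea can be sketched in a few lines: the pretrained weight matrix W stays frozen, and only a low-rank update B·A (scaled by alpha/r) is trained. A minimal NumPy illustration of the parameter savings — the dimensions and hyperparameters below are illustrative defaults, not values taken from this repository's notebook:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 768, 768   # pretrained weight shape (illustrative)
r, alpha = 8, 16  # LoRA rank and scaling factor (illustrative defaults)

W = rng.standard_normal((d, k))         # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, k))  # trainable, small random init
B = np.zeros((d, r))                    # trainable, zero init => no change at start

# Effective weight during fine-tuning: W itself is never updated;
# gradients flow only into A and B.
W_eff = W + (alpha / r) * (B @ A)

trainable = A.size + B.size
full = W.size
print(f"trainable params: {trainable} vs full fine-tune: {full}")
print(f"fraction: {trainable / full:.4f}")
```

Because B starts at zero, W_eff equals W before any training step, so fine-tuning begins exactly from the pretrained model's behavior; here the trainable parameters are roughly 2% of the full weight matrix, which is what makes the approach feasible on limited hardware.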
Stars: 55
Forks: 8
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Sep 22, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/fshnkarimi/Fine-tuning-an-LLM-using-LoRA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
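The same endpoint can be called from Python using only the standard library. A minimal sketch — the URL is the one shown above, but the response's JSON field names are not documented here, so the result is parsed generically into a dict:

```python
import json
import urllib.request

repo = "fshnkarimi/Fine-tuning-an-LLM-using-LoRA"
url = f"https://pt-edge.onrender.com/api/v1/quality/transformers/{repo}"

def fetch_quality(url: str) -> dict:
    """Fetch the quality record as a dict (100 requests/day without a key)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# data = fetch_quality(url)  # uncomment to hit the live API
print(url)
```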
Higher-rated alternatives
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training