AbineshSivakumar/Llama-2-7B-QLoRA-Vicuna
This repository contains code to fine-tune a Llama-2-7B-Uncensored model on the Vicuna 70k dataset using Quantized Low-Rank Adaptation (QLoRA).
This project helps AI developers adapt the Llama-2-7B-Uncensored language model to generate Vicuna-style responses with far less compute than full fine-tuning. You provide the base Llama-2-7B-Uncensored model and the Vicuna 70k conversation dataset, and it outputs a fine-tuned model ready for chat-based applications. It's intended for AI engineers and researchers working with large language models.
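For orientation, here is a minimal sketch of what QLoRA fine-tuning of this kind typically looks like with Hugging Face transformers, peft, and bitsandbytes. The base model ID, rank, and target modules are illustrative assumptions, not the repository's exact script:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Assumed base checkpoint; the repo's actual model ID may differ.
base_model = "georgesung/llama2_7b_chat_uncensored"

# 4-bit NF4 quantization: the "Q" in QLoRA keeps the frozen base weights small.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are the only trainable weights; r, alpha, and
# target_modules here are common choices, not the repo's settings.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

The quantized base model stays frozen; only the small adapter matrices are trained, which is what makes a 7B model trainable on a single consumer GPU.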
No commits in the last 6 months.
Use this if you need to create a specialized Llama-2-7B-Uncensored model that generates human-like conversations, similar to Vicuna, while optimizing for resource efficiency.
Not ideal if you're looking for an out-of-the-box conversational AI model without any customization or fine-tuning, or if your primary goal isn't related to conversational AI.
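At inference time, the trained adapter is loaded on top of the base model. A hedged sketch, assuming the adapter is published under the repository name on the Hugging Face Hub:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed IDs for base checkpoint and adapter; adjust to the actual artifacts.
base_id = "georgesung/llama2_7b_chat_uncensored"
adapter_id = "AbineshSivakumar/Llama-2-7B-QLoRA-Vicuna"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights

prompt = "USER: What is QLoRA?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```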
Stars: 8
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Oct 24, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AbineshSivakumar/Llama-2-7B-QLoRA-Vicuna"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
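The same endpoint can be queried from Python; a minimal equivalent of the curl call above, assuming the endpoint returns JSON:

```python
import requests

# Same endpoint as the curl example; no key needed up to 100 requests/day.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/AbineshSivakumar/Llama-2-7B-QLoRA-Vicuna")
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # response schema isn't documented here; inspect the payload
```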
Higher-rated alternatives
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training