AbineshSivakumar/Llama-2-7B-QLoRA-Vicuna

This repository contains code to fine-tune the Llama-2-7B-Uncensored model on the Vicuna 70k dataset using Quantized Low-Rank Adaptation (QLoRA).

27 / 100 · Experimental

This project helps AI developers adapt the Llama-2-7B-Uncensored language model to generate responses in the style of the Vicuna model while using fewer computational resources. You provide the base Llama-2-7B-Uncensored model and the Vicuna 70k conversation dataset, and it outputs a fine-tuned model ready for chat-based applications. It is intended for AI engineers and researchers working with large language models.

No commits in the last 6 months.

Use this if you need to create a specialized Llama-2-7B-Uncensored model that generates human-like conversations, similar to Vicuna, while optimizing for resource efficiency.

Not ideal if you're looking for an out-of-the-box conversational AI model without any customization or fine-tuning, or if your primary goal isn't related to conversational AI.
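The core idea behind the QLoRA approach this repo uses: the base weights stay frozen (and quantized), and only a pair of small low-rank matrices per layer is trained, so the adapted weight is effectively W + (alpha/r)·B·A. A minimal NumPy illustration of that low-rank update (sizes and scaling factor are illustrative, not the model's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                    # hidden size and LoRA rank (illustrative)
W = rng.normal(size=(d, d))    # frozen base weight (quantized in real QLoRA)
A = rng.normal(size=(r, d))    # trainable down-projection
B = np.zeros((d, r))           # trainable up-projection, zero-initialized so
                               # the adapter starts as a no-op
alpha = 4.0                    # LoRA scaling factor

def adapted(x):
    # Forward pass: base output plus the scaled low-rank correction.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# With B zero-initialized, the adapter contributes nothing yet, so the
# adapted model matches the base model exactly at the start of training:
assert np.allclose(adapted(x), x @ W.T)
```

Only A and B (2·d·r values per layer instead of d²) receive gradients, which is where the resource savings come from.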

AI-model-customization conversational-AI-development language-model-fine-tuning resource-efficient-AI natural-language-generation
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 15 / 25
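The four category scores above appear to sum to the headline score; a quick check with the values shown on this page:

```python
# Category scores copied from this page; each is out of 25.
scores = {"Maintenance": 0, "Adoption": 4, "Maturity": 8, "Community": 15}

# The overall score is out of 100 (4 categories x 25 points).
total = sum(scores.values())
assert total == 27  # matches the 27 / 100 shown above
```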


Stars: 8
Forks: 4
Language: Python
License: None
Last pushed: Oct 24, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AbineshSivakumar/Llama-2-7B-QLoRA-Vicuna"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
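The same endpoint can be called from Python; a small sketch using only the standard library (the response schema is not documented on this page, so the fetch is left commented):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, name: str) -> str:
    """Build the quality-API endpoint URL for a repository."""
    return f"{BASE}/{registry}/{name}"

url = quality_url("transformers", "AbineshSivakumar/Llama-2-7B-QLoRA-Vicuna")

# Uncomment to fetch (100 requests/day without a key, 1,000/day with one):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```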