YanSte/NLP-LLM-Fine-tuning-Llame-2-QLoRA-2024

Natural Language Processing (NLP) and Large Language Models (LLM) with Fine-Tuning LLM QLoRA and Llama 2 in 2024

Score: 28 / 100 (Experimental)

This project helps machine learning engineers or AI developers improve large language models (LLMs) like Llama 2 for specific, niche tasks. By using a technique called QLoRA, it shows how to take a general-purpose LLM and train it further with your own targeted dataset. The result is a specialized LLM that performs better for your unique use case, rather than relying on a model trained for broad applications.

No commits in the last 6 months.

Use this if you have a pre-trained Llama 2 model and a specific dataset, and you want to adapt the model to perform exceptionally well on tasks related to your data.

Not ideal if you are looking to train a large language model from scratch, as this project focuses on fine-tuning an existing one.
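The core idea this project applies (QLoRA is LoRA plus 4-bit quantization of the frozen base model) can be illustrated numerically. The sketch below is a toy example with hypothetical sizes, not the project's actual training code: instead of updating a frozen weight matrix W (d x k), LoRA trains two small matrices B (d x r) and A (r x k) with rank r much smaller than d and k, so the effective weight is W + (alpha / r) * B @ A.

```python
# Toy illustration of the LoRA update used in QLoRA fine-tuning.
# All sizes here are hypothetical; real Llama 2 layers are thousands
# of dimensions wide, which is where the parameter savings matter.

def matmul(X, Y):
    """Plain-Python matrix multiply for the toy example."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, k, r = 4, 4, 1           # frozen weight is d x k; adapter rank r << min(d, k)
alpha = 2.0                 # LoRA scaling factor

W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]  # frozen base weight
B = [[0.5] for _ in range(d)]       # trainable adapter, d x r
A = [[0.1, 0.2, 0.3, 0.4]]          # trainable adapter, r x k

delta = matmul(B, A)                # low-rank update, d x k
scale = alpha / r
W_eff = [[W[i][j] + scale * delta[i][j] for j in range(k)] for i in range(d)]

# Only the adapters train: d*r + r*k = 8 parameters here, versus
# d*k = 16 for full fine-tuning; the gap grows dramatically at scale.
trainable = d * r + r * k
full = d * k
print(trainable, full)
```

At real model sizes (e.g. d = k = 4096, r = 16) the trainable fraction drops below 1%, which, combined with keeping the frozen weights in 4-bit precision, is what lets QLoRA fine-tune Llama 2 on a single consumer GPU.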

AI-development LLM-customization model-training natural-language-processing deep-learning
No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 15 / 25


Stars: 9
Forks: 4
Language: Jupyter Notebook
License: None
Category: llm-fine-tuning
Last pushed: Jan 26, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/YanSte/NLP-LLM-Fine-tuning-Llame-2-QLoRA-2024"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
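The same endpoint can also be queried from Python with only the standard library. A minimal sketch, assuming the response is JSON as the API above implies (the daily request limits still apply):

```python
import json
import urllib.request

# Endpoint taken from the curl example above; the anonymous tier
# (100 requests/day) needs no API key.
BASE = "https://pt-edge.onrender.com/api/v1/quality"
repo = "transformers/YanSte/NLP-LLM-Fine-tuning-Llame-2-QLoRA-2024"
url = f"{BASE}/{repo}"

def fetch_quality(url: str) -> dict:
    """Fetch the quality record as a dict (performs a network request)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Usage (network required):
#   data = fetch_quality(url)
#   print(data)
```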