prakash-aryan/qwen-arabic-project
This project fine-tunes the Qwen2-1.5B model for Arabic language tasks using Quantized LoRA (QLoRA).
It lets AI developers and researchers build custom Arabic large language models without massive computing resources: it fine-tunes the base Qwen model on general Arabic text datasets, producing a specialized Arabic LLM that can run on modest hardware. Data scientists and machine learning engineers working on Arabic NLP applications would find this useful.
No commits in the last 6 months.
Use this if you need to build or adapt a capable Arabic language model with limited GPU memory (e.g., 4GB VRAM) for tasks like text classification, question answering, or dialect identification.
Not ideal if you require a pre-trained, production-ready Arabic LLM without any custom fine-tuning or model building on your part.
Stars: 11
Forks: 2
Language: Python
License: GPL-3.0
Category:
Last pushed: May 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/prakash-aryan/qwen-arabic-project"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
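The same endpoint can be called from Python. Only the URL comes from this page; the helper function and response schema are assumptions.

```python
# Build the API URL for this listing service and (optionally) fetch it.
# The URL pattern is taken from the curl example above; the helper
# function name is hypothetical.
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Return the quality-data API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("prakash-aryan", "qwen-arabic-project")

# Anonymous access allows 100 requests/day; uncomment to fetch:
# with urlopen(Request(url, headers={"Accept": "application/json"}),
#              timeout=10) as resp:
#     data = json.load(resp)
```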
Higher-rated alternatives
- axolotl-ai-cloud/axolotl: Go ahead and axolotl questions
- google/paxml: Pax is a Jax-based machine learning framework for training large scale models. Pax allows for...
- JosefAlbers/PVM: Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
- iamarunbrahma/finetuned-qlora-falcon7b-medical: Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
- h2oai/h2o-wizardlm: Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning