prakash-aryan/qwen-arabic-project

This project fine-tunes the Qwen2-1.5B model for Arabic language tasks using Quantized LoRA (QLoRA).

Quality score: 35 / 100 (Emerging)

This project helps AI developers and researchers create custom Arabic large language models without needing massive computing resources. It takes general Arabic text datasets and fine-tunes a base Qwen model, resulting in a specialized Arabic LLM that can be run on more modest hardware. Data scientists and machine learning engineers working on Arabic NLP applications would find this useful.

No commits in the last 6 months.

Use this if you need to build or adapt a capable Arabic language model with limited GPU memory (e.g., 4GB VRAM) for tasks like text classification, question answering, or dialect identification.
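The 4GB-VRAM figure above is plausible from a back-of-envelope memory calculation: quantizing the 1.5B base weights to 4 bits shrinks them to a quarter of their fp16 size, leaving room for a small LoRA adapter. A rough sketch (the adapter size is a hypothetical value, and overhead for activations, optimizer state, and the KV cache is ignored):

```python
# Back-of-envelope VRAM estimate for QLoRA on a 1.5B-parameter model.
# Rough illustration only: ignores activations, optimizer state, KV cache.

PARAMS = 1.5e9          # Qwen2-1.5B parameter count (approximate)
GB = 1e9                # decimal gigabytes

fp16_weights_gb = PARAMS * 2 / GB     # 2 bytes per fp16 weight
nf4_weights_gb = PARAMS * 0.5 / GB    # 4 bits = 0.5 bytes per weight

# A LoRA adapter is tiny by comparison; 20M trainable parameters is a
# hypothetical but typical order of magnitude for this model size.
lora_params = 20e6
lora_fp16_gb = lora_params * 2 / GB

print(f"fp16 base weights:  {fp16_weights_gb:.2f} GB")   # 3.00 GB
print(f"4-bit base weights: {nf4_weights_gb:.2f} GB")    # 0.75 GB
print(f"LoRA adapter:       {lora_fp16_gb:.2f} GB")      # 0.04 GB
```

Even with training overhead on top, the quantized base plus adapter fits under a 4 GB budget, which full fp16 fine-tuning of the model (3 GB of weights before gradients and optimizer state) would not.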

Not ideal if you require a pre-trained, production-ready Arabic LLM without any custom fine-tuning or model building on your part.

Tags: Arabic NLP, LLM fine-tuning, resource-efficient AI, language model adaptation, AI development
Status: Stale (6m), no package, no dependents
Maintenance 2 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 12 / 25


Stars: 11
Forks: 2
Language: Python
License: GPL-3.0
Category: llm-fine-tuning
Last pushed: May 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/prakash-aryan/qwen-arabic-project"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.