WangRongsheng/Aurora
The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"
Aurora helps practitioners who need to converse or process information in Chinese using advanced AI models. It takes general chat instructions or Chinese text queries as input and generates high-quality, contextually relevant Chinese responses. This is ideal for roles like content creators, customer service representatives, or data analysts who work with Chinese language data and need sophisticated conversational AI capabilities.
263 stars. No commits in the last 6 months.
Use this if you need an AI model that understands and generates nuanced Chinese text for conversational applications, built upon a powerful Mixture-of-Experts architecture.
Not ideal if your primary need is for non-Chinese language processing or if you require an AI for highly specialized, non-conversational tasks in other domains.
Stars
263
Forks
19
Language
Python
License
Apache-2.0
Category
Last pushed
May 09, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/WangRongsheng/Aurora"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
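For scripted access, the curl call above can be reproduced in Python. The sketch below builds the endpoint URL for any repository slug and parses a sample response; the JSON field names (`stars`, `forks`, etc.) are assumptions inferred from the fields shown on this page, not a documented schema, and the live request is left commented out.

```python
import json

# Endpoint base taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo slug."""
    return f"{API_BASE}/{owner}/{repo}"

# Hypothetical response shape, assumed from the metadata fields on this page.
sample = json.loads("""
{
  "stars": 263,
  "forks": 19,
  "language": "Python",
  "license": "Apache-2.0",
  "last_pushed": "2024-05-09",
  "commits_30d": 0
}
""")

url = quality_url("WangRongsheng", "Aurora")
print(url)
print(f"{sample['stars']} stars, {sample['forks']} forks")

# To fetch live data instead (requires the `requests` package):
# import requests
# data = requests.get(url, timeout=10).json()
```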
Higher-rated alternatives
axolotl-ai-cloud/axolotl
Go ahead and axolotl questions
google/paxml
Pax is a Jax-based machine learning framework for training large scale models. Pax allows for...
JosefAlbers/PVM
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
iamarunbrahma/finetuned-qlora-falcon7b-medical
Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
h2oai/h2o-wizardlm
Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning