pjlab-sys4nlp/llama-moe
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
This project offers more affordable and efficient large language models (LLMs) by building Mixture-of-Experts (MoE) versions of existing LLaMA models. It partitions the feed-forward layers of a LLaMA base model into experts and continually pre-trains the result on curated data, yielding models that activate only a fraction of their parameters per token while performing close to the original dense model. This is ideal for machine learning engineers, researchers, and data scientists who want to deploy capable language models with reduced computational resources.
1,002 stars. No commits in the last 6 months.
Use this if you need to run powerful LLaMA-based language models but are limited by computational resources or budget.
Not ideal if you need a full-size, unmodified LLaMA model, or if you're not comfortable with model fine-tuning and deployment.
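The released checkpoints load through the standard Hugging Face transformers API with custom modeling code. A minimal sketch is below; the model ID "llama-moe/LLaMA-MoE-v1-3_5B-2_8" is one of the project's published checkpoints, but verify the exact name against the repository's README before use.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Example checkpoint name from the project's Hugging Face releases;
# confirm the exact ID in the repository's README before running.
model_id = "llama-moe/LLaMA-MoE-v1-3_5B-2_8"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # the MoE layers ship as custom modeling code
)
model.eval()

inputs = tokenizer("Suzhou is famous for", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))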
Stars: 1,002
Forks: 62
Language: Python
License: Apache-2.0
Last pushed: Dec 06, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/pjlab-sys4nlp/llama-moe"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
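For scripted access, the same endpoint can be fetched from Python. This is a sketch assuming the endpoint returns JSON; inspect the payload before relying on specific fields.

import requests

# Public endpoint shown above; 100 requests/day without an API key.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/pjlab-sys4nlp/llama-moe"

response = requests.get(url, timeout=30)
response.raise_for_status()
payload = response.json()  # assumed JSON; fall back to response.text otherwise
print(payload)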
Higher-rated alternatives
galilai-group/stable-pretraining
Reliable, minimal and scalable library for pretraining foundation and world models
CognitiveAISystems/MAPF-GPT
[AAAI-2025] This repository contains MAPF-GPT, a deep learning-based model for solving MAPF...
UKPLab/gpl
Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled...
larslorch/avici
Amortized Inference for Causal Structure Learning, NeurIPS 2022
svdrecbd/mhc-mlx
MLX + Metal implementation of mHC: Manifold-Constrained Hyper-Connections by DeepSeek-AI.