InternLM/xtuner

A Next-Generation Training Engine Built for Ultra-Large MoE Models

Quality score: 76 / 100 (Verified)

XTuner V1 helps machine learning engineers and researchers efficiently train very large AI models, particularly those with Mixture-of-Experts (MoE) architectures. It takes large datasets and model configurations as input and produces powerful, highly optimized AI models. It is aimed at teams doing cutting-edge AI research or deploying state-of-the-art large language models.

5,096 stars. Actively maintained with 66 commits in the last 30 days. Available on PyPI.

Use this if you need to train ultra-large-scale AI models, especially MoE architectures, and require highly efficient training on extensive datasets and long sequences.

Not ideal if you are working with smaller AI models or do not have access to advanced GPU or NPU hardware for large-scale distributed training.

Tags: large-language-models, AI-model-training, deep-learning-research, scalable-AI, multimodal-AI
Maintenance: 22 / 25
Adoption: 10 / 25
Maturity: 25 / 25
Community: 19 / 25


Stars: 5,096
Forks: 405
Language: Python
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 66
Dependencies: 15

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternLM/xtuner"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
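
If you want to pull the same data from a script rather than the shell, a minimal sketch in Python using the requests library is shown below. The endpoint URL and the rate limits come from this page; the header name used for the API key ("X-API-Key") and the shape of the JSON response are assumptions, so inspect the actual response and adjust the parsing accordingly.

import requests

# Quality-report endpoint shown above.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternLM/xtuner"


def fetch_quality(api_key: str | None = None) -> dict:
    """Fetch the quality report as JSON.

    The anonymous tier allows 100 requests/day; a free key is said to raise
    that to 1,000/day. The "X-API-Key" header name is an assumption.
    """
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    data = fetch_quality()
    # Field names in the response are not documented here; print the payload
    # to discover the real schema before relying on specific keys.
    print(data)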