mindspore-lab/mindnlp
MindSpore + 🤗Huggingface: Run any Transformers/Diffusers model on MindSpore with seamless compatibility and acceleration.
This project helps AI developers and researchers efficiently run a wide range of pre-trained models, including large language models and image-generation models, on MindSpore with hardware acceleration. It executes existing HuggingFace model code, unchanged, with accelerated execution on hardware such as Ascend NPUs and NVIDIA GPUs. It is primarily used by machine learning engineers, data scientists, and AI researchers working with MindSpore.
913 stars. Maintained, with 1 commit in the last 30 days.
Use this if you are a machine learning engineer or researcher looking to deploy and accelerate HuggingFace models on MindSpore, especially if you need native support for Ascend NPUs.
Not ideal if you are exclusively working with PyTorch or TensorFlow ecosystems and do not use MindSpore for model deployment or acceleration.
Stars
913
Forks
267
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 08, 2026
Commits (30d)
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mindspore-lab/mindnlp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
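The endpoint above returns JSON. Here is a minimal Python sketch of building the request URL and summarizing a response; the field names (`stars`, `forks`, `commits_30d`) are assumptions modeled on the stats shown on this page, not a documented schema:

```python
import json

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub repo."""
    return f"{API_BASE}/{owner}/{repo}"

def summarize(payload: dict) -> str:
    """Format a one-line summary from a response payload.

    The field names used here ('stars', 'forks', 'commits_30d') are
    assumptions based on the stats shown on the page, not a documented schema.
    """
    return (f"{payload.get('stars', '?')} stars, "
            f"{payload.get('forks', '?')} forks, "
            f"{payload.get('commits_30d', '?')} commits in 30d")

# Offline example using the numbers shown above; a live call would use
# urllib.request.urlopen(quality_url("mindspore-lab", "mindnlp")).
sample = json.loads('{"stars": 913, "forks": 267, "commits_30d": 1}')
print(quality_url("mindspore-lab", "mindnlp"))
print(summarize(sample))
```

With a free API key, the same URL can be called up to 1,000 times per day instead of 100.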
Related models
AI-Hypercomputer/maxtext
A simple, performant and scalable Jax LLM!
rasbt/reasoning-from-scratch
Implement a reasoning LLM in PyTorch from scratch, step by step
mosaicml/llm-foundry
LLM training code for Databricks foundation models
rickiepark/llm-from-scratch
Code repository for the Korean edition of *Build a Large Language Model (From Scratch)* (Gilbut, 2025)
CASE-Lab-UMD/LLM-Drop
The official implementation of the paper "Uncovering the Redundancy in Transformers via a...