ParCIS/Chimera

Chimera: bidirectional pipeline parallelism for efficiently training large-scale models.

Overall score: 38 / 100 (Emerging)

This project helps machine learning researchers and engineers train very large neural networks more efficiently. It takes a prepared dataset (such as Wikipedia text for BERT models) and uses bidirectional pipeline parallelism to distribute the training workload across multiple GPUs. The result is a large-scale neural network model trained in less time, ready for deployment or further research. It is aimed at professionals working with state-of-the-art deep learning models.

No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer struggling with the time and computational resources required to train extremely large neural networks.

Not ideal if you are working with smaller models, do not have access to a multi-GPU cluster managed by SLURM, or are not already comfortable with advanced distributed training concepts.

Topics: deep-learning-research, large-scale-model-training, distributed-machine-learning, neural-network-efficiency, AI-model-development
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 13 / 25
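The four subscores sum to the overall score of 38 / 100: 0 + 9 + 16 + 13 = 38 out of a possible 100.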


Stars: 70
Forks: 9
Language: Python
License: GPL-3.0
Last pushed: Mar 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ParCIS/Chimera"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
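For scripted access, here is a minimal Python sketch of the same request using only the standard library. It assumes the endpoint returns a JSON body; the response field names are not documented here, so the snippet simply prints whatever comes back.

import json
import urllib.request

# Same endpoint as the curl example above; anonymous access is limited to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/ParCIS/Chimera"

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)  # parse the JSON response body

# Pretty-print the result; the schema depends on the API and is not assumed here.
print(json.dumps(data, indent=2))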