Adlith/MoE-Jetpack

[NeurIPS 24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks

28 / 100 (Experimental)

This project helps machine learning engineers and researchers accelerate training and improve the performance of their computer vision models. It takes an existing pre-trained dense vision model and transforms it into a more efficient Mixture of Experts (MoE) model, yielding faster training, higher accuracy, and better generalization across image and video analysis tasks.
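The central idea, recycling a dense checkpoint to initialize the experts of an MoE layer, can be sketched in a few lines. Below is a minimal PyTorch sketch, assuming a generic dense FFN module; the class name, plain weight copies, and top-k routing are illustrative simplifications, not the repo's actual checkpoint-recycling or SpheroMoE implementation.

import copy
import torch
import torch.nn as nn

class RecycledMoEFFN(nn.Module):
    # Hypothetical sketch of checkpoint recycling: every expert in the new
    # MoE layer is initialized from one pre-trained dense FFN, so fine-tuning
    # starts from the dense checkpoint instead of random weights.
    def __init__(self, dense_ffn: nn.Module, dim: int, num_experts: int = 4, top_k: int = 1):
        super().__init__()
        self.experts = nn.ModuleList([copy.deepcopy(dense_ffn) for _ in range(num_experts)])
        self.router = nn.Linear(dim, num_experts)  # learned token-to-expert gate
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim); route each token to its top-k experts.
        gates, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            hit = (idx == e).any(dim=-1)           # tokens whose top-k includes expert e
            if hit.any():
                w = gates[hit][idx[hit] == e]      # gate weight of expert e per token
                out[hit] = out[hit] + w.unsqueeze(-1) * expert(x[hit])
        return out

# Example: recycle a ViT-style MLP block into a 4-expert MoE layer.
dense = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
moe = RecycledMoEFFN(dense, dim=768, num_experts=4)
tokens = torch.randn(197, 768)
print(moe(tokens).shape)  # torch.Size([197, 768])

Note that the paper's method samples and adapts the dense weights rather than duplicating them verbatim; plain deep copies are used here only to keep the sketch short.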

134 stars. No commits in the last 6 months.

Use this if you are building computer vision applications and want to achieve state-of-the-art performance with faster training times and improved model efficiency, especially for tasks involving image classification, object detection, or segmentation.

Not ideal if you are not starting from a pre-trained dense vision model, or if your primary focus is on non-vision machine learning tasks.

computer-vision · machine-learning-engineering · model-optimization · deep-learning · image-recognition
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 2 / 25

Stars: 134
Forks: 1
Language: Python
License: Apache-2.0
Last pushed: Nov 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Adlith/MoE-Jetpack"

Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000 requests/day.
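The same endpoint can be queried from Python using only the standard library. A minimal sketch; the response schema is not documented here, so the code simply pretty-prints whatever JSON the endpoint returns rather than assuming field names.

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Adlith/MoE-Jetpack"
with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp)          # parse the JSON payload

print(json.dumps(data, indent=2))   # inspect the fields the API actually returns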