maryam089/SDViT

Official repository for "Self-Distilled Vision Transformer for Domain Generalization" (ACCV-2022 ORAL)

Quality score: 38 / 100 (Emerging)

This project offers an improved way to train Vision Transformer models so they classify images accurately even when the image style, background, or other characteristics differ significantly from the training data. It takes existing image datasets and Vision Transformer models, applies a self-distillation technique, and outputs a more robustly trained model ready for real-world deployment. Scientists, machine learning engineers, and researchers working with image classification in varied environments would use this.
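
The self-distillation here trains intermediate transformer blocks to match soft predictions from the final block. Below is a minimal sketch of that general idea, assuming per-block logits are already available; the function name, temperature tau, and weight alpha are illustrative choices, not the repository's actual API.

import random
import torch
import torch.nn.functional as F

def self_distillation_loss(inter_logits_per_block, final_logits, labels, tau=3.0, alpha=0.5):
    # Standard cross-entropy on the final block's prediction.
    ce = F.cross_entropy(final_logits, labels)
    # Sample one intermediate block per step (an illustrative choice).
    inter_logits = random.choice(inter_logits_per_block)
    # Soften both distributions with temperature tau and match them via KL;
    # the final block acts as the teacher, so its gradient is detached.
    student = F.log_softmax(inter_logits / tau, dim=-1)
    teacher = F.softmax(final_logits.detach() / tau, dim=-1)
    kd = F.kl_div(student, teacher, reduction="batchmean") * tau * tau
    return ce + alpha * kd

# Example with dummy tensors: 11 intermediate blocks, batch of 8, 10 classes.
inter = [torch.randn(8, 10) for _ in range(11)]
loss = self_distillation_loss(inter, torch.randn(8, 10), torch.randint(0, 10, (8,)))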

No commits in the last 6 months.

Use this if you need your image classification models to perform reliably on new, unseen image data distributions, without requiring retraining on every new scenario.

Not ideal if your image data distribution is always consistent and doesn't vary much from your training data.

image-classification computer-vision machine-learning-research model-robustness out-of-distribution-detection
Status: Stale (6 months); no package published; no dependents.
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 14 / 25

Stars: 42
Forks: 7
Language: Python
License: MIT
Last pushed: Dec 02, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/maryam089/SDViT"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
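
For programmatic use, a short Python sketch of the same request is below; the response schema isn't documented here, so it simply prints whatever JSON comes back.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/maryam089/SDViT"

# Fetch the quality record and pretty-print the raw JSON payload.
with urllib.request.urlopen(URL) as resp:
    print(json.dumps(json.load(resp), indent=2))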