wasidennis/AdaptSegNet

Learning to Adapt Structured Output Space for Semantic Segmentation, CVPR 2018 (spotlight)

Quality score: 43 / 100 (Emerging)

This project helps computer vision researchers adapt semantic segmentation models trained on synthetic images (such as GTA5) so they perform accurately on real-world images (such as Cityscapes). You provide a pre-trained model plus image datasets from both the synthetic and the real domain, and it outputs a more robust segmentation model that accurately identifies objects in real-world scenes. It is aimed at researchers in computer vision or autonomous driving who need to improve model performance across visual domains without extensive real-world labeling.

861 stars. No commits in the last 6 months.

Use this if you need to improve the accuracy of semantic segmentation models on real-world image datasets, especially when your initial training data is synthetic and you want to avoid costly real-world data labeling.

Not ideal if you are looking for an out-of-the-box solution for general image classification or object detection, or if you don't have access to both synthetic and real-world image datasets for domain adaptation.

semantic-segmentation domain-adaptation autonomous-driving computer-vision-research image-analysis
No License, Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 25 / 25
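The four subscores above add up to the headline score; a minimal check in Python, assuming each axis is capped at 25 as shown:

```python
# Subscores as listed on this page; each axis is assumed to be scored out of 25.
subscores = {"Maintenance": 0, "Adoption": 10, "Maturity": 8, "Community": 25}

# The overall quality score is the plain sum of the four axes.
total = sum(subscores.values())
print(total)  # 43, matching the headline 43 / 100
```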


Stars: 861
Forks: 207
Language: Python
License: none
Last pushed: Aug 14, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/wasidennis/AdaptSegNet"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
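The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the shape of the JSON response is not documented here, so it is decoded to a plain dict:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality report."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON report (anonymous tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (performs a network call):
# report = fetch_quality("ml-frameworks", "wasidennis", "AdaptSegNet")
```

With a free API key the daily limit rises to 1,000 requests; how the key is passed (header or query parameter) is not specified on this page.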