kerrgarr/SemanticSegmentationCityscapes

A simple fully convolutional segmentation model, `my_FCN`, is compared with a conventional U-Net architecture and DeepLabV3+ on a subset of the Cityscapes dataset.

Score: 32 / 100 (Emerging)

This project helps urban planners, autonomous-vehicle engineers, and smart-city developers understand street scenes by segmenting images into meaningful categories such as roads, cars, and pedestrians. It takes raw street-view images and their corresponding pixel-level annotations as input, and outputs a trained model capable of identifying different objects within new urban images. The target user is anyone who needs to analyze or classify elements within urban street photography.

No commits in the last 6 months.

Use this if you need to experiment with and compare different semantic segmentation models for urban street scenes using readily available computational resources.

Not ideal if you need a pre-trained model for immediate deployment without model comparison or if your primary interest is object detection or instance segmentation.

urban-planning autonomous-vehicles smart-cities geographic-information-systems street-scene-analysis
Status: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 12
Forks: 2
Language: Jupyter Notebook
License: GPL-3.0
Last pushed: Dec 04, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kerrgarr/SemanticSegmentationCityscapes"

The API is open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.
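The same data can be fetched from a script. A minimal Python sketch, assuming only the endpoint path shown in the curl example above (the shape of the returned JSON is not documented here, so no field names are assumed):

```python
# Minimal sketch for calling the pt-edge quality API from Python.
# Only the URL pattern from the curl example is assumed; the JSON
# response schema is not documented here, so it is returned as-is.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the report URL for a repository in a given category."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_report(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (network call)."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example: the URL for this repository's report.
url = quality_url("ml-frameworks", "kerrgarr", "SemanticSegmentationCityscapes")
```

Without an API key the anonymous limit of 100 requests/day applies, so cache responses rather than re-fetching in a loop.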