bradyz/cross_view_transformers

Cross-view Transformers for real-time Map-view Semantic Segmentation (CVPR 2022 Oral)

Score: 46 / 100 (Emerging)

This project helps self-driving-car developers and researchers convert multi-camera footage from a vehicle into a detailed, bird's-eye-view semantic map in real time. It takes images from several vehicle-mounted cameras, together with each camera's calibration, as input and produces a segmented map showing elements such as roads, lanes, and pedestrians. It is aimed at those working on autonomous navigation, perception, and environmental understanding for self-driving vehicles.
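
As a rough illustration of the expected inputs and outputs, here is a minimal sketch; the tensor shapes and the commented-out model call are hypothetical placeholders, not the repository's actual API (see the repo's configs for the real interface):

import torch

# Hypothetical setup: 6 surround-view cameras, each providing an RGB image
# plus calibration (intrinsics K, camera-to-vehicle extrinsics E).
batch, n_cams = 1, 6
images     = torch.rand(batch, n_cams, 3, 224, 480)  # per-camera RGB images
intrinsics = torch.rand(batch, n_cams, 3, 3)         # intrinsics K per camera
extrinsics = torch.rand(batch, n_cams, 4, 4)         # extrinsics E per camera

# bev_logits = model(images, intrinsics, extrinsics)  # hypothetical call; the output would be
#                                                     # a map-view grid, e.g. (batch, n_classes, H_bev, W_bev)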

573 stars. No commits in the last 6 months.

Use this if you need to generate highly accurate, real-time semantic maps from vehicle camera inputs for autonomous driving applications.

Not ideal if you are looking for a general-purpose image segmentation tool or if your primary input is not multi-view vehicle camera data.

autonomous-driving robotics real-time-mapping semantic-segmentation vehicle-perception
Stale (6 months) · No package · No dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25

Stars: 573
Forks: 83
Language: Python
License: MIT
Last pushed: Nov 06, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bradyz/cross_view_transformers"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
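
For programmatic use, a minimal Python sketch calling the same endpoint with the requests library; no key is assumed, and the raw JSON is printed as-is since the response schema is not documented here:

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/bradyz/cross_view_transformers")

# Anonymous access is limited to 100 requests/day per the note above.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# Print the raw JSON payload; specific field names are not assumed here.
print(resp.json())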