fundamentalvision/BEVFormer

[ECCV 2022] This is the official implementation of BEVFormer, a camera-only framework for autonomous driving perception, e.g., 3D object detection and semantic map segmentation.

Quality score: 49 / 100 (Emerging)

This project helps autonomous driving engineers process raw camera images from a vehicle to understand its surroundings. It takes a continuous stream of multi-camera video footage as input and outputs a clear, unified bird's-eye-view map. This map highlights important information like the location and type of other vehicles and pedestrians, and a segmentation of the road and obstacles, making it easier to develop safer and more robust self-driving systems.

4,356 stars. No commits in the last 6 months.

Use this if you are developing perception systems for autonomous vehicles and need to accurately detect 3D objects and segment semantic maps using only camera data, aiming for performance comparable to LiDAR-based systems.

Not ideal if your autonomous driving system relies primarily on LiDAR or radar for environmental perception, or if you need a solution for static image analysis rather than continuous video streams.

autonomous-driving vehicle-perception 3d-object-detection semantic-segmentation camera-vision
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 23 / 25


Stars: 4,356
Forks: 709
Language: Python
License: Apache-2.0
Last pushed: Aug 15, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/fundamentalvision/BEVFormer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
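The same endpoint can be called from Python. The sketch below only builds the request URL following the `/api/v1/quality/{category}/{owner}/{repo}` path shown in the curl example above; the `quality_url` helper and the idea of URL-escaping each path segment are illustrative assumptions, not part of the official API client.

```python
# Sketch: construct the quality-API URL for any repository.
# Only the path layout is taken from the curl example above;
# the helper itself is a hypothetical convenience function.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API URL for one repository's quality data."""
    # quote() escapes any characters that are unsafe in a URL path segment.
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("computer-vision", "fundamentalvision", "BEVFormer")
print(url)
# The URL can then be fetched with urllib.request.urlopen(url) or any
# HTTP client; the response schema is not documented here, so inspect
# the returned JSON before relying on specific fields.
```

Within the free tier, no authentication header is needed for up to 100 requests per day.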