NVlabs/FB-BEV
Official PyTorch implementation of FB-BEV & FB-OCC - Forward-backward view transformation for vision-centric autonomous driving perception
FB-BEV and FB-OCC help autonomous driving engineers build a precise picture of the environment around a self-driving vehicle using cameras alone. The models take raw multi-camera video streams from the vehicle and output 3D object detections and a detailed map of occupied space in the vehicle's surroundings. The code targets engineers developing and testing autonomous driving perception systems.
783 stars. No commits in the last 6 months.
Use this if you need to enhance your autonomous driving system's ability to detect objects and predict occupied spaces from vision data alone, especially for robust real-world performance.
Not ideal if your autonomous driving system relies primarily on LiDAR or radar for environmental perception and you are not integrating camera-based 3D scene understanding.
Stars: 783
Forks: 69
Language: Python
License: —
Category: —
Last pushed: Mar 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/NVlabs/FB-BEV"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open source perception architecture for autonomous vehicles and robotics.
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...