brandesjj/centerfusionpp
CenterFusion++ is a frustum proposal-based camera and radar sensor fusion network.
This project helps self-driving car engineers improve how accurately autonomous vehicles detect surrounding objects. It fuses data from cameras and radar sensors to identify objects such as cars, pedestrians, and cyclists, even in challenging conditions like rain, fog, or darkness. Given raw camera images and radar point clouds as input, it produces the locations and velocities of detected objects, enhancing the vehicle's perception system.
No commits in the last 6 months.
Use this if you need to develop or enhance an autonomous driving perception system that robustly identifies objects using both camera and radar data.
Not ideal if your application doesn't involve self-driving vehicles or if you only work with a single sensor type (e.g., only camera or only LiDAR).
Stars: 74
Forks: 7
Language: Python
License: MIT
Category: Computer Vision
Last pushed: Oct 10, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/brandesjj/centerfusionpp"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open source perception architecture for autonomous vehicles/robots
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...