Matnay/Sensor_Fusion_Object_Detection_KPIT
Fusion of LiDAR and depth camera data with deep learning for object detection and classification
This project helps autonomous vehicle engineers and researchers build a robust understanding of the vehicle's environment. It takes raw data from LiDAR, radar, and monocular cameras, processes it to detect and classify objects, and outputs these detections for use in real-time navigation and perception systems.
No commits in the last 6 months.
Use this if you are developing or researching autonomous driving systems and need to combine data from multiple sensors (LiDAR, radar, camera) for reliable object detection.
Not ideal if you need a general-purpose object detection tool unrelated to autonomous driving, or if you don't use the Robot Operating System (ROS).
Stars
39
Forks
8
Language
C++
License
—
Category
Last pushed
Apr 02, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/Matnay/Sensor_Fusion_Object_Detection_KPIT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
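The same endpoint can be queried from Python with the standard library. This is a minimal sketch: the `quality_url` helper and the `category/owner/repo` path structure are assumptions inferred from the single example URL above, and the response schema is not documented here, so the fetch just prints the raw body.

```python
from urllib.request import urlopen

# Base path inferred from the example curl command above (an assumption,
# not documented API structure).
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL (hypothetical helper)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "Matnay", "Sensor_Fusion_Object_Detection_KPIT")
print(url)

# Fetching is a plain HTTP GET, equivalent to the curl command; uncomment
# to call the live API (no key needed up to 100 requests/day):
# with urlopen(url, timeout=10) as resp:
#     print(resp.read().decode())
```

The network call is left commented out so the snippet runs offline; swap in any `owner/repo` pair to target a different repository.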
Higher-rated alternatives
roboflow/rf-detr
[ICLR 2026] RF-DETR is a real-time object detection and segmentation model architecture...
stereolabs/zed-sdk
⚡️The spatial perception framework for rapidly building smart robots and spaces
mikel-brostrom/boxmot
BoxMOT: Pluggable SOTA multi-object tracking modules with support for axis-aligned and oriented...
RizwanMunawar/yolov7-object-tracking
YOLOv7 Object Tracking Using PyTorch, OpenCV and Sort Tracking
google-deepmind/tapnet
Tracking Any Point (TAP)