w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open-source perception architecture for autonomous vehicles and robotics.
This project helps robotics engineers and autonomous vehicle developers turn raw sensor data into an understanding of the environment. It fuses inputs from LiDAR, cameras, radar, and IMUs to build detailed 3D maps, detect objects, and track their movement in real time, producing a comprehensive environmental perception system for navigation and decision-making.
723 stars. No commits in the last 6 months.
Use this if you need a robust, real-time solution for 3D mapping, object detection, and tracking in autonomous systems or robotics.
Not ideal if you only need a simple pre-trained model for basic image recognition, or if your system has no multi-sensor fusion requirements.
Stars: 723
Forks: 153
Language: C++
License: Apache-2.0
Category: computer-vision
Last pushed: Jan 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/w111liang222/lidar-slam-detection"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
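For scripted access, the same endpoint can be called from Python. A minimal sketch, assuming only the URL pattern visible in the curl example above (`/api/v1/quality/<category>/<owner>/<repo>`) and a JSON response; the `X-Api-Key` header name used for keyed access is an assumption, not documented here:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch repo quality data as a dict (response schema assumed to be JSON)."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        # Header name is hypothetical; check the API docs for the real one.
        req.add_header("X-Api-Key", api_key)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same request as the curl example above, for this repository.
    print(quality_url("computer-vision", "w111liang222", "lidar-slam-detection"))
```

This mirrors the curl call one-to-one, so any field names in the returned dict should match whatever the raw curl output shows.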
Related tools
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...
fundamentalvision/BEVFormer
[ECCV 2022] This is the official implementation of BEVFormer, a camera-only framework for...