changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
This roadmap helps aspiring Visual-SLAM engineers and researchers understand the field. It takes you from foundational concepts such as programming and mathematics through the various types of SLAM systems that use cameras and other sensors to build real-time 3D maps and localize devices. Anyone looking to specialize in robotics, autonomous navigation, or augmented reality development will find this guide useful.
Use this if you are a beginner interested in becoming a Visual-SLAM developer and need a structured guide on where to start and what topics to master.
Not ideal if you are looking for a practical, hands-on coding tutorial for a specific Visual-SLAM library or a deep dive into advanced research topics without foundational knowledge.
Stars
1,635
Forks
154
Language
—
License
MIT
Category
Last pushed
Feb 25, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/changh95/visual-slam-roadmap"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
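The curl command above can be wrapped in a few lines of code. A minimal sketch in Python, assuming only the endpoint URL shown on this page; the response schema and the `X-API-Key` header name are guesses, not documented here:

```python
# Hedged sketch: query the quality endpoint shown above.
# Only the base URL and path structure come from this page; the
# API-key header name and response fields are assumptions.
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(category: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repo in a given category."""
    return f"{BASE_URL}/{category}/{repo}"


def fetch_quality(category: str, repo: str, api_key: str = "") -> dict:
    """Fetch quality data as a dict; pass api_key for the 1,000/day limit.

    The 'X-API-Key' header name is hypothetical; check the API docs.
    """
    req = urllib.request.Request(build_quality_url(category, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, `build_quality_url("computer-vision", "changh95/visual-slam-roadmap")` reproduces the exact URL used in the curl command above.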
Related tools
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open-source perception architecture for autonomous vehicles and robots.
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...
fundamentalvision/BEVFormer
[ECCV 2022] This is the official implementation of BEVFormer, a camera-only framework for...