adityamwagh/SuperSLAM
SuperSLAM: Open Source Framework for Deep Learning based Visual SLAM (Work in Progress)
SuperSLAM helps developers integrate state-of-the-art visual Simultaneous Localization and Mapping (SLAM) capabilities into robotics and autonomous-system projects. It takes camera sensor data (monocular or stereo video streams) and outputs the structure of the environment together with the system's pose within it. The framework targets robotics engineers, autonomous-vehicle developers, and researchers working on navigation and mapping.
Use this if you are a developer building autonomous systems that need to accurately map environments and track their position using camera input.
Not ideal if you are looking for an out-of-the-box application for end-users, as this is a foundational framework for developers.
Stars: 155
Forks: 16
Language: C++
License: LGPL-2.1
Category:
Last pushed: Jan 24, 2026
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/adityamwagh/SuperSLAM"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
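The same endpoint can be called programmatically. A minimal Python sketch, using only the URL shown in the curl command above; the response's JSON schema is not documented here, so the sketch simply parses and pretty-prints whatever the API returns:

```python
import json
import urllib.request

# Endpoint taken verbatim from the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/computer-vision/adityamwagh/SuperSLAM"


def fetch_quality(url: str = API_URL) -> dict:
    """Return the parsed JSON payload, or {} if the request or parse fails."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (OSError, ValueError):
        # Network error, timeout, or non-JSON body.
        return {}


if __name__ == "__main__":
    print(json.dumps(fetch_quality(), indent=2))
```

For the keyed 1,000/day tier, consult the API's own documentation for how to pass the key; that mechanism is not specified in this listing.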
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open source perception architecture for autonomous vehicles and robots
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...