silviutroscot/CodeSLAM
Implementation of the paper "CodeSLAM — Learning a Compact, Optimisable Representation for Dense Visual SLAM" (https://arxiv.org/pdf/1804.00874.pdf)
This project helps roboticists and autonomous-system developers build real-time visual navigation systems. It takes a sequence of monocular camera images as input and processes them to produce a continuously updated map of the environment together with an estimate of the camera's pose within it. The primary users are researchers and engineers working on mobile robots, drones, or any application requiring simultaneous localization and mapping from a single camera.
Use this if you need to determine a camera's position and build a 3D map of its surroundings using only a single video stream, especially for applications like robot navigation or augmented reality.
Not ideal if you require an absolute scale for your map without additional sensor input, or if your application cannot tolerate occasional drift inherent in monocular vision systems.
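The "compact, optimisable representation" in the paper's title is a small latent code that a learned decoder network expands into a dense depth map; at runtime, SLAM optimises these codes (jointly with camera poses) instead of raw per-pixel depths. Below is a minimal toy sketch of that idea, using a fixed linear map as a stand-in for the learned decoder; all names, sizes, and the decoder itself are illustrative assumptions, not taken from this repository:

```python
import numpy as np

rng = np.random.default_rng(0)

CODE_SIZE = 8        # compact latent code (far smaller than the depth map)
DEPTH_PIXELS = 64    # flattened toy "depth map"

# Stand-in for the learned decoder: a fixed linear map, for illustration only.
W = rng.standard_normal((DEPTH_PIXELS, CODE_SIZE))

def decode(code):
    """Hypothetical decoder: compact code -> dense depth map."""
    return W @ code

# A ground-truth code and the depth observation it produces.
true_code = rng.standard_normal(CODE_SIZE)
observed_depth = decode(true_code)

# Recover the code by gradient descent on the reconstruction residual,
# mirroring how CodeSLAM optimises codes against observations at runtime.
code = np.zeros(CODE_SIZE)
lr = 0.005
for _ in range(500):
    residual = decode(code) - observed_depth
    grad = W.T @ residual          # gradient of 0.5 * ||residual||^2
    code -= lr * grad

print(np.allclose(code, true_code, atol=1e-3))
```

The point of the sketch is the dimensionality argument: optimising an 8-dimensional code is far cheaper and better conditioned than optimising 64 (or, in practice, hundreds of thousands of) depth values directly, which is what makes dense depth tractable inside a SLAM optimisation loop.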
Stars: 208
Forks: 24
Language: Python
License: —
Category: —
Last pushed: Mar 26, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/silviutroscot/CodeSLAM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
alicevision/AliceVision
3D Computer Vision Framework
colmap/colmap
COLMAP - Structure-from-Motion and Multi-View Stereo
ANTsX/ANTs
Advanced Normalization Tools (ANTs)
alicevision/Meshroom
Node-based Visual Programming Toolbox
MOLAorg/mola
A Modular Optimization framework for Localization and mApping (MOLA)