silviutroscot/CodeSLAM

Implementation of the paper "CodeSLAM: Learning a Compact, Optimisable Representation for Dense Visual SLAM" (https://arxiv.org/pdf/1804.00874.pdf).

Score: 47 / 100 (Emerging)

This project helps roboticists and autonomous-system developers build real-time visual navigation systems. It takes a sequence of monocular camera images as input and processes them to produce a continuously updated map of the environment along with an estimate of the camera's pose within it. The primary users are researchers and engineers working on mobile robots, drones, or any application requiring simultaneous localization and mapping from a single camera.


Use this if you need to determine a camera's position and build a 3D map of its surroundings using only a single video stream, especially for applications like robot navigation or augmented reality.

Not ideal if you require an absolute scale for your map without additional sensor input, or if your application cannot tolerate occasional drift inherent in monocular vision systems.

robotics computer-vision autonomous-navigation SLAM 3D-reconstruction
No License · No Package · No Dependents
Maintenance 13 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 16 / 25


Stars: 208
Forks: 24
Language: Python
License: none
Last pushed: Mar 26, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/silviutroscot/CodeSLAM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
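As a sketch, the same endpoint can also be queried from Python using only the standard library. The URL is taken from the curl example above; the structure of the JSON response is not documented here, so the function simply returns the parsed payload as a dict rather than assuming any particular fields.

```python
import json
import urllib.request

# Quality-score API endpoint for this repository (from the curl example above).
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "computer-vision/silviutroscot/CodeSLAM")

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality report and return it as a parsed JSON dict.

    No API key is needed on the free tier (100 requests/day);
    the response schema is whatever the API returns.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

For the higher 1,000 requests/day tier, a free key would be sent with the request as documented by the API provider.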