cvg/nice-slam
[CVPR'22] NICE-SLAM: Neural Implicit Scalable Encoding for SLAM
This project helps robotics engineers and researchers build detailed 3D maps of indoor environments from a moving camera. It takes a sequence of color and depth (RGB-D) frames captured by a camera moving through a space and outputs a dense 3D mesh of the scene along with the camera's estimated trajectory. It is well suited to autonomous robots and augmented-reality applications that need to understand and navigate physical spaces.
1,569 stars. No commits in the last 6 months.
Use this if you need to generate highly accurate 3D maps and track camera movement simultaneously for indoor environments from RGB-D video input.
Not ideal if you are working with outdoor scenes, require real-time mapping on constrained hardware, or only have standard RGB video without depth information.
Stars: 1,569
Forks: 209
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 10, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/cvg/nice-slam"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
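For scripted access, the same endpoint can be called from Python using only the standard library. This is a minimal sketch: the URL pattern is taken from the curl example above, but the `quality_url` helper name and the assumption that the response body is JSON are ours, not part of the documented API.

```python
# Minimal client sketch for the quality endpoint shown above.
# Assumption: the response body is JSON (field names not documented here).
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository's quality record."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode a quality record (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl command above.
    print(quality_url("computer-vision", "cvg", "nice-slam"))
```

Swapping in a different `category/owner/repo` triple retrieves the record for any other listed project.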
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open source perception architecture for autonomous vehicle/robotic
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...