cvg/nicer-slam
[3DV'24 Best Paper Honorable Mention] NICER-SLAM: Neural Implicit Scene Encoding for RGB SLAM
This project helps robotics engineers and researchers precisely map and track environments using standard video footage. It takes a sequence of RGB images and outputs accurate 3D geometry of the scene and the camera's movement path within it. This is valuable for developing autonomous navigation, augmented reality, and 3D reconstruction systems.
215 stars. No commits in the last 6 months.
Use this if you need to generate highly accurate 3D models and camera motion paths from video without relying on specialized depth sensors.
Not ideal if you require real-time processing on embedded systems with limited computational resources.
Stars: 215
Forks: 18
Language: Python
License: Apache-2.0
Category: computer-vision
Last pushed: Mar 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/cvg/nicer-slam"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
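The curl call above can be scripted. The sketch below is a minimal Python example that builds the same endpoint URL and fetches it with the standard library; the `X-API-Key` header name and the JSON response schema are assumptions, since neither is documented here.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Compose the quality-endpoint URL shown in the curl example above."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, api_key=None) -> dict:
    """Fetch one repository's quality record as a dict.

    The 'X-API-Key' header name is hypothetical -- check the service's
    docs for the real authentication scheme when using a free key.
    """
    req = urllib.request.Request(build_url(category, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # hypothetical header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Same URL as the curl example; no network access needed to build it.
    print(build_url("computer-vision", "cvg", "nicer-slam"))
```

Keeping URL construction separate from the fetch makes the path easy to verify offline before spending any of the 100-request daily quota.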
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open source perception architecture for autonomous vehicle/robotic
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...