mchancan/deepseqslam
The Official Deep Learning Framework for Robot Place Learning
This tool helps robots recognize places and navigate routes accurately, even when environmental conditions such as lighting change significantly. It takes visual data (images) and corresponding positional information from a robot's journey, and outputs an estimate of the robot's position along a previously traversed route. It is aimed at robotics researchers and engineers working on autonomous vehicles and simultaneous localization and mapping (SLAM).
No commits in the last 6 months.
Use this if you need robots to reliably recognize locations over time, especially in dynamic environments with varied lighting or weather conditions.
Not ideal if your application does not involve sequential visual and positional data for robot navigation or if you are not comfortable with deep learning frameworks.
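DeepSeqSLAM combines learned image descriptors with sequential matching along a traversed route. The repository's actual API is not shown here, but the core idea of sequence-based place matching (in the spirit of classic SeqSLAM) can be sketched with NumPy. All names below are illustrative assumptions, not the framework's interface:

```python
import numpy as np

def sequence_match(ref_desc, query_seq, seq_len=3):
    """Toy sequence-based place matching (SeqSLAM-style sketch).

    ref_desc:  (N, D) descriptors for N reference frames along a route.
    query_seq: (seq_len, D) descriptors for the most recent frames.
    Returns the reference index whose window of seq_len consecutive
    frames best matches the query sequence (lowest summed L2 distance).
    """
    n = ref_desc.shape[0]
    best_idx, best_cost = -1, np.inf
    for start in range(n - seq_len + 1):
        window = ref_desc[start:start + seq_len]
        cost = np.linalg.norm(window - query_seq, axis=1).sum()
        if cost < best_cost:
            best_idx, best_cost = start, cost
    return best_idx

# Synthetic route: 10 reference frames with 4-D descriptors.
rng = np.random.default_rng(0)
ref = rng.normal(size=(10, 4))
# Query = frames 5..7 of the route, lightly perturbed
# (standing in for appearance change such as different lighting).
query = ref[5:8] + 0.05 * rng.normal(size=(3, 4))
print(sequence_match(ref, query))  # -> 5
```

Matching over a window of frames, rather than a single image, is what makes this family of methods robust to per-frame appearance changes; DeepSeqSLAM's contribution is learning the descriptors and the sequential model jointly rather than using hand-crafted ones.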
Stars: 96
Forks: 13
Language: Jupyter Notebook
License: GPL-3.0
Category:
Last pushed: Dec 18, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mchancan/deepseqslam"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
AndreiBarsan/DynSLAM
Master's Thesis on Simultaneous Localization and Mapping in dynamic environments. Separately...
gradslam/gradslam
gradslam is an open source differentiable dense SLAM library for PyTorch
jbwang1997/OPUS
OPUS: Occupancy Prediction Using a Sparse Set
ai4ce/DiscoNet
[NeurIPS2021] Learning Distilled Collaboration Graph for Multi-Agent Perception
KwanWaiPang/Awesome-Transformer-based-SLAM
Paper Survey for Transformer-based SLAM