mchancan/deepseqslam

The Official Deep Learning Framework for Robot Place Learning

Score: 40 / 100 (Emerging)

This tool helps robots recognize places and navigate routes accurately, even when environmental conditions such as lighting change significantly. It takes in visual data (images) and corresponding positional information from a robot's journey, and outputs an estimate of the robot's position along a previously traversed route. It is designed for robotics researchers and engineers working on autonomous vehicles and simultaneous localization and mapping (SLAM).

No commits in the last 6 months.

Use this if you need robots to reliably recognize locations over time, especially in dynamic environments with varied lighting or weather conditions.

Not ideal if your application does not involve sequential visual and positional data for robot navigation or if you are not comfortable with deep learning frameworks.

robotics autonomous-navigation place-recognition SLAM visual-localization
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25 · Adoption 9 / 25 · Maturity 16 / 25 · Community 15 / 25
(The four components sum to the overall score: 0 + 9 + 16 + 15 = 40 / 100.)


Stars: 96
Forks: 13
Language: Jupyter Notebook
License: GPL-3.0
Last pushed: Dec 18, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mchancan/deepseqslam"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
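For scripted access, the curl endpoint above can also be called from Python. This is a minimal sketch using only the standard library; the URL structure follows the curl example, but the JSON field names in the response are not documented here and are therefore not assumed.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-score API URL for a category and owner/name slug."""
    return f"{BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body.

    No API key is required for up to 100 requests/day; a free key
    raises the limit to 1,000/day.
    """
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

# Example (not executed here, to stay within the anonymous rate limit):
# data = fetch_quality("ml-frameworks", "mchancan/deepseqslam")
```

The helper only constructs the same URL shown in the curl example; inspect the live response to learn the actual payload schema before relying on specific fields.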