jac99/MinkLocMultimodal

MinkLoc++: Lidar and Monocular Image Fusion for Place Recognition

Score: 36 / 100 (Emerging)

This project helps autonomous vehicles and robots understand where they are by combining information from LiDAR scans and camera images. It takes raw 3D point clouds from LiDAR and corresponding RGB camera images as input, processes them, and outputs a unique descriptor for a specific location. This descriptor can then be used by roboticists or autonomous vehicle engineers for tasks like recognizing previously visited places, re-localizing the vehicle, or achieving loop closure.
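To make the descriptor's role concrete, here is an illustrative sketch (not the repo's actual API) of how such a global descriptor is typically used for place recognition: loop closure reduces to a nearest-neighbour search over previously stored descriptors. The function name, the 256-dimensional descriptor size, and the similarity threshold are assumptions for the example, not values taken from MinkLoc++.

```python
import numpy as np

def match_place(query: np.ndarray, database: np.ndarray, threshold: float = 0.8):
    """Return (best_index, similarity) if the query descriptor matches a
    stored place above `threshold`, else (None, similarity)."""
    # Cosine similarity between the query and every stored descriptor.
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q
    best = int(np.argmax(sims))
    sim = float(sims[best])
    return (best, sim) if sim >= threshold else (None, sim)

# Toy usage: three stored places; the query is identical to place 1,
# so it should match itself with similarity 1.0.
rng = np.random.default_rng(0)
db = rng.normal(size=(3, 256))
idx, sim = match_place(db[1], db)
print(idx, round(sim, 3))
```

In practice the database holds one descriptor per previously visited location, and a match above the threshold triggers re-localization or loop closure.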

113 stars. No commits in the last 6 months.

Use this if you are developing navigation systems for robots or autonomous vehicles and need a robust way to identify locations using both 3D perception and visual data.

Not ideal if your application relies solely on single-modality data (only LiDAR or only camera images) or if you are working with static, non-navigational scene understanding.

autonomous-vehicles robotics localization place-recognition sensor-fusion
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 113
Forks: 10
Language: Python
License: MIT
Last pushed: Jan 28, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/jac99/MinkLocMultimodal"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
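The same data can be fetched programmatically. A minimal Python sketch, using only the standard library, is shown below; note that the response field names (`score`, `stars`) are assumptions based on the figures displayed on this page, not a documented schema.

```python
import json
from urllib.request import urlopen

URL = "https://pt-edge.onrender.com/api/v1/quality/computer-vision/jac99/MinkLocMultimodal"

def quality_score(raw: str) -> int:
    """Parse the API response and return the overall quality score."""
    data = json.loads(raw)
    return data["score"]  # assumed field name

# Live call (counts against the 100 requests/day limit):
# print(quality_score(urlopen(URL).read().decode()))

# Offline illustration with an assumed response shape:
sample = '{"score": 36, "stars": 113}'
print(quality_score(sample))
```

Requesting a free key raises the limit to 1,000 requests/day, as noted above.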