mingyuyng/Visual-Selective-VIO

Code for "Efficient Deep Visual and Inertial Odometry with Adaptive Visual Modality Selection", ECCV 2022

Score: 37 / 100 (Emerging)

This project helps self-driving car engineers and robotics researchers accurately determine a vehicle's position and orientation. It takes raw camera images and Inertial Measurement Unit (IMU) data (accelerations and angular rates) as input. The output is a precise estimation of the vehicle's trajectory, allowing for better navigation and mapping.
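The input/output relationship above (IMU accelerations and angular rates in, a trajectory out) can be illustrated with a toy planar dead-reckoning loop. This is a minimal sketch of classical inertial integration only; it is not the repository's learned VIO model, and all names here are hypothetical:

```python
import math

def integrate_imu(accels, gyro_rates, dt, v0=(0.0, 0.0)):
    """Toy planar dead reckoning: integrate angular rate into heading,
    then body-frame acceleration into velocity and position.
    Illustrates 'IMU samples in, trajectory out' only; the actual repo
    fuses these measurements with camera images in a neural network."""
    x, y, heading = 0.0, 0.0, 0.0
    vx, vy = v0
    traj = [(x, y)]
    for a, w in zip(accels, gyro_rates):
        heading += w * dt
        # rotate the (scalar, forward-axis) acceleration into the world frame
        vx += a * math.cos(heading) * dt
        vy += a * math.sin(heading) * dt
        x += vx * dt
        y += vy * dt
        traj.append((x, y))
    return traj
```

With zero acceleration and zero angular rate, an initial velocity of (1, 0) m/s integrated over ten 0.1 s steps lands the trajectory at roughly (1, 0), as expected. Pure IMU integration drifts quickly in practice, which is exactly why the visual modality matters.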

147 stars. No commits in the last 6 months.

Use this if you need to track the precise movement of autonomous vehicles or robots using both visual and inertial sensor data, especially in scenarios where computational efficiency is critical.

Not a good fit if your application doesn't involve autonomous navigation, or if you need real-time processing on highly resource-constrained devices without a dedicated GPU.

autonomous-driving robotics vehicle-localization sensor-fusion motion-tracking
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 19 / 25

How are scores calculated?
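The four 25-point subscores listed above add up to the headline score (0 + 10 + 8 + 19 = 37). A minimal sketch of that tally, assuming the total is simply the unweighted sum of the four categories (the service's actual weighting is not documented here):

```python
def total_score(maintenance, adoption, maturity, community):
    """Combine four 25-point subscores into a 0-100 total.
    Assumes a plain sum; the scoring service may weight differently."""
    for s in (maintenance, adoption, maturity, community):
        if not 0 <= s <= 25:
            raise ValueError("each subscore must be in 0..25")
    return maintenance + adoption + maturity + community

# The subscores shown on this page:
print(total_score(0, 10, 8, 19))  # 37
```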

Stars: 147
Forks: 25
Language: Python
License: None
Last pushed: Oct 19, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/mingyuyng/Visual-Selective-VIO"
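The same request in Python using only the standard library. The endpoint path is copied from the curl example above; the shape of the JSON response is not documented here, so inspect what comes back:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Path layout taken from the curl example shown above.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    # Performs a live request; the response fields are undocumented
    # here, so print the returned dict to see what is available.
    with urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)
```

For example, `fetch_quality("computer-vision", "mingyuyng", "Visual-Selective-VIO")` hits the same URL as the curl command.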

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.