mingyuyng/Visual-Selective-VIO
Code for "Efficient Deep Visual and Inertial Odometry with Adaptive Visual Modality Selection", ECCV 2022
This project helps self-driving car engineers and robotics researchers accurately determine a vehicle's position and orientation. It takes raw camera images and Inertial Measurement Unit (IMU) data (accelerations and angular rates) as input. The output is a precise estimation of the vehicle's trajectory, allowing for better navigation and mapping.
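The inertial input is 6-DoF IMU data: 3-axis acceleration plus 3-axis angular rate. To illustrate why fusing a visual modality matters, here is a minimal pure-IMU dead-reckoning sketch. This is not the repository's model; the function name, the constant-orientation assumption, and the gravity-removed assumption are ours for illustration, and the unbounded drift of this naive integration is exactly what visual updates correct.

```python
# Minimal IMU dead-reckoning sketch (illustration only, NOT this repo's model).
# Assumes body frame == world frame (no rotation) and gravity already removed.

def dead_reckon(accels, dt):
    """Integrate 3-axis accelerations (m/s^2) into positions (m).

    accels: iterable of (ax, ay, az) samples at a fixed rate of 1/dt Hz.
    Returns a list of (x, y, z) positions, one per sample.
    """
    vel = [0.0, 0.0, 0.0]
    pos = [0.0, 0.0, 0.0]
    trajectory = []
    for a in accels:
        for i in range(3):
            vel[i] += a[i] * dt    # v_k = v_{k-1} + a_k * dt
            pos[i] += vel[i] * dt  # semi-implicit Euler position update
        trajectory.append(tuple(pos))
    return trajectory

if __name__ == "__main__":
    # 1 s of constant 1 m/s^2 forward acceleration sampled at 100 Hz
    traj = dead_reckon([(1.0, 0.0, 0.0)] * 100, dt=0.01)
    print(traj[-1])  # roughly (0.5, 0.0, 0.0)
```

Any bias in the accelerometer samples is integrated twice here, so position error grows quadratically with time; a learned visual-inertial model like this repository's bounds that drift using camera frames.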
147 stars. No commits in the last 6 months.
Use this if you need to track the precise movement of autonomous vehicles or robots using both visual and inertial sensor data, especially in scenarios where computational efficiency is critical.
Not ideal if your application doesn't involve autonomous navigation, or if you need real-time processing on highly resource-constrained devices without dedicated GPUs.
Stars: 147
Forks: 25
Language: Python
License: —
Category:
Last pushed: Oct 19, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/mingyuyng/Visual-Selective-VIO"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
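The same endpoint can be called from Python with the standard library. The URL path below comes from the curl example; the response schema is not documented here, so the sketch only decodes it as generic JSON, and the function names are ours.

```python
# Small helper around the pt-edge quality API shown above.
# Endpoint path taken from the curl example; response schema is undocumented
# here, so we decode it as generic JSON. Function names are illustrative.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode one repository's quality record (raises on HTTP errors)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("computer-vision", "mingyuyng", "Visual-Selective-VIO"))
```

Keeping URL construction separate from the network call makes the helper easy to test offline and to reuse for other repositories in the index.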
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open-source perception architecture for autonomous vehicles/robots
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...