ValerioSpagnoli/Monocular-Visual-Inertial-MSCKF
Multi-State Constraint Kalman Filter for Monocular Visual-Inertial Navigation.
This project estimates, in real time, the position and orientation of robots, drones, or augmented-reality systems, especially in environments where GPS is unavailable or unreliable. It takes a stream of images from a single camera plus simulated motion data from an Inertial Measurement Unit (IMU) as input, and outputs an accurate, computationally efficient estimate of the system's pose and trajectory for navigation and spatial awareness. Robotics engineers and AR/VR developers are the primary audience.
Use this if you need precise, real-time tracking of a system's 3D pose using only a single camera and IMU, particularly in resource-constrained environments.
Not ideal if your application requires global mapping or long-term drift-free localization without any form of loop closure or external corrections.
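The core idea of the MSCKF is to keep a small IMU state plus a sliding window of past camera poses: IMU readings propagate the state at high rate, each camera frame clones the current pose into the window, and old poses are marginalized so the cost stays bounded. A minimal sketch of that loop (a toy 1-D illustration, not the repository's code; all names here are invented for clarity):

```python
from dataclasses import dataclass, field

@dataclass
class MsckfState:
    # IMU state: position and velocity (toy 1-D example for clarity)
    p: float = 0.0
    v: float = 0.0
    # Sliding window of cloned camera poses used by the
    # multi-state constraint update; bounded to keep cost constant
    window: list = field(default_factory=list)
    max_window: int = 5

    def propagate(self, accel: float, dt: float) -> None:
        """Integrate one IMU accelerometer reading (toy kinematics)."""
        self.p += self.v * dt + 0.5 * accel * dt * dt
        self.v += accel * dt

    def augment(self) -> None:
        """Clone the current pose into the window when an image arrives."""
        self.window.append(self.p)
        if len(self.window) > self.max_window:
            self.window.pop(0)  # marginalize the oldest pose

state = MsckfState()
for k in range(10):
    state.propagate(accel=0.1, dt=0.01)  # IMU runs faster than the camera
    if k % 2 == 0:                       # camera frame every other step
        state.augment()

print(len(state.window))  # window never exceeds max_window
```

The real filter additionally tracks orientation, biases, and covariance, and applies an EKF update from features observed across the window; this sketch only shows the propagate/augment/marginalize structure.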
Stars: 13
Forks: —
Language: Python
License: GPL-3.0
Category: Computer Vision
Last pushed: Oct 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/ValerioSpagnoli/Monocular-Visual-Inertial-MSCKF"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
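The endpoint follows a predictable `/quality/<category>/<owner>/<repo>` pattern, so the same data can be fetched for any listed repo from Python. A small sketch (the helper name is my own; only the URL pattern comes from the curl example above):

```python
# Hypothetical helper: build the quality-API URL for a repo,
# matching the path pattern shown in the curl example.
def quality_url(category: str, owner: str, repo: str) -> str:
    base = "https://pt-edge.onrender.com/api/v1/quality"
    return f"{base}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "ValerioSpagnoli",
                  "Monocular-Visual-Inertial-MSCKF")
print(url)

# To actually fetch, the standard library suffices (no key needed
# within the free tier):
#   import urllib.request
#   data = urllib.request.urlopen(url).read()
```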
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open source perception architecture for autonomous vehicles and robots
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...