adityamwagh/pose-estimation-loftr

Pose-estimation pipeline for 3D reconstruction using LoFTR (Local Feature TRansformer), a detector-free feature matcher.

Score: 25 / 100 (Experimental)

This project helps computer vision practitioners determine the relative position and orientation of the cameras that captured two different images of the same scene. It takes a pair of images (such as photos of a building from different angles) and identifies corresponding points between them, even under challenging conditions. The output is a fundamental matrix, which encodes the epipolar geometry relating the two views and from which the relative camera pose can be recovered; this is crucial for tasks like building 3D models or stitching panoramas.
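The geometry behind that output can be sketched with a minimal normalized eight-point estimate of the fundamental matrix. This is an illustrative stand-in, not this repository's code: in the actual pipeline LoFTR supplies the point correspondences, while here synthetic matches from a known camera pair are used, and all names (`eight_point`, `project`) are hypothetical.

```python
import numpy as np

def eight_point(pts1, pts2):
    """Normalized 8-point algorithm: estimate F with x2^T F x1 = 0.

    pts1, pts2: (N, 2) arrays of corresponding pixel coordinates, N >= 8.
    """
    def normalize(pts):
        # Translate to centroid, scale so mean distance is sqrt(2)
        mean = pts.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1]])
        return np.c_[pts, np.ones(len(pts))] @ T.T, T

    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)
    # Each correspondence yields one row of the linear system A f = 0
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization
    return T2.T @ F @ T1

# Synthetic two-view setup (hypothetical intrinsics and motion)
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
theta = 0.1  # small rotation about the y-axis
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([1.0, 0.1, 0.0])

rng = np.random.default_rng(0)
X = np.c_[rng.uniform(-2, 2, (12, 2)), rng.uniform(4, 8, 12)]  # 3D points

def project(Rc, tc, X):
    # Pinhole projection of world points into a camera at (Rc, tc)
    x = (K @ (X @ Rc.T + tc).T).T
    return x[:, :2] / x[:, 2:3]

pts1 = project(np.eye(3), np.zeros(3), X)  # camera 1 at the origin
pts2 = project(R, t, X)                    # camera 2 after motion

F = eight_point(pts1, pts2)
F /= np.linalg.norm(F)  # fix the arbitrary scale

# Epipolar residuals |x2^T F x1| should be near zero on noiseless data
h1 = np.c_[pts1, np.ones(len(pts1))]
h2 = np.c_[pts2, np.ones(len(pts2))]
residuals = np.abs(np.sum(h2 * (h1 @ F.T), axis=1))
print("max epipolar residual:", residuals.max())
```

In the real pipeline the correspondences would come from LoFTR matches rather than projected synthetic points, and a robust estimator (e.g. RANSAC) would replace the plain least-squares solve to cope with mismatches.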

No commits in the last 6 months.

Use this if you need to precisely match features between diverse images to understand camera positions for 3D reconstruction, simultaneous localization and mapping (SLAM), or panoramic stitching.

Not ideal if your primary goal is basic image comparison or object detection rather than detailed geometric understanding of camera pose.

3D Reconstruction · Structure from Motion (SfM) · Computer Vision · Robotics · Photogrammetry
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 11 / 25


Stars: 23
Forks: 3
Language: Jupyter Notebook
License: none
Last pushed: Mar 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/adityamwagh/pose-estimation-loftr"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.