jytime/Deep-SfM-Revisited
[CVPR 2021] Deep Two-View Structure-from-Motion Revisited
This project helps computer vision researchers and robotics engineers recover 3D scene structure from image pairs. Given two images of a scene, it outputs essential matrices, depth maps, and relative camera poses, enabling 3D reconstruction and motion estimation. It is well suited to autonomous navigation and 3D scene understanding from image data.
190 stars. No commits in the last 6 months.
Use this if you need to determine 3D structure and camera motion from two calibrated images (essential-matrix estimation assumes known intrinsics), especially for autonomous driving or robotics applications.
Not ideal if you are looking for a plug-and-play solution without expertise in deep learning, PyTorch, or large-scale computer vision datasets like KITTI.
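For intuition about what the project outputs, here is a minimal NumPy sketch (not this repo's code) of how an essential matrix relates a calibrated image pair to relative camera motion via E = [t]_x R; the rotation, translation, and test point below are made-up toy values.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Toy ground-truth motion: small rotation about z, unit translation along x.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])

# Essential matrix built from the motion.
E = skew(t) @ R

# Any corresponding pair of normalized (calibrated) image points satisfies
# the epipolar constraint x2^T E x1 = 0. Check with a synthetic 3D point:
X1 = np.array([0.5, -0.2, 4.0])   # point in camera-1 frame
x1 = X1 / X1[2]                   # normalized image coords, camera 1
X2 = R @ X1 + t                   # same point in camera-2 frame
x2 = X2 / X2[2]                   # normalized image coords, camera 2
residual = abs(x2 @ E @ x1)
print(residual)                   # analytically zero, ~1e-16 in floats
```

Deep-SfM-Revisited learns to predict E (and depth) from image pairs; this constraint is what makes the estimated pose geometrically consistent with point correspondences.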
Stars
190
Forks
13
Language
Python
License
MIT
Category
Computer Vision
Last pushed
Apr 23, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/jytime/Deep-SfM-Revisited"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
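The same endpoint can be queried from Python with the standard library; the URL layout (category, owner, repo) follows the curl example above, while the response schema is not documented here, so the sketch below returns the raw decoded JSON.

```python
import json
import urllib.request

# Base path taken from the curl example above; the response fields are
# an assumption, so we return whatever JSON the API sends back.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode a repository's quality record (no key: 100 req/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("computer-vision", "jytime", "Deep-SfM-Revisited"))
```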
Higher-rated alternatives
3DOM-FBK/deep-image-matching
Multiview matching with deep-learning and hand-crafted local features for COLMAP and other SfM...
suhangpro/mvcnn
Multi-view CNN (MVCNN) for shape recognition
zouchuhang/LayoutNet
Torch implementation of our CVPR 18 paper: "LayoutNet: Reconstructing the 3D Room Layout from a...
andyzeng/tsdf-fusion-python
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
andyzeng/tsdf-fusion
Fuse multiple depth frames into a TSDF voxel volume.