andyzeng/tsdf-fusion
Fuse multiple depth frames into a TSDF voxel volume.
This tool helps researchers and engineers build detailed 3D models of real-world objects or environments. Given multiple depth sensor readings (e.g., from a Kinect camera), it integrates each frame into a truncated signed distance function (TSDF) voxel volume on the GPU and outputs a 3D surface mesh or point cloud. It is aimed at practitioners working on 3D reconstruction, robotics, or augmented reality applications.
814 stars. Last pushed May 2019; no recent commits.
Use this if you need to combine several depth camera views to reconstruct a single, high-quality 3D shape or scene.
Not ideal if you don't have access to an NVIDIA GPU, or if you only have a single depth map and don't need to fuse multiple views.
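To illustrate what the fusion step does, here is a minimal NumPy sketch of TSDF integration, the technique this repo implements in CUDA. The intrinsics, volume bounds, and truncation distance below are illustrative assumptions, not values taken from the repository.

```python
# Minimal sketch of TSDF fusion (the general technique; this repo's actual
# implementation is a CUDA kernel). All parameters here are illustrative.
import numpy as np

def integrate(tsdf, weight, depth, K, cam_pose, vol_origin, voxel_size, trunc):
    """Fuse one depth frame into the TSDF volume via a weighted running average."""
    nx, ny, nz = tsdf.shape
    # World coordinates of every voxel center.
    ix, iy, iz = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    pts = vol_origin + voxel_size * np.stack([ix, iy, iz], -1).reshape(-1, 3)
    # Transform voxel centers into the camera frame, then project with the
    # pinhole intrinsics K to get pixel coordinates (u, v).
    world2cam = np.linalg.inv(cam_pose)
    cam = (world2cam[:3, :3] @ pts.T).T + world2cam[:3, 3]
    z = cam[:, 2]
    u = np.round(K[0, 0] * cam[:, 0] / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * cam[:, 1] / z + K[1, 2]).astype(int)
    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    # Signed distance along the viewing ray, truncated to [-1, 1]
    # (positive in front of the observed surface, negative behind it).
    sdf = d - z
    valid &= (d > 0) & (sdf >= -trunc)
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running average: each new frame contributes weight 1.
    t = tsdf.reshape(-1)
    w_ = weight.reshape(-1)
    t[valid] = (t[valid] * w_[valid] + tsdf_new[valid]) / (w_[valid] + 1)
    w_[valid] += 1
    return t.reshape(tsdf.shape), w_.reshape(tsdf.shape)
```

Calling `integrate` once per depth frame accumulates all views into the volume; a mesh or point cloud is then extracted from the zero-crossing of the fused TSDF (e.g., with marching cubes), which is what the repo's extraction step does on the GPU volume.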
Stars: 814
Forks: 136
Language: Cuda
License: BSD-2-Clause
Category:
Last pushed: May 07, 2019
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/andyzeng/tsdf-fusion"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
3DOM-FBK/deep-image-matching
Multiview matching with deep-learning and hand-crafted local features for COLMAP and other SfM...
suhangpro/mvcnn
Multi-view CNN (MVCNN) for shape recognition
zouchuhang/LayoutNet
Torch implementation of our CVPR 18 paper: "LayoutNet: Reconstructing the 3D Room Layout from a...
andyzeng/tsdf-fusion-python
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
google/stereo-magnification
Code accompanying the SIGGRAPH 2018 paper "Stereo Magnification: Learning View Synthesis using...