xcyan/nips16_PTN
Torch Implementation of NIPS'16 paper: Perspective Transformer Nets
This project reconstructs a 3D volumetric shape from a single 2D image: you input a standard 2D photo of an object, and it outputs a voxel representation of that object, learned without requiring full 3D supervision. It is aimed at computer vision researchers and 3D modelers working on computer graphics, robotics, or augmented reality applications.
143 stars. No commits in the last 6 months.
Use this if you need to generate 3D models from single 2D photos, especially when full 3D supervision data is scarce.
Not ideal if you want a plug-and-play solution: using it requires familiarity with deep learning frameworks such as Torch (Lua) and hands-on dataset preparation.
Stars: 143
Forks: 31
Language: Lua
License: MIT
Last pushed: Nov 01, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/xcyan/nips16_PTN"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
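The endpoint presumably returns the listing's stats as JSON. A minimal Python sketch for parsing such a response follows; the field names (`stars`, `forks`, `commits_30d`) are assumptions for illustration, not the documented API schema, and the sample payload simply mirrors the numbers shown on this page.

```python
import json

# Hypothetical response body -- the actual schema of the pt-edge API
# is not documented here, so these field names are assumptions.
sample = '{"repo": "xcyan/nips16_PTN", "stars": 143, "forks": 31, "commits_30d": 0}'

data = json.loads(sample)
print(data["stars"])   # 143
print(data["forks"])   # 31
```

In practice you would feed the body of the `curl` call above into `json.loads` (or use `requests.get(...).json()`) instead of the inline sample string.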
Higher-rated alternatives
openspyrit/spyrit
A Python toolbox for deep image reconstruction, with emphasis on single-pixel imaging.
RobotLocomotion/pytorch-dense-correspondence
Code for "Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation"
Fyusion/LLFF
Code release for Local Light Field Fusion at SIGGRAPH 2019
pmh47/dirt
DIRT: a fast differentiable renderer for TensorFlow
marrlab/SHAPR_torch
SHAPR: Code for "Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D...