anuragranj/cc
Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation
This project lets computer vision researchers analyze video sequences without human-labeled data. It takes raw video footage from sources such as car-mounted cameras (e.g., the KITTI and Cityscapes datasets) and jointly estimates scene depth, camera motion, per-pixel motion between frames (optical flow), and which regions of the scene are independently moving objects (motion segmentation). Computer vision scientists and engineers working on autonomous vehicles or robotics would find this useful for training and evaluating models.
531 stars. No commits in the last 6 months.
Use this if you need to extract detailed motion and depth information from unlabeled video footage to train or evaluate computer vision models.
Not ideal if you are looking for a pre-packaged solution for a specific application; this is research code that requires technical expertise to set up and run.
Stars
531
Forks
62
Language
Python
License
MIT
Category
Computer Vision
Last pushed
Mar 07, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/anuragranj/cc"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
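The same endpoint can also be queried from Python instead of curl. A minimal sketch using only the standard library; the endpoint URL is taken from the listing above, but the shape of the JSON response (its field names) is an assumption, so inspect the actual payload before depending on specific keys:

```python
import json
import urllib.request

# Public endpoint from the listing above (up to 100 requests/day without a key).
API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "computer-vision/anuragranj/cc")


def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the repo-quality record and parse it as JSON.

    The response is assumed to be a JSON object; no specific field
    names are guaranteed by the listing.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality()
    print(json.dumps(data, indent=2))
```

With a free API key (1,000 requests/day), the key would presumably be attached to the request, but the listing does not document how (header vs. query parameter), so that part is omitted here.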
Higher-rated alternatives
3DOM-FBK/deep-image-matching
Multiview matching with deep-learning and hand-crafted local features for COLMAP and other SfM...
suhangpro/mvcnn
Multi-view CNN (MVCNN) for shape recognition
zouchuhang/LayoutNet
Torch implementation of our CVPR 18 paper: "LayoutNet: Reconstructing the 3D Room Layout from a...
andyzeng/tsdf-fusion-python
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
andyzeng/tsdf-fusion
Fuse multiple depth frames into a TSDF voxel volume.