drprojects/DeepViewAgg
[CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation in the Wild for Large-Scale 3D Semantic Segmentation"
This project labels large-scale 3D environments, such as urban landscapes or indoor spaces, by combining raw 3D point clouds with standard 2D images. It takes a point cloud, the corresponding images, and their camera poses as input, and outputs a semantically segmented 3D scene in which objects and regions are assigned class labels. It suits urban planners, autonomous-vehicle engineers, and architects working with complex 3D scans.
Use this if you need highly accurate 3D semantic segmentation for vast environments using both 3D point clouds and 2D images, without needing to manually colorize point clouds or use specialized depth cameras.
Not ideal if you only work with 3D data and don't have corresponding image sets, or if you don't have access to high-performance computing resources.
Stars
235
Forks
24
Language
Python
License
—
Category
Computer Vision
Last pushed
Feb 11, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/drprojects/DeepViewAgg"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Related tools
drprojects/superpoint_transformer
Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D...
yuxumin/PoinTr
[ICCV 2021 Oral] PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers
charlesq34/frustum-pointnets
Frustum PointNets for 3D Object Detection from RGB-D Data
facebookresearch/votenet
Deep Hough Voting for 3D Object Detection in Point Clouds
Easonyesheng/A2PM-MESA
[CVPR'24 & TPAMI'26] Area to Point Matching Framework