drprojects/DeepViewAgg

[CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in "Learning Multi-View Aggregation In the Wild for Large-Scale 3D Semantic Segmentation"

Score: 51 / 100 (Established)

This project helps professionals accurately label large-scale 3D environments, like urban landscapes or indoor spaces, by combining raw 3D point clouds with standard 2D images. It takes raw 3D point cloud data along with corresponding images and their camera positions as input. The output is a precisely segmented 3D environment, where different objects and areas are clearly identified. This tool is ideal for urban planners, autonomous vehicle engineers, or architects working with complex 3D scans.


Use this if you need highly accurate 3D semantic segmentation for vast environments using both 3D point clouds and 2D images, without needing to manually colorize point clouds or use specialized depth cameras.

Not ideal if you only work with 3D data and don't have corresponding image sets, or if you don't have access to high-performance computing resources.

3D-mapping urban-planning autonomous-driving architectural-scanning environmental-modeling
No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 235
Forks: 24
Language: Python
License:
Last pushed: Feb 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/drprojects/DeepViewAgg"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.