CVMI-Lab/MarS3D
(CVPR 2023) MarS3D: A Plug-and-Play Motion-Aware Model for Semantic Segmentation on Multi-Scan 3D Point Clouds
This project helps self-driving car engineers and researchers improve how autonomous vehicles perceive their surroundings. It takes a temporal sequence of 3D point cloud scans from LiDAR sensors and outputs more accurate per-point classifications of objects and regions, including ones that are moving. The result is better semantic segmentation for environmental perception systems.
No commits in the last 6 months.
Use this if you need to identify and categorize objects in dynamic 3D environments, especially when working with multiple LiDAR scans captured over time.
Not ideal if you are working with single-scan 3D data or do not need motion awareness for semantic segmentation.
Stars
67
Forks
8
Language
Python
License
Apache-2.0
Category
Last pushed
Jul 31, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/CVMI-Lab/MarS3D"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
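The same endpoint can also be queried programmatically. Below is a minimal Python sketch using only the standard library; the response schema is not documented on this page, so the code simply parses and prints whatever JSON the API returns (the `fetch_quality` helper name is ours, not part of the API):

```python
import json
import urllib.request

# Quality-data endpoint shown in the curl example above.
# Free tier: 100 requests/day without a key, 1,000/day with a free key.
URL = "https://pt-edge.onrender.com/api/v1/quality/computer-vision/CVMI-Lab/MarS3D"

def fetch_quality(url: str = URL) -> dict:
    """Fetch the repository's quality data and return it as a dict.

    The response schema is undocumented here, so no fields are assumed.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality()
    print(json.dumps(data, indent=2))
```

If you have an API key, check the service's documentation for how to pass it; the listing above does not specify the header or query parameter to use.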
Higher-rated alternatives
drprojects/superpoint_transformer
Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D...
yuxumin/PoinTr
[ICCV 2021 Oral] PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers
charlesq34/frustum-pointnets
Frustum PointNets for 3D Object Detection from RGB-D Data
drprojects/DeepViewAgg
[CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in...
facebookresearch/votenet
Deep Hough Voting for 3D Object Detection in Point Clouds