CVMI-Lab/MarS3D

(CVPR 2023) MarS3D: A Plug-and-Play Motion-Aware Model for Semantic Segmentation on Multi-Scan 3D Point Clouds

Score: 37 / 100 (Emerging)

This project helps self-driving-car engineers and researchers improve how autonomous vehicles perceive their surroundings. It takes a sequence of 3D point cloud scans from LiDAR sensors and outputs more accurate classifications of objects and regions within those scans, even in the presence of motion, resulting in better semantic segmentation for environmental perception systems.

No commits in the last 6 months.

Use this if you need to precisely identify and categorize objects in dynamic 3D environments, especially when dealing with data from multiple LiDAR scans over time.

Not ideal if you are working with single-scan 3D data or do not require enhanced motion awareness for semantic segmentation.

autonomous-driving lidar-perception 3d-scene-understanding environmental-modeling
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 67
Forks: 8
Language: Python
License: Apache-2.0
Last pushed: Jul 31, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/CVMI-Lab/MarS3D"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
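The same endpoint can be called programmatically. Below is a minimal Python sketch using only the standard library; the `quality_url` and `fetch_quality` helper names are illustrative, and the assumption that the endpoint returns a JSON body is not confirmed by this page.

```python
import json
import urllib.request

# Base path taken from the curl example above; everything after it is
# category/owner/repo.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality report; assumes the API responds with JSON."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example, without an API key (100 requests/day).
    print(quality_url("computer-vision", "CVMI-Lab", "MarS3D"))
```

With a free key, the documented limit rises to 1,000 requests/day; how the key is passed (header or query parameter) is not specified on this page.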