charlesq34/frustum-pointnets

Frustum PointNets for 3D Object Detection from RGB-D Data

Quality score: 51 / 100 (Established)

This project helps self-driving car engineers and robotics researchers identify and locate objects like cars, pedestrians, and cyclists in 3D space. It takes camera images and 3D point cloud data (like from LiDAR) as input and outputs precise 3D bounding boxes around detected objects. Anyone developing autonomous systems that need to 'see' and understand their environment in three dimensions would use this.

1,659 stars. No commits in the last 6 months.

Use this if you need to accurately detect and locate various objects in 3D from combined camera and depth sensor data for autonomous navigation or scene understanding.

Not ideal if you only have standard 2D images and no 3D depth data, or if you primarily need object detection for static, non-real-time analysis.

autonomous-vehicles robotics 3d-scene-perception object-localization sensor-fusion
Flags: stale (no pushes in 6 months) · no published package · no known dependents
Score breakdown (four subscores, 25 points each, summing to the overall 51/100):
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25


Stars: 1,659
Forks: 533
Language: Python
License: Apache-2.0
Last pushed: Mar 24, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/charlesq34/frustum-pointnets"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.