charlesq34/frustum-pointnets
Frustum PointNets for 3D Object Detection from RGB-D Data
This project helps self-driving car engineers and robotics researchers identify and locate objects like cars, pedestrians, and cyclists in 3D space. It takes camera images and 3D point cloud data (like from LiDAR) as input and outputs precise 3D bounding boxes around detected objects. Anyone developing autonomous systems that need to 'see' and understand their environment in three dimensions would use this.
1,659 stars. No commits in the last 6 months.
Use this if you need to accurately detect and locate various objects in 3D from combined camera and depth sensor data for autonomous navigation or scene understanding.
Not ideal if you only have standard 2D images and no 3D depth data, or if you primarily need object detection for static, non-real-time analysis.
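The core idea of the pipeline is to lift each 2D detection box into a 3D viewing frustum and keep only the depth points that project inside that box before running the 3D network. The sketch below illustrates just that filtering step with a simple pinhole camera model; `points_in_frustum`, the toy intrinsics, and the example cloud are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def points_in_frustum(points, box2d, K):
    """Keep 3D points whose image projection falls inside a 2D box.

    points: (N, 3) array in the camera frame (z pointing forward).
    box2d:  (xmin, ymin, xmax, ymax) in pixel coordinates.
    K:      (3, 3) pinhole camera intrinsic matrix.
    """
    # Project each point to pixels: u = fx*x/z + cx, v = fy*y/z + cy.
    z = points[:, 2]
    u = K[0, 0] * points[:, 0] / z + K[0, 2]
    v = K[1, 1] * points[:, 1] / z + K[1, 2]
    xmin, ymin, xmax, ymax = box2d
    # A point is in the frustum if it projects inside the box and lies
    # in front of the camera.
    mask = (u >= xmin) & (u <= xmax) & (v >= ymin) & (v <= ymax) & (z > 0)
    return points[mask]

# Toy example: a 500px-focal-length camera and three points.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
cloud = np.array([[0.0, 0.0, 10.0],   # projects to the image center
                  [5.0, 0.0, 10.0],   # projects far to the right
                  [0.0, 0.0, -5.0]])  # behind the camera
frustum = points_in_frustum(cloud, (300, 220, 340, 260), K)
# Only the first point survives the frustum test.
```

In the actual method, the surviving frustum points are then fed to PointNet-based networks for instance segmentation and amodal 3D box regression.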
Stars: 1,659
Forks: 533
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 24, 2020
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/charlesq34/frustum-pointnets"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
drprojects/superpoint_transformer
Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D...
yuxumin/PoinTr
[ICCV 2021 Oral] PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers
drprojects/DeepViewAgg
[CVPR'22 Best Paper Finalist] Official PyTorch implementation of the method presented in...
facebookresearch/votenet
Deep Hough Voting for 3D Object Detection in Point Clouds
Easonyesheng/A2PM-MESA
[CVPR'24 & TPAMI'26] Area to Point Matching Framework