anshulpaigwar/Frustum-Pointpillars

Frustum-PointPillars: A Multi-Stage Approach for 3D Object Detection using RGB Camera and LiDAR

Score: 42 / 100 (Emerging)

This project helps autonomous-vehicle engineers improve how self-driving cars 'see' their surroundings in 3D. It fuses raw data from the car's RGB cameras and LiDAR sensors to locate objects such as pedestrians and cars in 3D space, giving perception engineers more accurate input for decision-making and path planning.

No commits in the last 6 months.

Use this if you are working on autonomous vehicle perception and need to accurately detect 3D objects, especially small ones like pedestrians, using both camera and LiDAR data.

Not ideal if your application doesn't involve autonomous vehicles or 3D object detection, or if it relies solely on 2D image data without LiDAR.

autonomous-driving 3D-object-detection vehicle-perception LiDAR-processing robotics
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 18 / 25


Stars: 61
Forks: 15
Language: Python
License: MIT
Last pushed: May 04, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/anshulpaigwar/Frustum-Pointpillars"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
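For scripted access, the endpoint above can be called from Python with the standard library alone. This is a minimal sketch: the URL pattern is inferred from the single example shown here, and the helper names (`build_quality_url`, `fetch_quality`) and the response schema are assumptions, not part of the documented API.

```python
# Sketch of calling the quality API with only the Python standard library.
# The /{category}/{owner}/{repo} path shape is inferred from one example
# URL; the JSON response schema is not documented here, so we decode it
# as a generic dict rather than assuming specific fields.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Compose the per-repo quality endpoint URL (inferred pattern)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (schema not guaranteed)."""
    url = build_quality_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(build_quality_url("computer-vision",
                            "anshulpaigwar",
                            "Frustum-Pointpillars"))
```

Without an API key this stays within the 100-requests/day anonymous limit; a key (if you register for one) would presumably be passed as a header or query parameter, but the page does not specify how.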