sunnyHelen/JPerceiver

[ECCV 2022] JPerceiver: Joint Perception Network for Depth, Pose and Layout Estimation in Driving Scenes

Quality score: 30 / 100 (Emerging)

This project helps autonomous driving engineers and researchers accurately perceive complex driving scenes. By taking a monocular video sequence as input, it simultaneously estimates depth, vehicle motion (visual odometry), and the bird's-eye-view layout of roads and vehicles. The output provides a more consistent and scale-aware understanding of the environment, crucial for motion planning.
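To make the "joint" interface concrete, here is a minimal, purely hypothetical sketch of what the per-frame outputs look like. Every name, shape, and the stub forward pass below are illustrative assumptions, not JPerceiver's actual code or API:

# Hypothetical sketch: one monocular frame in, three aligned outputs back.
# All names and array shapes are assumptions for illustration only.
from dataclasses import dataclass
import numpy as np

@dataclass
class JointPerception:
    depth: np.ndarray           # (H, W) per-pixel, scale-aware depth
    pose: np.ndarray            # (4, 4) relative camera pose vs. previous frame
    road_layout: np.ndarray     # (B, B) bird's-eye-view road occupancy grid
    vehicle_layout: np.ndarray  # (B, B) bird's-eye-view vehicle occupancy grid

def perceive(frame: np.ndarray) -> JointPerception:
    """Placeholder standing in for a real forward pass of the network."""
    h, w = frame.shape[:2]
    bev = 256  # assumed BEV grid resolution
    return JointPerception(
        depth=np.zeros((h, w), dtype=np.float32),
        pose=np.eye(4, dtype=np.float32),
        road_layout=np.zeros((bev, bev), dtype=np.float32),
        vehicle_layout=np.zeros((bev, bev), dtype=np.float32),
    )

# Feed a dummy 384x640 RGB frame through the stub.
outputs = perceive(np.zeros((384, 640, 3), dtype=np.uint8))
print(outputs.depth.shape, outputs.pose.shape, outputs.road_layout.shape)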

No commits in the last 6 months.

Use this if you need to extract precise, scale-aware depth, vehicle pose, and road/vehicle layout estimates simultaneously from single-camera video for autonomous driving applications.

Not ideal if your application does not involve autonomous driving scenarios or if you only need to perform one of these perception tasks in isolation.

autonomous-driving scene-perception robot-navigation computer-vision motion-planning
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 8 / 25
Community: 13 / 25


Stars: 79
Forks: 10
Language: Python
License: None
Last pushed: Nov 04, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/sunnyHelen/JPerceiver"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
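For scripted use, a minimal Python sketch of the same request is shown below. The endpoint is the one given above; the response schema is not documented here, so the example simply dumps whatever JSON the API returns rather than assuming field names:

# Fetch the quality data for this repo using only the standard library.
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "computer-vision/sunnyHelen/JPerceiver")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Print the full response; key names are unknown, so no fields are assumed.
print(json.dumps(data, indent=2))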