sunnyHelen/JPerceiver
[ECCV 2022] JPerceiver: Joint Perception Network for Depth, Pose and Layout Estimation in Driving Scenes
This project helps autonomous driving engineers and researchers accurately perceive complex driving scenes. By taking a monocular video sequence as input, it simultaneously estimates depth, vehicle motion (visual odometry), and the bird's-eye-view layout of roads and vehicles. The output provides a more consistent and scale-aware understanding of the environment, crucial for motion planning.
No commits in the last 6 months.
Use this if you need to extract precise, scale-aware depth, vehicle pose, and road/vehicle layout information simultaneously from single camera video for autonomous driving applications.
Not ideal if your application does not involve autonomous driving scenarios or if you only need to perform one of these perception tasks in isolation.
Stars: 79
Forks: 10
Language: Python
License: —
Category: computer-vision
Last pushed: Nov 04, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/sunnyHelen/JPerceiver"
The API is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000/day.
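The same endpoint can be called from Python. A minimal sketch, assuming only the URL shape shown in the curl example above (the `quality_url` helper and the fixed `computer-vision` category segment are illustrative, not part of an official client):

```python
# Build the quality-API URL for a given repository, mirroring the
# curl example above. The path shape is taken from that example;
# quality_url is a hypothetical helper, not an official client.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API endpoint for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "sunnyHelen", "JPerceiver")
print(url)
# → https://pt-edge.onrender.com/api/v1/quality/computer-vision/sunnyHelen/JPerceiver
```

To actually fetch the data, pass `url` to any HTTP client (e.g. `urllib.request.urlopen(url)` from the standard library) and parse the JSON response; the response schema is not documented here, so inspect it before relying on specific fields.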
Higher-rated alternatives
changh95/visual-slam-roadmap
Roadmap to become a Visual-SLAM developer in 2026
coperception/coperception
An SDK for multi-agent collaborative perception.
w111liang222/lidar-slam-detection
LSD (LiDAR SLAM & Detection) is an open source perception architecture for autonomous vehicle/robotic
ika-rwth-aachen/Cam2BEV
TensorFlow Implementation for Computing a Semantically Segmented Bird's Eye View (BEV) Image...
lvchuandong/Awesome-Multi-Camera-3D-Occupancy-Prediction
Awesome papers and code about Multi-Camera 3D Occupancy Prediction, such as TPVFormer,...