vijpandaturtle/hydranet-autonomous-driving

A multi-task learning algorithm for autonomous driving tasks

29 / 100 · Experimental

This project performs multi-task perception for autonomous driving: given raw camera footage or images, it simultaneously produces two outputs: a segmented image outlining scene elements such as roads, pedestrians, and vehicles, and a depth map estimating how far away objects are. Self-driving car engineers and perception researchers can use it to develop or improve their systems.
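The shared-backbone, two-head layout described above can be sketched as follows. This is a minimal illustration of the data flow (one input, two outputs) in plain Python with NumPy; the function names and placeholder operations are illustrative stand-ins, not the repo's actual trained network.

```python
import numpy as np

# Illustrative sketch of a shared-encoder, two-head multi-task layout.
# Each stage is a placeholder function so the data flow is clear:
# the features are computed once, then consumed by both task heads.

def shared_encoder(image: np.ndarray) -> np.ndarray:
    """Stand-in backbone: reduce the image to a coarse feature map."""
    # Downsample by averaging 4x4 blocks (placeholder for conv layers).
    h, w = image.shape[:2]
    cropped = image[: h - h % 4, : w - w % 4]
    return cropped.reshape(h // 4, 4, w // 4, 4, -1).mean(axis=(1, 3))

def segmentation_head(features: np.ndarray, num_classes: int = 3) -> np.ndarray:
    """Stand-in head: one class label per feature-map cell."""
    return np.argmax(np.random.rand(*features.shape[:2], num_classes), axis=-1)

def depth_head(features: np.ndarray) -> np.ndarray:
    """Stand-in head: one depth value per feature-map cell."""
    return features.mean(axis=-1)

image = np.random.rand(64, 64, 3)      # fake camera frame
features = shared_encoder(image)       # computed once, shared by both heads
seg_map = segmentation_head(features)  # output 1: class label per cell
depth_map = depth_head(features)       # output 2: distance per cell

print(seg_map.shape, depth_map.shape)  # both (16, 16)
```

The point of the shared encoder is that the expensive feature extraction is paid for once, while each head stays small and task-specific.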

No commits in the last 6 months.

Use this if you need to perform both semantic segmentation (identifying objects) and depth estimation (measuring distances) on visual data for autonomous driving applications.

Not ideal if you require highly accurate, production-ready segmentation, as the current segmentation output is described as noisy.

autonomous-driving robotics-perception computer-vision vehicle-safety scene-understanding
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 16 / 25


Stars: 12
Forks: 6
Language: Jupyter Notebook
License: None
Last pushed: Nov 24, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vijpandaturtle/hydranet-autonomous-driving"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
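The curl command above can also be consumed from Python. The sketch below assumes the endpoint returns JSON whose sub-scores mirror the four categories shown on this page; the field names (`scores`, `maintenance`, etc.) are assumptions about the response shape, not a documented schema. Only the URL itself comes from this page.

```python
import json
from urllib.request import urlopen

API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/vijpandaturtle/hydranet-autonomous-driving")

def total_score(payload: dict) -> int:
    """Sum the four sub-scores into the /100 total shown on the page.

    Field names here are assumptions based on the page layout; check
    the actual response before relying on them.
    """
    keys = ("maintenance", "adoption", "maturity", "community")
    return sum(payload["scores"][k] for k in keys)

# Example with the values shown on this page:
sample = {"scores": {"maintenance": 0, "adoption": 5,
                     "maturity": 8, "community": 16}}
print(total_score(sample))  # 29, matching the 29/100 displayed above

# Live fetch (uncomment; requires network access):
# with urlopen(API_URL) as resp:
#     payload = json.load(resp)
#     print(total_score(payload))
```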