ENSTA-U2IS-AI/infraParis

Multimodal & infrared automotive dataset. Published at WACV 2024 (Oral).

Score: 21 / 100 (Experimental)

InfraParis is a multimodal image dataset for autonomous driving research, pairing standard RGB images with aligned depth and infrared data. It supports tasks such as object detection (e.g., pedestrians), semantic segmentation, and depth estimation. In practice, you feed the images into your perception algorithms and use the provided labels to train and evaluate them.
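Working with a multimodal dataset like this usually starts by matching the RGB, depth, and infrared frames that belong to the same scene. A minimal sketch of that pairing step, assuming a per-modality folder layout with shared filename stems (this layout is hypothetical, not InfraParis's documented structure):

```python
from pathlib import Path


def pair_modalities(root):
    """Return {stem: {modality: path}} for frames present in all three folders.

    Assumes a layout of root/rgb, root/depth, root/infrared holding PNGs
    whose filename stems identify the scene; adapt to the dataset's
    actual directory structure.
    """
    modalities = ("rgb", "depth", "infrared")
    pairs = {}
    for mod in modalities:
        for p in sorted((Path(root) / mod).glob("*.png")):
            pairs.setdefault(p.stem, {})[mod] = p
    # Keep only scenes that have all three modalities.
    return {k: v for k, v in pairs.items() if len(v) == len(modalities)}
```

Frames missing one of the modalities are dropped, so downstream training code can rely on every sample being complete.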

No commits in the last 6 months.

Use this if you are developing or evaluating algorithms for autonomous vehicles that need to process and understand visual information from multiple sensor types, especially in varying conditions.

Not ideal if your autonomous driving research focuses solely on simulation data, or if you require sensor modalities beyond RGB, depth, and infrared.

Tags: autonomous-driving, sensor-fusion, object-detection, semantic-segmentation, depth-estimation
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 8 / 25


Stars: 9
Forks: 1
Language: JavaScript
License: none
Last pushed: May 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/ENSTA-U2IS-AI/infraParis"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
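To work with the response programmatically rather than via curl, the JSON body can be fetched and parsed with the Python standard library. A minimal sketch, assuming the payload exposes the overall score under a top-level "score" key (the key names here are assumptions; inspect the actual response first):

```python
import json
from urllib.request import urlopen

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "computer-vision/ENSTA-U2IS-AI/infraParis")


def parse_quality(payload: str) -> int:
    """Extract the overall score from an API response body.

    The "score" key is an assumption about the payload shape.
    """
    return json.loads(payload)["score"]


def fetch_quality(url: str = URL) -> int:
    # Live network call; counts against the 100 requests/day anonymous limit.
    with urlopen(url) as resp:
        return parse_quality(resp.read().decode())


# Offline example with a sample payload (values mirror this page):
print(parse_quality('{"score": 21, "label": "Experimental"}'))  # prints 21
```

Separating parsing from fetching keeps the payload handling testable without hitting the rate-limited endpoint.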