ENSTA-U2IS-AI/infraParis
Multimodal & infrared automotive dataset. Published at WACV 2024 (Oral).
InfraParis is a multimodal dataset for autonomous driving research, pairing standard RGB images with depth and infrared data. The provided labels support tasks such as pedestrian and object detection, semantic scene understanding, and depth estimation: feed the images into your perception algorithms and use the annotations to train and evaluate them.
No commits in the last 6 months.
Use this if you are developing or evaluating algorithms for autonomous vehicles that need to process and understand visual information from multiple sensor types, especially in varying conditions.
Not ideal if your autonomous driving research focuses solely on simulation data, or if you require sensor modalities beyond RGB, depth, and infrared.
Stars: 9
Forks: 1
Language: JavaScript
License: —
Category: —
Last pushed: May 06, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/ENSTA-U2IS-AI/infraParis"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
3DOM-FBK/deep-image-matching
Multiview matching with deep-learning and hand-crafted local features for COLMAP and other SfM...
suhangpro/mvcnn
Multi-view CNN (MVCNN) for shape recognition
zouchuhang/LayoutNet
Torch implementation of our CVPR 18 paper: "LayoutNet: Reconstructing the 3D Room Layout from a...
andyzeng/tsdf-fusion-python
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
andyzeng/tsdf-fusion
Fuse multiple depth frames into a TSDF voxel volume.