darrenjkt/SEE-MTDA

(RA-L 2022) See Eye to Eye: A Lidar-Agnostic 3D Detection Framework for Unsupervised Multi-Target Domain Adaptation.

Score: 39 / 100 (Emerging)

This project helps automotive engineers and robotics developers adapt 3D object detection systems to work with various lidar sensors without extensive retraining. It takes raw lidar data from different manufacturers and models, transforms it into a sensor-agnostic representation to normalize cross-sensor inconsistencies, and outputs 3D bounding-box detections for objects such as cars and pedestrians. It is aimed at professionals building autonomous vehicles or robotics systems that rely on precise environmental sensing.

No commits in the last 6 months.

Use this if you need to deploy a 3D object detection system across vehicles or robots equipped with different lidar sensors and want to avoid time-consuming model fine-tuning for each new sensor.

Not ideal if your application uses only one specific lidar sensor type and does not require adaptability across varied lidar hardware.

autonomous-driving robotics lidar-sensing 3d-object-detection sensor-fusion
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 51
Forks: 8
Language: Python
License: Apache-2.0
Last pushed: Feb 26, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/darrenjkt/SEE-MTDA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
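If you'd rather call the endpoint from code than from curl, a minimal sketch follows. It only constructs the endpoint URL from the pattern shown in the curl example above; the response schema isn't documented here, so parsing of specific fields is deliberately left out.

```python
# Sketch: build the quality-API URL for a repo, following the pattern
# from the curl example (https://pt-edge.onrender.com/api/v1/quality/...).
# The response format is not documented on this page, so we stop at the URL.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API endpoint for a repo's quality data."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("computer-vision", "darrenjkt", "SEE-MTDA"))
# prints the same URL as the curl example above
```

From there you could fetch it with any HTTP client (e.g. `urllib.request` or `requests`) and inspect the returned JSON keys before relying on them.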