heitorrapela/ModTr

[ECCV2024] ModTr: Modality Translation for Object Detection Adaptation Without Forgetting Prior Knowledge

Quality score: 27 / 100 (Experimental)

This project helps computer vision engineers adapt object detection models to new types of sensor data, like infrared images, without losing the model's ability to detect objects in its original data type, such as standard RGB images. It takes an existing object detection model and new sensor data, then produces an adapted model that performs well across both modalities. This is for AI/ML engineers working with multi-modal sensor data in applications like surveillance or autonomous systems.

No commits in the last 6 months.

Use this if you need to train an object detection model to work with a new sensor modality (e.g., infrared) while retaining its strong performance on previously learned modalities (e.g., visible light).

Not ideal if you are looking for an out-of-the-box object detection solution that doesn't require adapting models between different sensor types.

Tags: multi-modal sensing · object detection adaptation · computer vision engineering · sensor fusion · deep learning · fine-tuning
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 5 / 25

How are scores calculated?

Stars: 19
Forks: 1
Language: Python
License: MIT
Last pushed: Nov 28, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/heitorrapela/ModTr"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
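The same endpoint can be queried from Python using only the standard library. This is a minimal sketch, not an official client: the JSON field names of the response are not documented on this page, so the result is treated as opaque JSON rather than parsed into specific fields.

```python
# Sketch of calling the quality API with Python's standard library.
# The response schema is undocumented here, so we only build the URL
# and (optionally) fetch the raw JSON without assuming any field names.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the report URL for a repository, e.g. 'heitorrapela/ModTr'."""
    return f"{API_BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch the quality report and decode it as JSON (needs network access)."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


# Build the URL for the repository shown on this page:
print(quality_url("computer-vision", "heitorrapela/ModTr"))
```

Calling `fetch_quality("computer-vision", "heitorrapela/ModTr")` performs the same request as the curl command above; remember the 100 requests/day limit on the keyless tier.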