heitorrapela/ModTr
[ECCV2024] ModTr: Modality Translation for Object Detection Adaptation Without Forgetting Prior Knowledge
This project helps computer vision engineers adapt object detection models to new types of sensor data, like infrared images, without losing the model's ability to detect objects in its original data type, such as standard RGB images. It takes an existing object detection model and new sensor data, then produces an adapted model that performs well across both modalities. This is for AI/ML engineers working with multi-modal sensor data in applications like surveillance or autonomous systems.
No commits in the last 6 months.
Use this if you need to train an object detection model to work with a new sensor modality (e.g., infrared) while retaining its strong performance on previously learned modalities (e.g., visible light).
Not ideal if you are looking for an out-of-the-box object detection solution that doesn't require adapting models between different sensor types.
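The adaptation pattern the repository implements can be sketched in a few lines. This is a minimal illustration with placeholder components, not the paper's actual code: a small trainable translation module maps the new modality (e.g., infrared) into the input space of a frozen, pretrained detector, so the detector's weights, and therefore its original-modality performance, are never touched.

```python
class FrozenDetector:
    """Stands in for a pretrained RGB detector whose weights are never updated."""

    def detect(self, rgb_like_image):
        # Placeholder: a real detector would return predicted boxes and scores.
        return [{"box": (0, 0, 10, 10), "score": 0.9}]


class ModalityTranslator:
    """Trainable stand-in: maps an IR input toward the detector's expected input."""

    def __init__(self, gain=1.0):
        self.gain = gain  # placeholder for trainable parameters

    def translate(self, ir_image):
        # Placeholder: a real translator would be a small image-to-image network.
        return [[pixel * self.gain for pixel in row] for row in ir_image]


def adapted_detect(translator, detector, ir_image):
    # Only the translator is trained; the detector stays frozen, which is
    # what preserves the prior (RGB) knowledge.
    return detector.detect(translator.translate(ir_image))
```

Because the detector is frozen, the same weights keep serving RGB inputs unchanged, while IR inputs are routed through the translator first.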
Stars
19
Forks
1
Language
Python
License
MIT
Category
Computer Vision
Last pushed
Nov 28, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/heitorrapela/ModTr"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
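If you prefer to consume the endpoint programmatically, a small client can be sketched as follows. The URL structure is taken from the curl example above; the response schema is not documented here, so this sketch only fetches and returns the raw JSON rather than assuming any field names.

```python
import json
import urllib.request


def build_quality_url(category, owner, repo,
                      base="https://pt-edge.onrender.com/api/v1/quality"):
    # Mirrors the path shown in the curl example: /quality/<category>/<owner>/<repo>
    return f"{base}/{category}/{owner}/{repo}"


def fetch_quality(category, owner, repo):
    # No API key is required for up to 100 requests/day.
    url = build_quality_url(category, owner, repo)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(build_quality_url("computer-vision", "heitorrapela", "ModTr"))
```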
Higher-rated alternatives
mahmoudnafifi/C5
Reference code for the paper "Cross-Camera Convolutional Color Constancy" (ICCV 2021)
YigitGunduc/vit-gan
paper: https://arxiv.org/abs/2110.09305
howardyclo/CLCC-CVPR21
An official TensorFlow implementation of “CLCC: Contrastive Learning for Color Constancy”...
lorenzobloise/transmission_tower_electrical_cable_instance_segmentation
This repository contains the code used to train and test a Mask R-CNN model for instance...