SunghwanHong/Cost-Aggregation-transformers

Official implementation of CATs

Quality score: 37 / 100 (Emerging)

This project helps computer vision researchers and engineers accurately find corresponding points between two different images, even when objects are deformed or viewed from new angles. You input a pair of images, and it outputs a precise mapping of points showing how parts of one object relate to the other. It is aimed at anyone who needs semantic correspondence between images that differ in perspective, pose, or appearance.

134 stars. No commits in the last 6 months.

Use this if you need to establish highly accurate semantic correspondence between objects in different images, especially when those objects might vary significantly in appearance or pose.

Not ideal if you are looking for a simple object detection or classification tool, as this focuses specifically on detailed pixel-level correspondence.

computer-vision image-matching feature-correspondence semantic-alignment visual-recognition
Status: stale for 6 months · no package published · no known dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 11 / 25
Stars: 134
Forks: 11
Language: Python
License: GPL-3.0
Last pushed: Jan 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/SunghwanHong/Cost-Aggregation-transformers"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
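For programmatic access, the curl command above can be mirrored in Python. This is a minimal sketch: the endpoint pattern (`/api/v1/quality/{category}/{owner}/{repo}`) is taken from the curl example, while the assumption that the API returns JSON, and the `fetch_quality` helper name, are illustrative rather than documented.

```python
import json
import urllib.request

# Base endpoint, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository, mirroring the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the payload (assumes the API responds with JSON)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Same URL as the curl example for this repository:
url = quality_url("ml-frameworks", "SunghwanHong", "Cost-Aggregation-transformers")
print(url)
```

Within the free tier, a simple loop over repositories stays well under the 100-requests/day limit.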