hardyqr/HAL
[AAAI'20] Code release for "HAL: Improved Text-Image Matching by Mitigating Visual Semantic Hubs".
This project offers an improved method for matching images with descriptive text, making it easier to find relevant visuals for specific textual content or vice versa. It specifically targets the "hubness" problem, in which a few embeddings show up as nearest neighbors to a disproportionate number of queries and degrade retrieval quality. Given a collection of images and their corresponding text descriptions, it trains a model that better captures the semantic relationship between visual and textual data. It is primarily useful for machine learning engineers and researchers working on tasks such as image search, content recommendation, or multimodal understanding.
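To illustrate the hubness problem the project addresses, here is a minimal sketch (not the paper's actual loss) using toy embeddings: captions are matched to images by cosine similarity, and a simple centering step downweights "hub" images that score highly against many unrelated captions. All names and data below are hypothetical.

```python
import numpy as np

# Hypothetical toy setup: 4 images and 4 captions embedded in a shared
# 8-dimensional space (a real model would learn these embeddings).
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = rng.normal(size=(4, 8))

def l2norm(x):
    """Normalize rows to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

img, txt = l2norm(img), l2norm(txt)
sim = txt @ img.T  # sim[i, j] = similarity of caption i to image j

# A "hub" image ranks highly for many unrelated captions. One simple
# mitigation (illustrative only, not HAL's method) is to subtract each
# image's mean similarity over all captions before ranking, penalizing
# images that are uniformly close to everything.
debiased = sim - sim.mean(axis=0, keepdims=True)

# Rank images for each caption, best match first.
ranks = np.argsort(-debiased, axis=1)
```

The actual HAL approach uses a hubness-aware weighted loss during training rather than a post-hoc correction; this sketch only conveys why naive nearest-neighbor ranking in a joint embedding space can be skewed by hubs.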
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher looking to improve the accuracy and robustness of your text-to-image or image-to-text matching models, especially when dealing with large datasets like MS-COCO or Flickr30k.
Not ideal if you are an end-user without a technical background in machine learning or deep learning, as this project requires familiarity with Python, PyTorch, and model training workflows.
Stars: 38
Forks: 4
Language: Python
License: —
Category:
Last pushed: Oct 04, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hardyqr/HAL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
MediaTek-NeuroPilot/mai21-learned-smartphone-isp
The official codebase for the Learned Smartphone ISP Challenge in MAI @ CVPR 2021
ashishpatel26/365-Days-Computer-Vision-Learning-Linkedin-Post
365 Days Computer Vision Learning Linkedin Post
amusi/daily-paper-computer-vision
A daily log of curated papers in computer vision, deep learning, and machine learning.
extreme-assistant/ICCV2023-Paper-Code-Interpretation
A collection of ICCV 2021/2019/2017 papers, code, analyses, and livestreams, compiled by the 极市 team.
extreme-assistant/survey-computer-vision-2020
Computer vision survey papers from 2020-2021, organized by topic.