hardyqr/HAL

[AAAI'20] Code release for "HAL: Improved Text-Image Matching by Mitigating Visual Semantic Hubs".

Quality score: 25 / 100 (Experimental)

This project implements an improved method for matching images with descriptive text, making it easier to retrieve relevant images for a given caption or vice versa. It takes a collection of images and their paired text descriptions, trains on them, and produces a model that better captures the semantic relationship between visual and textual data. It is primarily useful for machine learning engineers and researchers working on image search, content recommendation, or multimodal understanding.
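To make the matching task concrete, here is a minimal, generic sketch of cross-modal retrieval by cosine similarity between learned embeddings. This is illustrative only, not the repository's actual code: the function names and toy vectors are assumptions, and HAL's contribution (mitigating visual-semantic hubs) goes beyond this plain nearest-neighbor scoring.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def best_caption(image_emb, caption_embs):
    """Index of the caption embedding closest to the image embedding."""
    return max(range(len(caption_embs)),
               key=lambda i: cosine(image_emb, caption_embs[i]))

# Toy embeddings: the image vector points mostly along caption 1's direction.
image = [0.1, 0.9, 0.0]
captions = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.1], [0.0, 0.0, 1.0]]
print(best_caption(image, captions))  # -> 1
```

In practice the embeddings come from trained image and text encoders, and retrieval ranks all candidates by this score rather than taking a single argmax.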

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to improve the accuracy and robustness of your text-to-image or image-to-text matching models, especially when dealing with large datasets like MS-COCO or Flickr30k.

Not ideal if you are an end-user without a technical background in machine learning or deep learning, as this project requires familiarity with Python, PyTorch, and model training workflows.

Tags: deep-learning, computer-vision, natural-language-processing, multimodal-ai, information-retrieval
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 10 / 25


Stars: 38
Forks: 4
Language: Python
License: none
Last pushed: Oct 04, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hardyqr/HAL"

Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
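The same request can be made from Python. This is a sketch under two assumptions: the URL path scheme follows the curl example above, and the endpoint returns a JSON body (the response fields are not documented here, so none are named).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-endpoint URL, following the curl example's path scheme."""
    return f"{BASE}/{category}/{repo}"

def fetch_quality(category: str, repo: str) -> dict:
    """Fetch a repo's quality record; assumes the endpoint returns JSON."""
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)

# Example (requires network access):
# data = fetch_quality("ml-frameworks", "hardyqr/HAL")
```

With no API key the anonymous 100 requests/day limit applies, so cache responses rather than re-fetching in a loop.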