ExcelsiorCJH/CLIP

CLIP: Learning Transferable Visual Models From Natural Language Supervision

Score: 27 / 100 (Experimental)

This project helps you connect images with their natural language descriptions, enabling powerful search and classification. You provide a dataset of images paired with text, and it produces a model capable of understanding visual concepts from text. Anyone working with large image collections for tasks like content moderation, stock photography tagging, or visual search could use this.
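The matching described above rests on one idea: embed images and captions into a shared space, then pair them by cosine similarity. A minimal sketch of that retrieval step, using made-up placeholder embeddings rather than the project's actual encoders:

```python
import numpy as np

def best_caption(image_emb: np.ndarray, text_embs: np.ndarray) -> int:
    """Return the index of the caption whose embedding is most similar
    to the image embedding (CLIP-style cosine-similarity matching)."""
    # L2-normalize so the dot product equals cosine similarity
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))

# Toy example: two candidate captions, the second points the same way
# as the image embedding, so index 1 wins.
image = np.array([1.0, 0.0])
captions = np.array([[0.0, 1.0],
                     [1.0, 0.1]])
print(best_caption(image, captions))  # → 1
```

In the real model both embeddings come from trained image and text encoders; the similarity step itself is exactly this simple.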

No commits in the last 6 months.

Use this if you need to build a system that can find images based on text descriptions or categorize images using natural language, without extensive manual tagging.

Not ideal if you're looking for a ready-to-use application and don't want to set up and train a machine learning model yourself.

image-search content-tagging visual-classification digital-asset-management computer-vision-research
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 15 / 25


Stars: 7
Forks: 4
Language: Jupyter Notebook
License: None
Last pushed: Feb 04, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ExcelsiorCJH/CLIP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
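The same endpoint can be queried from Python with only the standard library. The response format is assumed to be JSON here (the field names are not documented on this page, so the payload is returned as-is):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and parse the quality report; assumes a JSON response body."""
    with urllib.request.urlopen(quality_url(category, owner, repo),
                                timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the raw report for this repo (requires network access).
    print(json.dumps(fetch_quality("ml-frameworks", "ExcelsiorCJH", "CLIP"),
                     indent=2))
```

Keeping the URL construction in its own helper makes it easy to query other categories or repositories against the same API.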