SkalskiP/awesome-foundation-and-multimodal-models
👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
This is a curated list of advanced artificial intelligence models that can understand and process multiple types of information, such as images, text, and audio, simultaneously. It serves as a directory of recent research papers, code, and examples for foundation and multimodal models, and will be useful to anyone exploring or applying cutting-edge AI to tasks that span data types, such as visual question answering or object detection.
638 stars. No commits in the last 6 months.
Use this if you need to find state-of-the-art AI models that can interpret and combine information from sources like images, text, or audio for a variety of tasks.
Not ideal if you are looking for ready-to-use, deployable applications rather than a collection of research models and their resources.
Stars: 638
Forks: 45
Language: Python
License: —
Category: —
Last pushed: Feb 29, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/SkalskiP/awesome-foundation-and-multimodal-models"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
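The same endpoint can be called from Python using only the standard library. This is a minimal sketch: the shape of the JSON response is not documented here, so the fetch helper simply parses whatever the API returns, and the network call itself is left commented out as an example.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"


def quality_url(owner: str, repo: str) -> str:
    # Build the per-repository endpoint URL.
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    # Perform the GET request and parse the JSON body.
    # No key is needed for up to 100 requests/day; a free key raises
    # the limit to 1,000/day.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


# Example usage (performs a network request, so commented out here):
# report = fetch_quality("SkalskiP", "awesome-foundation-and-multimodal-models")
# print(json.dumps(report, indent=2))
print(quality_url("SkalskiP", "awesome-foundation-and-multimodal-models"))
```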
Higher-rated alternatives
TheShadow29/awesome-grounding
awesome grounding: A curated list of research papers in visual grounding
microsoft/XPretrain
Multi-modality pre-training
TheShadow29/zsgnet-pytorch
Official implementation of ICCV19 oral paper Zero-Shot grounding of Objects from Natural...
TheShadow29/VidSitu
[CVPR21] Visual Semantic Role Labeling for Video Understanding (https://arxiv.org/abs/2104.00990)
zeyofu/BLINK_Benchmark
This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can...