SufyanDanish/VLM-Survey-
A comprehensive survey of Vision–Language Models: Pretrained models, fine-tuning, prompt engineering, adapters, and benchmark datasets
This resource collects and organizes published research on Vision–Language Models (VLMs), AI systems that jointly understand images and text. It surveys techniques for improving VLM performance, including fine-tuning, prompt engineering, and adapter modules. Researchers and practitioners can use it to understand current trends and challenges in optimizing VLMs for real-world applications such as image captioning, visual question answering, and multimodal retrieval.
No commits in the last 6 months.
Use this if you are a researcher or AI practitioner looking for a consolidated reference of techniques and models to optimize Vision-Language Models for specific tasks, especially focusing on computational efficiency and performance.
Not ideal if you are looking for an implementation-ready library or a step-by-step tutorial for building your own VLM from scratch.
Stars: 9
Forks: —
Language: —
License: —
Category: —
Last pushed: Sep 04, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/SufyanDanish/VLM-Survey-"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
ShiZhengyan/PowerfulPromptFT
[NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining?...
OpenDriveLab/DriveLM
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
MILVLG/prophet
Implementation of CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for...
deepankar27/Prompt_Organizer
Managed Prompt Engineering
mala-lab/NegPrompt
The official implementation of CVPR 24' Paper "Learning Transferable Negative Prompts for...