SuyogKamble/simpleVLM
Building a simple VLM: implementing LLaMA-SmolLM2 from scratch plus a SigLIP2 vision model. KV-caching is supported and implemented from scratch as well.
This is a developer-focused project that helps AI researchers and deep learning engineers understand and experiment with Vision-Language Models (VLMs). It takes images and text as input and produces textual descriptions or insights by combining visual and linguistic information. This tool is for individuals building or studying multimodal AI systems.
Use this if you are an AI researcher or deep learning engineer who needs a clear, modular, and 'from-scratch' implementation of a VLM to learn from, customize, or integrate into your own projects.
Not ideal if you are an end-user looking for a ready-to-use application or API that performs vision-language tasks without requiring deep technical understanding or development work.
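The description advertises KV-caching implemented from scratch. As a rough illustration of the idea (this is a generic sketch, not the repo's actual code), the trick is to store each decoding step's key/value projections so earlier tokens are never re-projected:

```python
import numpy as np

def attend_with_cache(q, k_cache, v_cache, k_new, v_new):
    # Append this step's key/value rows to the cache, then run
    # scaled dot-product attention over the full cached sequence.
    # Shapes (hypothetical): q (1, d), caches (t, d), new rows (1, d).
    k = np.concatenate([k_cache, k_new], axis=0)
    v = np.concatenate([v_cache, v_new], axis=0)
    scores = (q @ k.T) / np.sqrt(q.shape[-1])
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v
    # Return the updated caches so the caller reuses them next step.
    return out, k, v
```

During generation the caller feeds back the returned caches each step, so per-token cost stays linear in sequence length instead of quadratic.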
Stars
7
Forks
4
Language
Jupyter Notebook
License
—
Category
Last pushed
Feb 19, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SuyogKamble/simpleVLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
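For programmatic use, the same endpoint can be built from its parts. A minimal sketch in Python, assuming the URL shape shown in the curl example above (the helper name and quoting choice are illustrative, not part of the API):

```python
import urllib.parse

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Percent-encode each path segment so unusual characters in
    # owner/repo names cannot break the URL (hypothetical helper).
    parts = [urllib.parse.quote(p, safe="") for p in (category, owner, repo)]
    return BASE + "/" + "/".join(parts)

url = quality_url("transformers", "SuyogKamble", "simpleVLM")
```

Fetching `url` with any HTTP client then mirrors the curl call; keyless access is rate-limited as noted above.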
Higher-rated alternatives
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle