AviSoori1x/seemore
From scratch implementation of a vision language model in pure PyTorch
This is a detailed, from-scratch implementation of a vision language model (VLM) in PyTorch. It takes an image and a text prompt as input and generates human-like text output, similar to how modern multimodal AI models understand both images and text. It's designed for machine learning researchers, students, and practitioners who want to deeply understand how these multimodal models work by building one from fundamental components.
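The image-plus-prompt-to-text pipeline described above can be sketched in a few lines of PyTorch: a vision encoder turns image patches into embeddings, a projection maps them into the text model's embedding space, and a transformer decodes the combined sequence. This is a minimal illustrative sketch, not seemore's actual architecture or API; all class names and dimensions here are assumptions.

```python
# Minimal VLM forward-pass sketch (illustrative, not seemore's real code):
# patch-embedding vision encoder -> linear projector -> joint transformer.
import torch
import torch.nn as nn

class TinyVLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, patch_dim=48):
        super().__init__()
        # Vision side: embed flattened image patches, then project them
        # into the text model's embedding space.
        self.patch_embed = nn.Linear(patch_dim, d_model)
        self.projector = nn.Linear(d_model, d_model)
        # Text side: token embeddings plus a small transformer stack.
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, patches, token_ids):
        # patches: (batch, n_patches, patch_dim); token_ids: (batch, seq_len)
        img_tokens = self.projector(self.patch_embed(patches))
        txt_tokens = self.tok_embed(token_ids)
        # Prepend image tokens to the text sequence and decode jointly.
        seq = torch.cat([img_tokens, txt_tokens], dim=1)
        return self.lm_head(self.backbone(seq))

model = TinyVLM()
logits = model(torch.randn(2, 16, 48), torch.randint(0, 1000, (2, 8)))
print(logits.shape)  # (batch, 16 image tokens + 8 text tokens, vocab)
```

The real implementation adds attention over actual image features, causal masking, and autoregressive decoding, but the shape of the data flow is the same.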
255 stars. No commits in the last 6 months.
Use this if you are a machine learning researcher or student who wants to learn the foundational principles of vision language models by examining a complete, transparent, and hackable implementation.
Not ideal if you are looking for an off-the-shelf, production-ready vision language model for immediate application, as this project prioritizes educational value and readability over performance.
Stars
255
Forks
31
Language
Jupyter Notebook
License
MIT
Category
Last pushed
May 06, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AviSoori1x/seemore"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
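The same endpoint can be queried from Python using only the standard library. The response field names are not documented here, so this sketch just prints the raw JSON; the error handling and timeout are assumptions, not part of the API's documented behavior.

```python
# Fetch the quality data for this repo via the public API (stdlib only).
import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/AviSoori1x/seemore"
req = urllib.request.Request(url, headers={"Accept": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)  # response schema not documented here
    print(data)
except OSError as err:
    # Network failures or rate limiting (100 requests/day without a key).
    print(f"request failed: {err}")
```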
Higher-rated alternatives
AI-Hypercomputer/maxtext
A simple, performant and scalable Jax LLM!
rasbt/reasoning-from-scratch
Implement a reasoning LLM in PyTorch from scratch, step by step
mindspore-lab/mindnlp
MindSpore + 🤗Huggingface: Run any Transformers/Diffusers model on MindSpore with seamless...
mosaicml/llm-foundry
LLM training code for Databricks foundation models
rickiepark/llm-from-scratch
Code repository for *Build an LLM from Scratch* (Gilbut, 2025)