alexander-moore/vlm
Composition of Multimodal Language Models From Scratch
This project helps AI researchers and machine learning engineers explore how to combine an existing large language model with an image encoder to create a new multimodal system. It takes a pre-trained LLM and a pre-trained image encoder, inserts an adapter module between them, and trains only that new component so the LLM can use visual information without retraining the core LLM (see the sketch below). Researchers working on multimodal capabilities can use it to build and experiment with novel vision-language models.
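For readers interested in the architecture, here is a minimal PyTorch sketch of that idea (the repository itself is written as Jupyter notebooks). It assumes a pooled image-encoder feature vector and a frozen causal LLM; the names used here (VisionAdapter, vision_dim, llm_dim, num_tokens, freeze_base_models) are illustrative assumptions, not modules from the repository.

import torch
import torch.nn as nn

class VisionAdapter(nn.Module):
    # Hypothetical adapter: maps pooled image-encoder features to a short
    # sequence of vectors in the LLM's embedding space.
    def __init__(self, vision_dim: int, llm_dim: int, num_tokens: int = 8):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim * num_tokens),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, vision_dim) pooled encoder output
        batch = image_features.shape[0]
        out = self.proj(image_features)              # (batch, llm_dim * num_tokens)
        return out.view(batch, self.num_tokens, -1)  # (batch, num_tokens, llm_dim)

def freeze_base_models(image_encoder: nn.Module, llm: nn.Module) -> None:
    # Only the adapter is trained; both pretrained models stay frozen.
    for model in (image_encoder, llm):
        for p in model.parameters():
            p.requires_grad = False

In LLaVA-style setups, the adapter's output vectors are prepended to the text token embeddings before the LLM forward pass, so the language model attends to visual context as if it were extra tokens; whether this repository follows exactly that recipe is not stated on this page.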
No commits in the last 6 months.
Use this if you are an AI researcher or machine learning engineer looking to build and experiment with multimodal large language models from scratch, specifically focusing on integrating visual understanding into existing LLM architectures.
Not ideal if you are an end-user seeking a pre-built, ready-to-use multimodal AI application or if you are not deeply involved in foundational AI model development.
Stars: 15
Forks: 2
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Aug 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/alexander-moore/vlm"
Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
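As a rough illustration, the same endpoint can also be called from Python. The response schema is not documented on this page, so this sketch simply prints the raw JSON payload.

import requests

resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/llm-tools/alexander-moore/vlm",
    timeout=10,
)
resp.raise_for_status()  # raise if the request was rejected or rate-limited
print(resp.json())       # raw payload; field names are not documented here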
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter multimodal vision-language model (VLM) from scratch in just 1 hour! 🌏
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model