AIDC-AI/Wings
The code repository for "Wings: Learning Multimodal LLMs without Text-only Forgetting" [NeurIPS 2024]
Wings helps AI researchers and developers build advanced Multimodal Large Language Models (MLLMs) that understand both text and images without losing their ability to follow text-only instructions. Starting from an existing LLM and multimodal training data, it produces an MLLM that performs well on both visual question answering and pure text dialogue. It is aimed at AI practitioners building next-generation AI assistants or intelligent systems.
No commits in the last 6 months.
Use this if you are developing multimodal AI models and need to overcome the common problem of "text-only forgetting" where models lose their proficiency in text understanding after being trained on images.
Not ideal if you are looking for an off-the-shelf application or a pre-trained model for direct end-user interaction without further development.
Stars: 26
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Dec 28, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AIDC-AI/Wings"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
Higher-rated alternatives
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle