kyegomez/PALM-E
Implementation of "PaLM-E: An Embodied Multimodal Language Model"
This project offers the foundational architecture for building AI models that understand and respond to both visual information (such as camera images) and text instructions. The model consumes visual observations alongside natural language commands, letting it reason about complex real-world scenarios. Its primary users are robotics engineers, researchers, and AI developers working on embodied AI systems.
335 stars. No commits in the last 6 months.
Use this if you are an AI researcher or robotics engineer looking to develop an embodied multimodal language model capable of understanding and executing tasks based on both visual input and text instructions.
Not ideal if you need a ready-to-use, pre-trained model for immediate inference or a system with a built-in tokenizer, as this project provides the architecture only.
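Because the repo ships the architecture alone (no pre-trained weights, no tokenizer), driving it looks roughly like the sketch below. The PalmE import path, constructor defaults, and forward-pass argument order are assumptions based on the project's conventions, not a confirmed API; check the repository README before relying on them.

import torch

# Assumed import path; confirm against the repository's README.
from palme.model import PalmE

model = PalmE()  # architecture only: randomly initialized, no pre-trained weights

# Placeholder inputs. The project has no built-in tokenizer, so the token
# IDs below are random stand-ins for a real tokenized instruction.
images = torch.randn(1, 3, 256, 256)         # batch of one 256x256 RGB image
tokens = torch.randint(0, 50000, (1, 1024))  # batch of one 1024-token sequence

# Forward pass; the argument order here is an assumption.
logits = model(tokens, images)
print(logits.shape)  # next-token logits over the text vocabulary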
Stars: 335
Forks: 50
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 29, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/PALM-E"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
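To pull the same record from Python instead of curl, a minimal sketch using requests follows; the response is assumed to be a JSON body, and its exact field names depend on the API.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/PALM-E"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces HTTP errors, e.g. hitting the 100 requests/day limit
record = resp.json()     # assumed JSON body; inspect it to learn the field names
print(record)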
Higher-rated alternatives
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle