kyegomez/PALM-E

Implementation of "PaLM-E: An Embodied Multimodal Language Model"

Quality score: 45 / 100 (Emerging)

This project offers the foundational architecture for building AI models that can understand and respond to both visual information (such as camera images) and text instructions. The model ingests visual observations alongside natural language commands, letting it reason about complex real-world scenarios. Its primary users are robotics engineers, researchers, and AI developers working on embodied AI systems.
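Because the repository provides the architecture without pretrained weights, a first run means instantiating the model and driving it with dummy tensors. The sketch below is an assumption-heavy illustration: the PalmE class name, the palme.model import path, the forward signature, and the tensor shapes are guesses based on the description above, not the repository's confirmed interface, so consult its README for the real API.

import torch
from palme.model import PalmE  # assumed import path; verify against the repo

model = PalmE()  # architecture only; no pretrained weights ship with it

images = torch.randn(1, 3, 256, 256)         # dummy camera frame: (batch, channels, height, width)
tokens = torch.randint(0, 32000, (1, 1024))  # dummy token ids: (batch, sequence length)

logits = model(images, tokens)  # assumed forward signature: (images, text tokens)
print(logits.shape)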

335 stars. No commits in the last 6 months.

Use this if you are an AI researcher or robotics engineer looking to develop an embodied multimodal language model capable of understanding and executing tasks based on both visual input and text instructions.

Not ideal if you need a ready-to-use, pre-trained model for immediate inference or a system with a built-in tokenizer, as this project provides the architecture only.
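Since no tokenizer ships with the project, text instructions have to be converted to token ids by an external tokenizer before they reach the model. A minimal sketch using a Hugging Face tokenizer follows; the bert-base-uncased checkpoint is only an illustration, and any subword tokenizer whose vocabulary size matches your model configuration would do.

from transformers import AutoTokenizer

# "bert-base-uncased" is an arbitrary example checkpoint, not a project requirement.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("pick up the red block", return_tensors="pt")
tokens = encoded["input_ids"]  # tensor of token ids, shape (1, sequence length)
print(tokens)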

Tags: robotics, embodied-ai, multi-modal-learning, visual-language-processing, autonomous-systems
Flags: Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25

How are scores calculated? The overall score is the sum of the four 25-point categories above: 0 + 10 + 16 + 19 = 45.

Stars: 335
Forks: 50
Language: Python
License: Apache-2.0
Last pushed: Jan 29, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/PALM-E"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
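For programmatic access, the same endpoint can be queried from a script. Below is a small Python sketch using the requests library; the response is assumed to be JSON, and its field names are not documented here, so the example simply prints the parsed payload for inspection.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/PALM-E"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors
data = resp.json()       # assumed JSON payload; schema undocumented here
print(data)              # inspect the real field names before relying on them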