Gary3410/TaPA
[arXiv 2023] Embodied Task Planning with Large Language Models
This project helps robotics engineers and researchers design step-by-step action plans for embodied robots. Given RGB images of a scene and a human instruction, it generates a sequence of executable actions for navigating the environment or manipulating objects. It is aimed at practitioners building robotic systems that must understand and act on human commands in physical environments.
193 stars. No commits in the last 6 months.
Use this if you need to translate high-level human instructions into concrete, multi-step action plans for a robot operating in a physical space.
Not ideal if your robot only performs simple, pre-programmed tasks without needing to interpret complex human instructions or interact with varied environments.
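Below is a minimal Python sketch of the input/output contract described above: perception yields a list of objects visible in the scene's RGB images, and the planner prompts an LLM to turn a human instruction plus that object list into an ordered action sequence. The function name and prompt wording are illustrative assumptions, not TaPA's actual API.

def build_planning_prompt(instruction: str, scene_objects: list[str]) -> str:
    # Hypothetical helper: pair the human instruction with the objects a
    # detector found in the scene images, so the LLM plans only with things
    # that actually exist in the environment.
    return (
        "You are an embodied task planner.\n"
        f"Objects visible in the scene: {', '.join(scene_objects)}\n"
        f"Instruction: {instruction}\n"
        "Respond with one executable action per line."
    )

print(build_planning_prompt(
    "Make me a cup of coffee",
    ["mug", "coffee machine", "counter", "fridge"],
))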
Stars: 193
Forks: 13
Language: Python
License: —
Category:
Last pushed: Aug 22, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Gary3410/TaPA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
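If you prefer calling the endpoint from Python rather than curl, here is a minimal sketch using the requests library. The URL and rate limits come from this page; the response's JSON fields and the API-key header name are assumptions to verify against the service's documentation.

import requests

# Fetch quality data for Gary3410/TaPA from the endpoint shown above.
resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/transformers/Gary3410/TaPA",
    # A free key raises the limit from 100 to 1,000 requests/day; the
    # header name below is a guess -- check the service documentation.
    # headers={"X-API-Key": "YOUR_KEY"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. stars, forks, last-push date (field names unverified)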
Higher-rated alternatives
TinyLLaVA/TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
zjunlp/EasyInstruct
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
rese1f/MovieChat
[CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
NVlabs/Eagle
Eagle: Frontier Vision-Language Models with Data-Centric Strategies