madibabaiasl/MobileRobotGPT4LLaMA2024
Deployment of Large Language Models to Control Mobile Robots at the Edge
This project helps robotics researchers and developers control mobile robots with natural language voice commands. It translates spoken instructions into actions for a smart robot car, enabling intuitive, hands-free operation. It is designed for those working on human-robot interaction or prototyping robot control systems.
Use this if you want to control a mobile robot with spoken natural language commands, without needing a constant internet connection for the language model.
Not ideal if you need to control complex industrial robots or require extremely high precision and real-time response for safety-critical applications.
Stars: 11
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Jan 31, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/madibabaiasl/MobileRobotGPT4LLaMA2024"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
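The same endpoint can be called from Python instead of curl. This is a minimal sketch using only the standard library; the response is assumed to be JSON, and its exact fields aren't documented here, so the example just fetches and prints whatever comes back. The `quality_url` helper is an illustrative name, not part of the API.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repo (assumes a JSON response body)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("madibabaiasl", "MobileRobotGPT4LLaMA2024")
    print(json.dumps(data, indent=2))
```

If you have an API key for the higher rate limit, it would presumably be passed as a header or query parameter; check the service's docs for the exact mechanism before relying on this sketch.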
Higher-rated alternatives
TinyLLaVA/TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
zjunlp/EasyInstruct
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
rese1f/MovieChat
[CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
NVlabs/Eagle
Eagle: Frontier Vision-Language Models with Data-Centric Strategies