real-stanford/reflect
[CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction
This framework helps robotics engineers understand and fix why their robots fail during tasks. It takes multimodal sensor data (visual, audio, and robot-state observations) together with the robot's plan and produces a detailed summary of the robot's experience. The output is a clear explanation of what went wrong plus a corrected plan, so operators can efficiently get their systems back on track.
103 stars. No commits in the last 6 months.
Use this if you need to diagnose and correct failures in your robotic systems, whether they stem from execution errors or planning mistakes, across a variety of tasks.
Not ideal if you are looking for a pre-trained robot policy, or for general robot safety or efficiency analysis that is not focused on failure correction.
Stars
103
Forks
9
Language
Jupyter Notebook
License
MIT
Last pushed
Mar 12, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/real-stanford/reflect"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
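If you query the API from a script rather than the shell, the same endpoint can be built programmatically. A minimal Python sketch, assuming only the URL pattern shown in the curl example above (the `quality_url` and `fetch_quality` helper names are illustrative, and whether other `owner/repo` pairs follow the same path pattern is an assumption):

```python
import urllib.request

# Base path taken verbatim from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> bytes:
    """Fetch the raw response body for a repo (performs a network call)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return resp.read()

if __name__ == "__main__":
    print(quality_url("real-stanford", "reflect"))
```

Note that `fetch_quality` makes a live HTTP request and counts against the daily rate limit; `quality_url` alone is side-effect free.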
Higher-rated alternatives
xrsrke/toolformer
Implementation of Toolformer: Language Models Can Teach Themselves to Use Tools
MozerWang/AMPO
[ICLR 2026] Adaptive Social Learning via Mode Policy Optimization for Language Agents
nsidn98/LLaMAR
Code for our paper LLaMAR: LM-based Long-Horizon Planner for Multi-Agent Robotics
BatsResearch/planetarium
Dataset and benchmark for assessing LLMs in translating natural language descriptions of...
WayneMao/RoboMatrix
The Official Implementation of RoboMatrix