lil-lab/ciff
Cornell Instruction Following Framework
This framework helps AI researchers and developers working on instruction-following agents. It provides a standardized way to test and compare how well agents follow natural language commands across simulated environments such as block manipulation, 3D navigation, and street-view navigation. Given a natural language instruction, an agent produces actions within the simulator, which the framework uses to evaluate and benchmark performance.
No commits in the last 6 months.
Use this if you are an AI researcher or developer building and evaluating agents that need to understand and act upon human instructions in diverse simulated environments.
Not ideal if you are looking for a ready-to-use instruction-following agent for a real-world application, as this is a research framework for development and evaluation.
Stars
34
Forks
6
Language
Python
License
GPL-3.0
Category
ml-frameworks
Last pushed
Oct 11, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lil-lab/ciff"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
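The curl command above can also be wrapped in a few lines of Python. This is a minimal sketch assuming only what the example shows: the endpoint path is `quality/<category>/<owner>/<repo>` and the response is JSON (the response schema is not documented here, so the parsed object is returned as-is).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL following the curl example above.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Anonymous access is rate-limited to 100 requests/day.
    # The response schema is undocumented, so we return the
    # parsed JSON unchanged for the caller to inspect.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, `fetch_quality("ml-frameworks", "lil-lab", "ciff")` requests the same URL as the curl command.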
Higher-rated alternatives
Marktechpost/AI-Tutorial-Codes-Included
Codes/Notebooks for AI Projects
microsoft/AI-For-Beginners
12 Weeks, 24 Lessons, AI for All!
airbus/scikit-decide
AI framework for Reinforcement Learning, Automated Planning and Scheduling
nearai/program_synthesis
Program Synthesis
papagiannakis/Elements
Project Elements: A computational entity-component-system in a scene-graph pythonic framework,...