lil-lab/ciff

Cornell Instruction Following Framework

Quality score: 38 / 100 (Emerging)

This framework helps AI researchers and developers working on instruction-following agents. It provides a standardized way to test and compare how well agents follow natural language commands across different simulated environments, such as block manipulation, 3D navigation, or street-view navigation. Given a natural language instruction, an agent produces actions within the simulator, which makes it straightforward to evaluate and benchmark agent performance across tasks.

No commits in the last 6 months.

Use this if you are an AI researcher or developer building and evaluating agents that need to understand and act upon human instructions in diverse simulated environments.

Not ideal if you are looking for a ready-to-use instruction-following agent for a real-world application, as this is a research framework for development and evaluation.

Tags: AI Research · Natural Language Understanding · Robotics · Simulation · Agent Development · Machine Learning · Experimentation
Flags: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 15 / 25
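
The four components sum to the overall score: 0 + 7 + 16 + 15 = 38 out of 100.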


Stars: 34
Forks: 6
Language: Python
License: GPL-3.0
Last pushed: Oct 11, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lil-lab/ciff"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
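
If you prefer Python (the repository's own language) over curl, below is a minimal sketch using the requests library against the same endpoint. It assumes the API returns a JSON object; no field names are assumed, since the response schema is not documented here.

import requests

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/lil-lab/ciff"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors (e.g. rate limiting)
data = resp.json()       # assumed to be a JSON object; schema not documented here

# Print whatever the API returned; adapt once you have inspected the schema.
for key, value in data.items():
    print(f"{key}: {value}")

With a free key, the API would typically expect it in a header or query parameter; the exact mechanism is not specified here, so check the API's documentation before adding it.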