YJiangcm/FollowBench

[ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models

Overall score: 45 / 100 (Emerging)

This project evaluates how well large language models (LLMs) follow complex instructions. Given an LLM and a set of instructions carrying constraints of varying types (content, style, format, etc.), it reports how precisely the model satisfied each individual constraint and each instruction overall, presented as easy-to-read metrics. Developers and researchers building or integrating LLMs can use it to rigorously test their models' instruction-following reliability. An illustrative sketch of the input/output shape follows.
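To make that shape concrete, here is a minimal illustrative sketch. The evaluate_constraints helper, the my_model stub, and the constraint names are hypothetical stand-ins, not FollowBench's actual API; see the repository for its real entry points.

# Hypothetical sketch of constraint-level evaluation; not FollowBench's real API.
def my_model(prompt):
    # Stand-in for a real LLM call.
    return "- point one\n- point two\n- point three"

def evaluate_constraints(model_fn, instruction, constraints):
    # Query the model once, then score each constraint independently,
    # so the result is a per-constraint breakdown rather than a single score.
    response = model_fn(instruction)
    return {name: check(response) for name, check in constraints.items()}

# Example: a format constraint requiring exactly three bullet points.
constraints = {
    "format: exactly 3 bullets":
        lambda r: sum(line.startswith("- ") for line in r.splitlines()) == 3,
}
print(evaluate_constraints(my_model, "Summarize the paper in three bullets.", constraints))
# -> {'format: exactly 3 bullets': True}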

119 stars. No commits in the last 6 months.

Use this if you need to systematically and precisely measure how well your large language model adheres to detailed, multi-level instructions and constraints.

Not ideal if you are a casual user of LLMs and simply want to generate creative text without needing to rigorously evaluate constraint adherence.

large-language-models model-evaluation natural-language-processing instruction-following AI-benchmarking
Flags: Stale (6 months) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25

The overall score is the sum of the four subscores: 2 + 10 + 16 + 17 = 45 out of a possible 100.

Stars: 119
Forks: 19
Language: Python
License: Apache-2.0
Last pushed: Jun 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/YJiangcm/FollowBench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
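For programmatic access, a minimal Python sketch using the requests library is shown below. The endpoint is the same URL as the curl example above; the X-API-Key header name is an assumption, since the page does not say how a key should be sent.

import requests

# Quality-score endpoint for this repository (same URL as the curl example).
URL = "https://pt-edge.onrender.com/api/v1/quality/YJiangcm/FollowBench"

# A free key raises the limit from 100 to 1,000 requests/day.
# NOTE: "X-API-Key" is an assumed header name; the page does not document it.
API_KEY = None  # e.g. "your-free-key"

headers = {"X-API-Key": API_KEY} if API_KEY else {}
resp = requests.get(URL, headers=headers, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or a bad path
print(resp.json())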