X-PLUG/WritingBench
WritingBench: A Comprehensive Benchmark for Generative Writing
This project evaluates how well large language models (LLMs) write across real-world scenarios. Given a model's generated text for a specific prompt, it assesses the text against detailed criteria and produces a score. Anyone who develops, researches, or uses generative AI for content creation, from marketers to academics, can use it to gauge an LLM's writing proficiency.
Use this if you need to objectively measure and compare the quality of text generated by different AI models across diverse writing tasks and domains.
Not ideal if you're looking for a tool to help you personally write better, as this is for evaluating AI writing models, not assisting human writers.
Stars: 163
Forks: 17
Language: Python
License: Apache-2.0
Category:
Last pushed: Dec 19, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/X-PLUG/WritingBench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
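If you'd rather fetch this from a script, here is a minimal Python sketch using only the standard library. It assumes the endpoint returns a JSON body; the exact response schema isn't documented here, so the example just pretty-prints whatever comes back.

import json
import urllib.request

# Public endpoint shown above; no API key is needed at the free tier
# (100 requests/day).
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/X-PLUG/WritingBench"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumption: the endpoint returns JSON

# The response schema isn't documented here, so just pretty-print it.
print(json.dumps(data, indent=2))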
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems