lechmazur/writing_styles
Documents the style side of the short-story Creative Writing LLM benchmark: many short stories were generated with a range of LLMs, then analyzed for stylistic fingerprints and within-model diversity. The study focuses on how models write, how their outputs differ from one another, and how varied each model is across its own stories.
This project helps content creators, marketers, and writers understand the stylistic range and tendencies of different large language models (LLMs). By analyzing the generated stories, it shows what kind of writing style each model typically produces and how diverse its outputs are, so you can select an LLM that best fits your creative writing or content-generation needs.
Use this if you are a creative writer, content strategist, or marketer who works with LLMs and needs to understand each model's inherent stylistic "voice" and how much variety to expect from its outputs without heavy prompting.
Not ideal if you are looking for a benchmark of factual accuracy, reasoning ability, or code generation quality from LLMs, as this focuses exclusively on creative writing style.
Stars
22
Forks
2
Language
—
License
—
Category
—
Last pushed
Dec 18, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lechmazur/writing_styles"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
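For programmatic access, the curl call above can be reproduced with Python's standard library. A minimal sketch, assuming only the endpoint URL shown above (the response schema is not documented here, so the helper simply returns the parsed JSON):

```python
import json
import urllib.request

# Endpoint taken verbatim from the listing above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lechmazur/writing_styles"


def fetch_tool_data(url: str = API_URL) -> dict:
    """Fetch this tool's quality data as a dict.

    The free tier allows 100 requests/day with no key; how an API key
    is attached (header vs. query parameter) is not documented here,
    so this sketch makes unauthenticated requests only.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Calling `fetch_tool_data()` returns the same JSON payload as the curl command; pretty-print it with `json.dumps(data, indent=2)` to inspect the available fields.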
Higher-rated alternatives
NVIDIA-NeMo/Curator
Scalable data pre-processing and curation toolkit for LLMs
MigoXLab/dingo
Dingo: A Comprehensive AI Data, Model and Application Quality Evaluation Tool
data-prep-kit/data-prep-kit
Open source project for data preparation for GenAI applications
TheDataStation/pneuma
LLM-Powered Data Discovery System for Tabular Data
cleanlab/cleanlab-studio
Client interface to Cleanlab Studio