chtmp223/suri
Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24)
This project offers a comprehensive dataset and fine-tuning methods for training large language models to generate very long texts (2,000-5,000 words) while adhering to multiple specific instructions. It helps AI researchers and practitioners develop models that produce controlled, extended written content, taking multi-constraint prompts as input and yielding detailed, long-form text as output.
No commits in the last 6 months.
Use this if you are an AI researcher or machine learning engineer looking to train or fine-tune large language models for generating very long-form text that strictly follows complex, multi-part instructions.
Not ideal if you are looking for an off-the-shelf application to generate short, simple texts or if you don't have experience with model training and fine-tuning.
Stars: 27
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Oct 03, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/chtmp223/suri"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
uds-lsv/bert-stable-fine-tuning
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
MeryylleA/lunariscodex
A high-performance PyTorch toolkit for pre-training modern, Llama-style language models. Based...
VanekPetr/flan-t5-text-classifier
Fine-tuning of the Flan-T5 LLM for text classification 🤖 focuses on adapting a state-of-the-art...
kingTLE/literary-alpaca2
From vocabulary building to fine-tuning: this is all you need
YuweiYin/HLT-MT
[IJCAI-ECAI 2022] HLT-MT: High-resource Language-specific Training for Multilingual Neural...