ZhuLinsen/FastDatasets
A powerful tool for creating high-quality training datasets for Large Language Models (LLMs) (a tool for quickly generating high-quality LLM fine-tuning datasets)
This tool helps AI engineers and researchers quickly create high-quality training datasets for Large Language Models (LLMs). You provide documents like PDFs, Word files, or plain text, and it automatically generates relevant questions and answers in formats like Alpaca or ShareGPT, ready for fine-tuning your LLM. It also offers features for knowledge distillation and data quality optimization.
183 stars. No commits in the last 6 months.
Use this if you need to efficiently transform your existing textual content into structured question-answer pairs to train or fine-tune a custom Large Language Model.
Not ideal if you are looking for a tool to train an LLM from scratch or for general-purpose data annotation outside of LLM fine-tuning.
Stars: 183
Forks: 28
Language: Python
License: Apache-2.0
Category:
Last pushed: Aug 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ZhuLinsen/FastDatasets"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
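The same endpoint can be called from code. The URL pattern and free-tier limits above are taken from the listing; everything else here is an assumption — in particular, the response format (JSON is typical for such APIs, but unverified) and the `quality_url` helper name are illustrative only.

```python
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(repo: str) -> str:
    """Build the quality-API URL for an 'owner/name' repo slug."""
    return f"{BASE}/{repo}"

def fetch_quality(repo: str) -> bytes:
    """Fetch the raw API response (requires network access;
    subject to the 100 requests/day keyless limit)."""
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return resp.read()

print(quality_url("ZhuLinsen/FastDatasets"))
```

Running this prints the same URL used in the curl example; `fetch_quality` is only a sketch of the keyless call, since the authenticated (1,000/day) variant is not documented here.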
Higher-rated alternatives
InternScience/GraphGen
GraphGen: Enhancing Supervised Fine-Tuning for LLMs with Knowledge-Driven Synthetic Data Generation
timothepearce/synda
A CLI for generating synthetic data
rasinmuhammed/misata
High-performance open-source synthetic data engine. Uses LLMs for schema design and vectorized...
ziegler-ingo/CRAFT
[TACL, EMNLP 2025 Oral] Code, datasets, and checkpoints for the paper "CRAFT Your Dataset:...
BatsResearch/bonito
A lightweight library for generating synthetic instruction tuning datasets for your data without GPT.