InternScience/GraphGen
GraphGen: Enhancing Supervised Fine-Tuning for LLMs with Knowledge-Driven Synthetic Data Generation
GraphGen helps large language models (LLMs) learn more effectively by creating high-quality, targeted practice questions and answers. It takes existing text or documents, identifies knowledge gaps in the LLM, and generates new, diverse question-answer pairs to fill those gaps. This is useful for anyone looking to fine-tune an LLM to perform better on specific subjects or tasks.
978 stars. Actively maintained with 2 commits in the last 30 days.
Use this if you need to create specialized training data to improve an LLM's understanding and performance in a particular domain, especially when original training data is scarce or needs augmentation.
Not ideal if you don't work with large language models, or if you already have ample, high-quality, domain-specific question-answer data readily available for your fine-tuning needs.
Stars: 978
Forks: 79
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 11, 2026
Commits (30d): 2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternScience/GraphGen"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
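For programmatic access, the curl command above can be wrapped in a small Python helper. This is a minimal sketch assuming the endpoint returns JSON; the response field names are not documented on this page, so the code only builds the URL and returns the parsed payload as a plain dict rather than assuming a schema.

```python
import json
from urllib.request import urlopen

# Base endpoint as shown in the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def tool_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_tool(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict.

    Requires network access; subject to the 100 requests/day
    unauthenticated limit mentioned above. The JSON structure is
    an assumption, so callers should inspect the returned dict.
    """
    with urlopen(tool_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(tool_url("InternScience", "GraphGen"))
```

Running the script prints the same URL used in the curl example; `fetch_tool` is only useful with live network access.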
Related tools
timothepearce/synda
A CLI for generating synthetic data
rasinmuhammed/misata
High-performance open-source synthetic data engine. Uses LLMs for schema design and vectorized...
ziegler-ingo/CRAFT
[TACL, EMNLP 2025 Oral] Code, datasets, and checkpoints for the paper "CRAFT Your Dataset:...
ZhuLinsen/FastDatasets
A powerful tool for creating high-quality training datasets for Large Language Models...
BatsResearch/bonito
A lightweight library for generating synthetic instruction tuning datasets for your data without GPT.