InternScience/GraphGen

GraphGen: Enhancing Supervised Fine-Tuning for LLMs with Knowledge-Driven Synthetic Data Generation

Score: 56 / 100 (Established)

GraphGen helps large language models (LLMs) learn more effectively by creating high-quality, targeted practice questions and answers. It takes existing text or documents, identifies knowledge gaps in the LLM, and generates new, diverse question-answer pairs to fill those gaps. This is useful for anyone looking to fine-tune an LLM to perform better on specific subjects or tasks.

978 stars. Actively maintained with 2 commits in the last 30 days.

Use this if you need to create specialized training data to improve an LLM's understanding and performance in a particular domain, especially when original training data is scarce or needs augmentation.

Not ideal if you don't work with large language models, or if you already have ample, high-quality, domain-specific question-answer data readily available for your fine-tuning needs.

Tags: LLM fine-tuning, synthetic data generation, AI training, knowledge graph, educational content creation
No package published. No dependents.
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 978
Forks: 79
Language: Python
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternScience/GraphGen"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
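The same data can be fetched programmatically. A minimal Python sketch using only the standard library; the endpoint URL comes from the curl example above, but the JSON field names ("score", "stars", "language") are assumptions about the response shape and should be checked against an actual response:

```python
import json
from urllib.request import urlopen

# Endpoint from the curl example above; no API key needed up to 100 requests/day.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/InternScience/GraphGen"

def summarize(data: dict) -> str:
    """Build a one-line summary from a quality-API response dict.

    The field names used here are assumed, not documented; adjust them
    to match the real JSON payload.
    """
    score = data.get("score", "?")
    stars = data.get("stars", "?")
    language = data.get("language", "?")
    return f"{score}/100, {stars} stars ({language})"

def fetch_summary(url: str = API_URL) -> str:
    """Fetch the quality data over HTTP and summarize it."""
    with urlopen(url) as resp:
        data = json.load(resp)
    return summarize(data)

# Offline example with a mock response, so no network call is required:
print(summarize({"score": 56, "stars": 978, "language": "Python"}))
```

Separating the parsing (`summarize`) from the network call (`fetch_summary`) keeps the formatting logic testable without hitting the rate-limited endpoint.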