zjukg/OntoTune

[Paper][WWW2025] OntoTune: Ontology-Driven Self-training for Aligning Large Language Models

Score: 21/100 (Experimental)

OntoTune helps developers fine-tune large language models (LLMs) to follow specific knowledge structures. It takes an existing LLM and an ontology (a formal representation of knowledge) as input, and produces a refined LLM whose responses are more consistent with that ontology. This is useful for AI engineers or data scientists building specialized AI applications.

No commits in the last 6 months.

Use this if you need an LLM to generate responses that strictly adhere to a predefined knowledge graph or domain-specific terminology.

Not ideal if you are a non-developer seeking an out-of-the-box solution for general-purpose LLM improvements without custom training.

Tags: LLM fine-tuning, ontology alignment, knowledge graph integration, specialized AI development, semantic AI
No License · Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 4 / 25


Stars: 25
Forks: 1
Language: Python
License: None
Last pushed: Jul 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjukg/OntoTune"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.