zjukg/OntoTune
[Paper][WWW2025] OntoTune: Ontology-Driven Self-training for Aligning Large Language Models
OntoTune helps developers fine-tune large language models (LLMs) to follow specific knowledge structures. It takes an existing LLM and an ontology (a formal representation of domain knowledge) as input, and produces a refined LLM whose responses are more consistent with that ontology. This is useful for AI engineers or data scientists building specialized AI applications.
No commits in the last 6 months.
Use this if you need an LLM to generate responses that strictly adhere to a predefined knowledge graph or domain-specific terminology.
Not ideal if you are a non-developer seeking an out-of-the-box solution for general-purpose LLM improvements without custom training.
Stars: 25
Forks: 1
Language: Python
License: —
Category:
Last pushed: Jul 21, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjukg/OntoTune"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
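The same endpoint can be called programmatically. A minimal Python sketch is below; the URL pattern (`ecosystem/owner/repo`) is taken from the curl example above, but the response schema and the `Authorization` header name for keyed access are assumptions, not documented behavior.

```python
# Minimal sketch of querying the quality API shown above.
# The response fields and the auth header name are assumptions.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository, e.g. transformers/zjukg/OntoTune."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str, api_key=None) -> dict:
    """Fetch the quality record, assuming a JSON response body."""
    req = urllib.request.Request(quality_url(ecosystem, owner, repo))
    if api_key:
        # Hypothetical header; check the API's key instructions for the real one.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("transformers", "zjukg", "OntoTune"))
```

Keeping URL construction in its own function makes it easy to query several repositories (for example, the alternatives listed below) in a loop without repeating the base path.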
Higher-rated alternatives
RManLuo/reasoning-on-graphs
Official Implementation of ICLR 2024 paper: "Reasoning on Graphs: Faithful and Interpretable...
alibaba/GraphTranslator
GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks
HKUDS/OpenGraph
[EMNLP'2024] "OpenGraph: Towards Open Graph Foundation Models"
HKUDS/GraphEdit
"GraphEdit: Large Language Models for Graph Structure Learning"
iMoonLab/LLM4Hypergraph
The source code of ICLR 2025 "Beyond Graphs: Can Large Language Models Comprehend Hypergraphs?"