fblgit/tree-of-knowledge
ToK, aka Tree of Knowledge, for large language models (LLMs): a novel dataset that encourages symbolic knowledge correlation through simple input and output prompts.
This project offers a specialized dataset designed to improve how large language models (LLMs) understand and connect ideas. It translates natural language prompts into a condensed, symbolic format that highlights relationships between core concepts. It is useful for anyone working with LLMs who needs them to reason more effectively, extract knowledge, or fine-tune their understanding of specific domains.
No commits in the last 6 months.
Use this if you need to train or fine-tune an LLM to perform better at logical reasoning, extract precise knowledge, or understand complex relationships within text.
Not ideal if your primary goal is general conversational AI or if you're not working directly with large language models.
Stars
56
Forks
2
Language
—
License
GPL-3.0
Category
—
Last pushed
Jun 19, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/fblgit/tree-of-knowledge"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
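The curl command above can also be wrapped in a small client. The sketch below is a minimal example, assuming only what the listing shows: the per-repository URL pattern and that the endpoint returns JSON. The function names (`build_url`, `fetch_quality`) are illustrative, not part of the service.

```python
# Minimal sketch of a client for the quality API shown above.
# Assumptions: the endpoint returns JSON; no key is needed within
# the free 100 requests/day tier. Check the service docs before use.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record for a repository."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(build_url("fblgit", "tree-of-knowledge"))
```

Only stdlib is used, so the snippet runs without installing anything; swap in `requests` if you already depend on it.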
Higher-rated alternatives
open-thought/reasoning-gym
[NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards
Hmbown/Hegelion
Dialectical reasoning architecture for LLMs (Thesis → Antithesis → Synthesis)
LLM360/Reasoning360
A repo for open research on building large reasoning models
TsinghuaC3I/Awesome-RL-for-LRMs
A Survey of Reinforcement Learning for Large Reasoning Models
bowang-lab/BioReason
BioReason: Incentivizing Multimodal Biological Reasoning within a DNA-LLM Model | NeurIPS '25