PeterGriffinJin/Patton

Patton: Language Model Pretraining on Text-rich Networks (ACL 2023 main oral)

Score: 29 / 100 · Experimental

This is a framework for researchers and practitioners working with complex datasets where entities (like research papers, products, or users) are connected and also have associated text. It takes these 'text-rich networks' as input and produces specialized language models. These models can then be used to perform tasks like classifying entities, finding relevant information, or predicting new connections.

No commits in the last 6 months.

Use this if you need to pretrain a language model to better understand and leverage both the textual content and the relationships within your networked data, such as scientific citation networks or product recommendation graphs.

Not ideal if your data lacks explicit network structures or if your primary goal is to train a general-purpose language model without considering inter-entity relationships.

network-analysis information-retrieval recommendation-systems knowledge-graph-analysis natural-language-processing
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 6 / 25

How are scores calculated?

Stars: 32
Forks: 2
Language: Python
License: Apache-2.0
Last pushed: Feb 10, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PeterGriffinJin/Patton"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
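The same endpoint can be called from Python using only the standard library. A minimal sketch, assuming the endpoint returns a JSON body; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the response's field names are not documented here, so the result is handled as an opaque dict:

```python
import json
from urllib.request import urlopen

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub repository."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality report and parse it as JSON (no API key needed)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl command, for this repository.
    report = fetch_quality("PeterGriffinJin", "Patton")
    print(report)
```

Note that unauthenticated requests count against the 100/day limit, so cache the response rather than polling.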