alexandra-chron/hierarchical-domain-adaptation

Code for the NAACL 2022 paper "Efficient Hierarchical Domain Adaptation for Pretrained Language Models".

Score: 24 / 100 (Experimental)

This tool helps machine learning engineers and researchers fine-tune large language models like GPT-2 more effectively when working with data from many different specialized domains. You provide various domain-specific text datasets, and the tool outputs a more efficient, adapted language model that performs better across these diverse domains, even on new, unseen ones. This is for users who manage and deploy large language models in specialized applications.
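The core idea behind the approach (small adapter modules attached to a frozen pretrained model, one per node of a domain tree, with outputs mixed along the path from the root to a leaf domain) can be illustrated with a minimal sketch. This is a toy illustration under stated assumptions, not the repository's API: the class, the bottleneck shape, and the scalar mixing logits are all invented here for clarity.

```python
import numpy as np


def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()


class HierarchicalAdapters:
    """Toy sketch: one bottleneck adapter per node of a domain tree.

    A hidden state is passed through every adapter on the path from the
    root to a leaf domain, and the outputs are mixed with learned weights.
    (Illustrative only; names and shapes are assumptions, not the paper's code.)
    """

    def __init__(self, tree, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.tree = tree  # child -> parent map; the root maps to None
        # One down/up projection pair (bottleneck adapter) per node.
        self.adapters = {
            node: (rng.normal(scale=0.02, size=(dim, dim // 4)),
                   rng.normal(scale=0.02, size=(dim // 4, dim)))
            for node in tree
        }
        # One scalar mixing logit per node (uniform mixing at init).
        self.logits = {node: 0.0 for node in tree}

    def path(self, leaf):
        nodes = []
        while leaf is not None:
            nodes.append(leaf)
            leaf = self.tree[leaf]
        return nodes[::-1]  # root first

    def __call__(self, h, leaf):
        nodes = self.path(leaf)
        w = softmax(np.array([self.logits[n] for n in nodes]))
        out = np.zeros_like(h)
        for weight, n in zip(w, nodes):
            down, up = self.adapters[n]
            out += weight * (np.maximum(h @ down, 0.0) @ up)  # ReLU bottleneck
        return h + out  # residual connection, as in standard adapters


# Example domain hierarchy: root -> medical -> pubmed.
tree = {"root": None, "medical": "root", "pubmed": "medical"}
adapters = HierarchicalAdapters(tree, dim=8)
h = np.ones(8)
out = adapters(h, "pubmed")
```

Because only the small adapters and mixing weights would be trained, the frozen base model is shared across all domains; an unseen domain can reuse the adapters of its nearest ancestors in the tree.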

No commits in the last 6 months.

Use this if you need to adapt a large language model to perform well across multiple distinct text domains (e.g., legal, medical, financial) and want to do so more efficiently than traditional methods.

Not ideal if you are working with a single text domain or do not have a strong need for fine-tuning large language models across diverse, hierarchical datasets.

natural-language-processing large-language-models domain-adaptation machine-learning-engineering text-analytics
No License · Stale (6 m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 9 / 25


Stars: 32
Forks: 3
Language: Python
License: None
Last pushed: Sep 26, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/alexandra-chron/hierarchical-domain-adaptation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
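The same data can be consumed from Python. The response schema is not documented on this page, so the field names below (`repo`, `scores`) are assumptions; the numbers mirror the sub-scores shown above. A minimal sketch that parses one such hypothetical payload:

```python
import json

# Hypothetical response shape; the field names are assumptions, not the
# documented schema. The numbers mirror the scores shown on this page.
sample = json.loads("""
{"repo": "alexandra-chron/hierarchical-domain-adaptation",
 "scores": {"maintenance": 0, "adoption": 7, "maturity": 8, "community": 9}}
""")


def summarize(payload):
    """Format the per-category quality scores into one short line."""
    parts = [f"{k}: {v}/25" for k, v in payload["scores"].items()]
    return payload["repo"] + " | " + ", ".join(parts)


print(summarize(sample))
```

In a real client the `sample` string would be replaced by the body returned from the `curl` URL above (e.g. via `urllib.request.urlopen`).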