ROIM1998/APT

[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference

Quality score: 33 / 100 · Emerging

This project helps machine learning engineers and researchers optimize large language models (LLMs) for specific tasks. It takes a pre-trained language model and a fine-tuning dataset as input, then outputs a more compact and efficient model that performs well on the target task. This is ideal for those who need to deploy LLMs with limited computational resources.
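
As a rough illustration of the workflow the project automates, the minimal sketch below prunes a pretrained model's weights and then fine-tunes the smaller model on a target task. This is not APT's actual API (APT decides where and how much to prune adaptively during fine-tuning); the model name, the fixed 30% pruning ratio, and the toy training step are illustrative assumptions.

# Minimal prune-then-tune sketch; NOT APT's API. Model, ratio, and data
# are illustrative stand-ins for what the project handles adaptively.
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: any HF model works as a stand-in
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# One-shot magnitude pruning of every linear layer. APT instead adapts the
# pruning decisions during fine-tuning; the fixed 30% here is illustrative.
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Fine-tune the pruned model on the target task (a single toy step shown).
batch = tokenizer(["a toy example"], return_tensors="pt")
labels = torch.tensor([1])
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()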

No commits in the last 6 months.

Use this if you are a machine learning practitioner who needs to reduce the computational cost of fine-tuning and deploying large language models without sacrificing task performance.

Not ideal if you are a casual user looking for an out-of-the-box, no-code solution for general text generation or analysis.

large-language-models model-optimization natural-language-processing resource-constrained-ai machine-learning-engineering
Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 9 / 25

Stars: 47
Forks: 4
Language: Python
License: MIT
Last pushed: Jun 04, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ROIM1998/APT"

Open to everyone: 100 requests/day with no key needed. Register for a free key to raise the limit to 1,000 requests/day.
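
For example, to fetch the same report from Python (a minimal sketch: the JSON response shape and the "X-API-Key" header name are assumptions, not documented here):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/ROIM1998/APT"
headers = {}  # assumption: e.g. {"X-API-Key": "<your-key>"} once registered
resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())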