kriskrisliu/PAT

[AAAI 2025] PAT: Pruning-Aware Tuning for Large Language Models

Score: 18 / 100 · Experimental

This project offers a method for making Large Language Models (LLMs) such as Llama2 and Gemma more efficient with little performance loss. It takes an existing LLM and training data and outputs a smaller, faster model that is easier to deploy. It is aimed at machine learning engineers and researchers who develop and deploy custom LLMs.
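
The core idea is to prune during tuning rather than after it. Below is a minimal, self-contained PyTorch sketch of that general pattern, assuming a toy reconstruction task; MaskedLinear, the sigmoid gate, and sparsity_lambda are illustrative choices, not the repository's actual modules, which operate on full models such as Llama2 and Gemma.

    # Sketch of pruning-aware tuning: learn a soft per-channel gate alongside
    # the weights, then physically remove channels whose gate closed. Names
    # here (MaskedLinear, sparsity_lambda) are hypothetical, not from the repo.
    import torch
    import torch.nn as nn

    class MaskedLinear(nn.Module):
        def __init__(self, in_features, out_features):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features)
            # One trainable gate per output channel, kept in (0, 1) by sigmoid.
            self.gate = nn.Parameter(torch.zeros(out_features))

        def forward(self, x):
            return self.linear(x) * torch.sigmoid(self.gate)

        def prune(self, threshold=0.25):
            # Build a smaller Linear containing only the surviving channels.
            keep = torch.sigmoid(self.gate) >= threshold
            pruned = nn.Linear(self.linear.in_features, int(keep.sum()))
            with torch.no_grad():
                pruned.weight.copy_(self.linear.weight[keep])
                pruned.bias.copy_(self.linear.bias[keep])
            return pruned

    layer = MaskedLinear(64, 64)
    opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
    sparsity_lambda = 1e-2  # pressure pushing gates toward zero

    for _ in range(200):
        x = torch.randn(32, 64)
        task_loss = (layer(x) - x).pow(2).mean()      # stand-in tuning objective
        gate_loss = torch.sigmoid(layer.gate).mean()  # sparsity regularizer
        loss = task_loss + sparsity_lambda * gate_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    smaller = layer.prune()
    print(f"kept {smaller.out_features} of 64 output channels")

The point of the pattern is that the network can shift capacity away from channels scheduled for removal while it still has gradient signal, which is the intuition behind "pruning-aware" tuning.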

No commits in the last 6 months.

Use this if you need to reduce the size and computational cost of a Large Language Model for easier deployment while maintaining its core capabilities.

Not ideal if you are looking for a pre-trained, ready-to-use LLM or if you do not have the technical expertise to fine-tune and prune models.

Tags: Large Language Models · LLM Optimization · Model Pruning · Fine-tuning · Machine Learning · Deployment
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 3 / 25
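
The four categories sum to the overall score: 0 + 7 + 8 + 3 = 18 / 100.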

Stars: 36
Forks: 1
Language: Python
License: None
Last pushed: Feb 01, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kriskrisliu/PAT"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
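
The same endpoint can be called from Python; here is a minimal sketch using the requests library, assuming the endpoint returns JSON (the response schema is not documented on this page).

    import requests

    url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kriskrisliu/PAT"
    resp = requests.get(url, timeout=10)  # public tier: 100 requests/day
    resp.raise_for_status()               # surface HTTP errors early
    print(resp.json())                    # assumed JSON body; schema unverified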