ShiZhengyan/DePT

[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"

Overall score: 40/100 (Emerging)

This project helps machine learning engineers efficiently fine-tune large language models (LLMs) for natural language processing and vision-language tasks. It takes an existing LLM and task-specific datasets as input and outputs a fine-tuned model that performs better on the target task, with significantly lower memory and time costs than standard methods. It is aimed at AI/ML practitioners who need to adapt LLMs to new datasets without incurring large computational expense.
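The paper's core idea, in brief: replace one long soft prompt with a shorter soft prompt plus a low-rank update to the frozen input embeddings, which shortens the sequence the model must process. Below is a minimal, hypothetical PyTorch sketch of that decomposition; the class name, parameter names, and shapes are illustrative and are not the repository's actual API.

import torch
import torch.nn as nn

class DecomposedPrompt(nn.Module):
    """Illustrative decomposition: short soft prompt + low-rank embedding update."""

    def __init__(self, max_seq_len: int, hidden: int, prompt_len: int = 20, rank: int = 8):
        super().__init__()
        # (a) a short trainable soft prompt, prepended to the input
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        # (b) low-rank factors whose product updates the frozen embeddings
        self.lora_a = nn.Parameter(torch.randn(max_seq_len, rank) * 0.02)
        self.lora_b = nn.Parameter(torch.zeros(rank, hidden))

    def forward(self, frozen_embeds: torch.Tensor) -> torch.Tensor:
        # frozen_embeds: (batch, max_seq_len, hidden) from the frozen embedding layer
        updated = frozen_embeds + self.lora_a @ self.lora_b  # broadcasts over batch
        prompt = self.prompt.expand(frozen_embeds.size(0), -1, -1)
        return torch.cat([prompt, updated], dim=1)  # prepend the short prompt

Only these small tensors are trained while the LLM stays frozen; the paper also optimizes the soft prompt and the low-rank factors with separate learning rates.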

102 stars. No commits in the last 6 months.

Use this if you need to adapt large language models for specific NLP or vision-language tasks efficiently, especially when computational resources or inference speed are a concern.

Not ideal if you prefer a simpler fine-tuning approach without managing additional hyperparameters, or if your tasks are not resource-intensive enough to warrant optimizing for memory and time.

natural-language-processing large-language-models machine-learning-operations model-fine-tuning computational-efficiency
Stale (6m) · No Package · No Dependents
Maintenance: 0/25
Adoption: 9/25
Maturity: 16/25
Community: 15/25
(The four subscores sum to the 40/100 overall score.)


Stars: 102
Forks: 15
Language: Python
License: MIT
Last pushed: Apr 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ShiZhengyan/DePT"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
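If you prefer to fetch the same report from Python, here is a minimal sketch using only the standard library. The response schema is not documented here, so it simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/ShiZhengyan/DePT"

# Fetch the quality report and pretty-print the raw JSON payload.
with urllib.request.urlopen(URL) as resp:
    payload = json.load(resp)

print(json.dumps(payload, indent=2))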