arazd/ResidualPrompts

Residual Prompt Tuning: a method for faster and better prompt tuning.

Score: 38 / 100 (Emerging)

This project offers a refined approach to prompt tuning, a technique used to adapt large language models (LLMs) for specific tasks without retraining the entire model. It takes an existing LLM and a set of task-specific data, and outputs a more accurate and stable 'soft prompt' that improves the LLM's performance for that task. This is for machine learning engineers or researchers who are fine-tuning LLMs for various natural language processing applications.
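To make the idea concrete, here is a minimal sketch of the residual reparameterization behind the method: instead of optimizing the soft prompt embeddings directly, each prompt token is passed through a shallow bottleneck network with a skip (residual) connection. All names and shapes below are illustrative assumptions, and the ReLU bottleneck stands in for whatever MLP the repository actually uses.

```python
import numpy as np

def residual_reparameterize(prompt, w1, b1, w2, b2):
    """Map trainable prompt embeddings to the soft prompt that is
    prepended to the model input: shallow MLP + skip connection.
    (Hypothetical parameter names; illustrative ReLU bottleneck.)"""
    hidden = np.maximum(prompt @ w1 + b1, 0.0)  # down-project + ReLU
    return prompt + hidden @ w2 + b2            # residual connection

# Tiny example: 5 prompt tokens, embedding dim 8, bottleneck dim 4.
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 8))                 # trainable prompt embeddings
w1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 8)), np.zeros(8)

soft_prompt = residual_reparameterize(P, w1, b1, w2, b2)
print(soft_prompt.shape)  # (5, 8): same shape as P, ready to prepend
```

Because the skip connection lets the network fall back to the identity mapping, training tends to be more stable than tuning the raw embeddings alone, which is the "faster and better" claim the project makes.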

No commits in the last 6 months.

Use this if you are a machine learning practitioner looking to improve the efficiency and performance of adapting large language models for specific downstream tasks.

Not ideal if you are not working directly with prompt tuning methods for large language models, or if you prefer full model fine-tuning over prompt-based approaches.

natural-language-processing large-language-models model-adaptation prompt-engineering machine-learning-research
Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 57
Forks: 8
Language: Python
License: Apache-2.0
Last pushed: May 10, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/arazd/ResidualPrompts"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
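The same request can be issued from Python with the standard library. The endpoint URL is taken from the curl example above; the JSON field names in the response are not documented here, so inspect the returned object rather than assuming a schema.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, timeout=10):
    """GET the quality report and decode it as JSON.
    No API key is required for up to 100 requests/day."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

print(quality_url("arazd", "ResidualPrompts"))
```

Usage: `report = fetch_quality("arazd", "ResidualPrompts")`, then inspect `report.keys()` to see which score fields the API exposes.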