jyjohnchoi/SMoP

This repository contains the code for the EMNLP 2023 paper "SMoP: Towards Efficient and Effective Prompt Tuning with Sparse Mixture-of-Prompts" by Joon-Young Choi, Junho Kim, Jun-Hyung Park, Wing-Lam Mok, and SangKeun Lee.

Score: 28 / 100 (Experimental)

This project helps machine learning engineers and researchers adapt large language models (LLMs) to specific tasks. Given an LLM and a dataset, it applies Sparse Mixture-of-Prompts (SMoP), which trains several short soft prompts together with a sparse router while keeping the base model frozen, so each input is served by the prompt best suited to it. The output is a lightweight set of tuned prompts that adapt the frozen model to the target task.
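
The paper's central mechanism is easy to illustrate. Below is a minimal sketch of top-1 routing over soft prompts in the spirit of SMoP, assuming a frozen transformer that consumes input embeddings; the class and parameter names (SparseMixtureOfPrompts, num_prompts, prompt_len) are illustrative and do not reflect the repository's actual API.

import torch
import torch.nn as nn

class SparseMixtureOfPrompts(nn.Module):
    """Route each input to one of several short soft prompts (top-1 gating)."""
    def __init__(self, num_prompts=4, prompt_len=5, d_model=768):
        super().__init__()
        # Learnable soft prompts: (num_prompts, prompt_len, d_model)
        self.prompts = nn.Parameter(torch.randn(num_prompts, prompt_len, d_model) * 0.02)
        # Linear router scores the pooled input against each prompt
        self.router = nn.Linear(d_model, num_prompts)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, d_model)
        pooled = input_embeds.mean(dim=1)             # (batch, d_model)
        gate = torch.softmax(self.router(pooled), dim=-1)
        top_score, top_idx = gate.max(dim=-1)         # top-1 routing per instance
        chosen = self.prompts[top_idx]                # (batch, prompt_len, d_model)
        # Scale by the gate score so the router still receives gradients
        chosen = chosen * top_score.view(-1, 1, 1)
        # Prepend the routed prompt to the input embeddings
        return torch.cat([chosen, input_embeds], dim=1)

smop = SparseMixtureOfPrompts()
x = torch.randn(2, 16, 768)   # a toy batch of input embeddings
print(smop(x).shape)          # torch.Size([2, 21, 768])

Only the prompts and the router are trained; the base model's parameters stay frozen, which is what keeps the approach cheap relative to full fine-tuning.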

No commits in the last 6 months.

Use this if you need to improve the performance of a large language model on a particular natural language understanding task without the computational cost of full model fine-tuning.

Not ideal if you are a practitioner looking for a ready-to-use LLM for a common task, as this tool requires familiarity with machine learning training pipelines and custom dataset preparation.

Topics: natural-language-processing, large-language-models, prompt-engineering, model-optimization, machine-learning-research
Flags: No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 15 / 25

Stars: 9
Forks: 5
Language: Python
License: None
Last pushed: Nov 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/jyjohnchoi/SMoP"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
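
For programmatic access, here is a hypothetical Python equivalent of the curl call above, assuming the endpoint returns JSON (the requests library is used purely for illustration):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/jyjohnchoi/SMoP"
resp = requests.get(url, timeout=10)
resp.raise_for_status()   # surface HTTP errors (e.g. rate limiting) early
print(resp.json())        # the quality report as a Python dict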