LaMP-Benchmark/LaMP
Code for papers on Large Language Model Personalization (LaMP)
This project helps researchers and developers create and evaluate language models that can generate personalized content for individual users. It provides a benchmark with various tasks, such as personalized email subject generation, allowing models to take user profiles as input and produce tailored outputs. The end-users are typically AI/ML researchers, data scientists, or NLP practitioners working on personalization systems.
188 stars. No commits in the last 6 months.
Use this if you are developing or researching large language models and need a standardized way to evaluate their ability to generate personalized text or classifications.
Not ideal if you are looking for an out-of-the-box, pre-trained personalized language model for immediate deployment in a business application.
Stars: 188
Forks: 11
Language: Python
License: —
Category:
Last pushed: Feb 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/LaMP-Benchmark/LaMP"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
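The same endpoint can be queried programmatically. Below is a minimal Python sketch using only the standard library; the helper names (`quality_url`, `fetch_quality`) are illustrative, and the structure of the returned JSON payload is an assumption — inspect a real response before relying on specific fields.

```python
# Minimal sketch of calling the quality endpoint shown in the curl example.
# Helper names are illustrative; the JSON schema of the response is an
# assumption and should be checked against a live response.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Anonymous access is rate-limited to 100 requests/day.
    data = fetch_quality("LaMP-Benchmark", "LaMP")
    print(json.dumps(data, indent=2))
```

With an API key, the daily limit rises to 1,000 requests; how the key is passed (header vs. query parameter) is not documented here, so check the service's docs before adding it to the request.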
Higher-rated alternatives
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
uber-research/PPLM
Plug and Play Language Model implementation. Allows steering the topic and attributes of GPT-2 models.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.