uber-research/PPLM
Plug and Play Language Model implementation. Lets you steer the topic and attributes of GPT-2 generations.
This project helps researchers and developers create text with large language models, like GPT-2, that adheres to specific topics or sentiments without needing to retrain the model. You provide an initial text prompt and specify desired attributes (like 'military' topic or 'positive' sentiment). The output is a continuation of your text that aligns with these specified controls, ideal for exploring creative text generation or specific content creation. Anyone working with text generation and seeking more control over the output would find this valuable.
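At its core, PPLM steers generation by nudging the model's activations with gradients from a simple attribute model, such as a bag of words representing a topic. As a rough illustration only, and not the repository's actual code (which perturbs GPT-2's hidden states, not logits), here is a toy NumPy sketch of that gradient-ascent update applied directly to a logit vector:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attribute_mass(z, bow_ids):
    # Total probability assigned to the bag-of-words tokens.
    return softmax(z)[bow_ids].sum()

def pplm_step(z, bow_ids, stepsize=1.0):
    # One gradient-ascent step on log p(attribute) = log(sum_{i in B} p_i).
    # d/dz_j of that objective is p_j * 1[j in B] / mass - p_j.
    p = softmax(z)
    mass = p[bow_ids].sum()
    indicator = np.zeros_like(p)
    indicator[bow_ids] = 1.0
    grad = p * indicator / mass - p
    return z + stepsize * grad

rng = np.random.default_rng(0)
logits = rng.normal(size=10)
bow = np.array([2, 5])  # hypothetical "topic" token ids

before = attribute_mass(logits, bow)
for _ in range(20):
    logits = pplm_step(logits, bow)
after = attribute_mass(logits, bow)
print(f"attribute mass: {before:.3f} -> {after:.3f}")
```

Each step raises the logits of the topic tokens and lowers the rest, so the probability mass on the target vocabulary grows monotonically; PPLM applies the same idea inside the transformer, with a KL penalty to keep the output fluent.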
1,155 stars. No commits in the last 6 months.
Use this if you need to generate text that follows a particular theme, topic, or emotional tone using existing large language models without extensive fine-tuning.
Not ideal if you need to train a language model from scratch or are looking for fine-grained control over individual word choices rather than overall topic/sentiment.
Stars: 1,155
Forks: 204
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 20, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/uber-research/PPLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
Goekdeniz-Guelmez/mlx-lm-lora
Train Large Language Models on MLX.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning
jarobyte91/pytorch_beam_search
A lightweight implementation of Beam Search for sequence models in PyTorch.
SmallDoges/small-doge
Doge Family of Small Language Models