TamSiuhin/P2P
Source code for "Instant Personalized Large Language Model Adaptation via Hypernetwork".
This project helps AI developers and researchers quickly adapt large language models (LLMs) to individual user preferences or specific domains. It takes a base LLM and user interaction data to produce a personalized LLM. This is useful for anyone building applications that require an LLM to generate responses tailored to a particular user's style or knowledge needs, without extensive retraining.
Use this if you need to rapidly personalize large language models for various users or specialized tasks using hypernetworks.
Not ideal if you are looking for a no-code solution or a tool for general LLM fine-tuning without a focus on instant, personalized adaptation.
Stars
9
Forks
1
Language
Python
License
Apache-2.0
Category
Last pushed
Dec 22, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/TamSiuhin/P2P"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
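The curl command above can also be wrapped in a few lines of Python. This is a minimal sketch: the endpoint URL is taken verbatim from the example, but the "transformers" path segment is assumed to be a category slug, and the helper name is hypothetical.

```python
# Sketch of building the per-repository quality endpoint URL shown above.
# Assumptions: the path is /{category}/{owner}/{repo}, where "transformers"
# is a category slug; quality_url is a hypothetical helper, not part of the API.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a single repository."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

# Reproduces the curl example above:
print(quality_url("transformers", "TamSiuhin", "P2P"))
```

The returned URL can then be fetched with any HTTP client (e.g. `urllib.request.urlopen`); per the note above, no key is needed for up to 100 requests per day.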
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...