LaMP-Benchmark/LaMP-QA

Code for the paper: "LaMP-QA: A Benchmark for Personalized Long-form Question Answering"

Overall score: 21 / 100 (Experimental)

This project provides a benchmark dataset and evaluation tools for personalized question answering. Each task takes a user's question plus relevant personal context as input, and the benchmark evaluates how well different models generate long-form answers tailored to that specific user. It is useful for researchers and developers working on AI-driven customer support, content recommendation, or personalized learning platforms.

No commits in the last 6 months.

Use this if you are developing or evaluating AI systems that need to generate tailored, detailed answers based on individual user profiles or past interactions.

Not ideal if you are looking for a plug-and-play personalized QA system rather than a benchmark for research and development.

Personalized AI · Customer Support Automation · Content Recommendation · Adaptive Learning · Information Retrieval
No License · Stale 6m · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 7 / 25
Community: 7 / 25


Stars: 11
Forks: 1
Language: Python
License: none
Last pushed: Jun 03, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/LaMP-Benchmark/LaMP-QA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
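If you prefer to consume the endpoint programmatically rather than via curl, a minimal Python sketch is shown below. Note that the response fields used here (`score`, `label`, `breakdown`, and the per-category keys) are assumptions mirroring the values displayed on this page; the API's actual JSON schema is not documented here, so treat the parsing as illustrative.

```python
import json
from urllib.parse import quote

# Base endpoint from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a GitHub repository."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

# Hypothetical response payload, modeled on the figures shown on this page.
sample = json.loads("""
{
  "score": 21,
  "label": "Experimental",
  "breakdown": {"maintenance": 2, "adoption": 5, "maturity": 7, "community": 7}
}
""")

url = quality_url("LaMP-Benchmark", "LaMP-QA")
total = sum(sample["breakdown"].values())  # category scores sum to the overall score
print(url)
print(f"{sample['label']}: {sample['score']}/100 (breakdown sum: {total})")
```

In a real client you would fetch `url` (e.g. with `urllib.request.urlopen` or `requests.get`) and parse the body with `json.loads`; an API key, if you have one, raises the daily limit from 100 to 1,000 requests.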