LaMP-Benchmark/LaMP-QA
Code for the paper: "LaMP-QA: A Benchmark for Personalized Long-form Question Answering"
This project provides a benchmark dataset and evaluation tools for building personalized question-answering systems. Given a user's question and relevant personal context as input, it evaluates how well different models generate long-form answers tailored to that specific user. Researchers and developers working on AI-driven customer support, content recommendation, or personalized learning platforms would find this useful.
No commits in the last 6 months.
Use this if you are developing or evaluating AI systems that need to generate tailored, detailed answers based on individual user profiles or past interactions.
Not ideal if you are looking for a plug-and-play personalized QA system rather than a benchmark for research and development.
Stars: 11
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Jun 03, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/LaMP-Benchmark/LaMP-QA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
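The curl call above can also be scripted. A minimal sketch in Python, using only the standard library; the URL pattern (`.../quality/rag/{owner}/{repo}`) comes from the example above, while the response fields themselves are not documented here, so the code decodes the JSON without assuming a schema:

```python
# Sketch: fetch a repo's quality record from the pt-edge API.
# The response schema is an assumption (undocumented here), so we only
# decode JSON and hand the raw dict back to the caller.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (no key: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the endpoint for this repo; call fetch_quality() to hit the API.
    print(quality_url("LaMP-Benchmark", "LaMP-QA"))
```

With a free API key the limit rises to 1,000 requests/day; how the key is passed (header vs. query parameter) is not stated on this page, so check the API docs before adding it.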
Higher-rated alternatives
GrapeCity-AI/gc-qa-rag
A RAG (Retrieval-Augmented Generation) solution based on advanced pre-generated QA pairs.
UKPLab/PeerQA
Code and Data for PeerQA: A Scientific Question Answering Dataset from Peer Reviews, NAACL 2025
Arfazrll/RAG-DocsInsight-Engine
Retrieval-Augmented Generation (RAG) engine for intelligent document analysis, integrating LLM,...
faerber-lab/SQuAI
SQuAI: Scientific Question-Answering with Multi-Agent Retrieval-Augmented Generation (CIKM'25)
Vbj1808/Dokis
Lightweight RAG provenance middleware. Verifies every claim in an LLM response is grounded in a...