declare-lab/trust-align
Code and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse
This project helps developers and researchers working with Large Language Models (LLMs) in Retrieval-Augmented Generation (RAG) systems. It provides frameworks both to evaluate the trustworthiness of an LLM's responses and to train LLMs to be more trustworthy: given an LLM's generated answers, it outputs a 'Trust-Score' based on correctness, citation quality, and refusal groundedness; given training data, it produces a more trustworthy LLM.
No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher who needs to quantify and improve how truthful and reliable an LLM's responses are within a RAG system.
Not ideal if you are looking for an off-the-shelf solution for general LLM application development that doesn't involve deep evaluation or training of RAG components.
Stars
72
Forks
5
Language
Python
License
—
Category
—
Last pushed
Mar 03, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/declare-lab/trust-align"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
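The endpoint above can also be called from code. A minimal Python sketch using only the standard library; note that the JSON field names shown (`stars`, `forks`, `language`) are assumptions for illustration, not a documented schema:

```python
import json
from urllib.request import urlopen

# Endpoint from the listing above.
URL = "https://pt-edge.onrender.com/api/v1/quality/rag/declare-lab/trust-align"

def fetch_repo_quality(url: str = URL) -> dict:
    """Fetch the quality record for a repository and return it as a dict."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Offline illustration of parsing, using a hypothetical payload
# (the real response shape may differ):
sample = '{"stars": 72, "forks": 5, "language": "Python"}'
record = json.loads(sample)
print(record["stars"], record["language"])
```

Within the free tier, no API key or authentication header is needed for up to 100 requests per day.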
Higher-rated alternatives
NirDiamant/RAG_Techniques
This repository showcases various advanced techniques for Retrieval-Augmented Generation (RAG)...
VectorInstitute/fed-rag
A framework for fine-tuning retrieval-augmented generation (RAG) systems.
RUC-NLPIR/FlashRAG
⚡FlashRAG: A Python Toolkit for Efficient RAG Research (WWW2025 Resource)
ictnlp/FlexRAG
FlexRAG: A RAG Framework for Information Retrieval and Generation.
Andrew-Jang/RAGHub
A community-driven collection of RAG (Retrieval-Augmented Generation) frameworks, projects, and...