declare-lab/trust-align

Code and datasets for the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse".

Score: 25 / 100 (Experimental)

This project is for developers and researchers working with Large Language Models (LLMs) in Retrieval-Augmented Generation (RAG) systems. It provides a framework both to evaluate how trustworthy an LLM's responses are and to train LLMs to be more trustworthy: given an LLM's generated answers, it outputs a Trust-Score based on correctness, citation quality, and refusal groundedness; given alignment training data, it produces a more trustworthy model.
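
As a rough illustration of the evaluation side, the sketch below shows one way the three Trust-Score components could be combined into a single number. The function name, equal weighting, and [0, 1] score ranges are assumptions made for illustration only; the repository's own evaluation scripts define the actual metric.

# Hypothetical sketch: combining the three Trust-Score components.
# Equal weighting and [0, 1] ranges are illustrative assumptions, not
# the repository's actual scoring code.
def trust_score(correctness: float, citation_quality: float, refusal_groundedness: float) -> float:
    components = [correctness, citation_quality, refusal_groundedness]
    if not all(0.0 <= c <= 1.0 for c in components):
        raise ValueError("each component score must lie in [0, 1]")
    # Unweighted average of the three components.
    return sum(components) / len(components)

print(trust_score(correctness=0.82, citation_quality=0.61, refusal_groundedness=0.90))  # ~0.78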

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher who needs to quantify and improve the truthfulness and reliability of an LLM's responses within a RAG system.

Not ideal if you are looking for an off-the-shelf solution for general LLM application development that doesn't involve deep evaluation or training of RAG components.

Large Language Models · Retrieval-Augmented Generation · LLM Evaluation · Model Alignment · AI Trustworthiness
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 8 / 25
Community: 8 / 25

The four subscores sum to the overall 25 / 100.


Stars: 72
Forks: 5
Language: Python
License: None
Last pushed: Mar 03, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/declare-lab/trust-align"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
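
To consume the response programmatically, here is a minimal Python sketch using the requests library. The endpoint URL is the one from the curl command above; the JSON schema is not documented on this page, so the sketch simply prints the full payload rather than assuming particular field names.

# Minimal sketch of calling the same endpoint from Python.
# The response schema is not documented here, so inspect the printed
# payload before relying on any particular key.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/declare-lab/trust-align"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors (e.g. rate limiting)
print(resp.json())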