panruotong/CAG

Implementation of "Not All Contexts Are Equal: Teaching LLMs Credibility-aware Generation". Paper: https://arxiv.org/abs/2404.06809

Score: 30 / 100 (Emerging)

This project helps evaluate how well large language models (LLMs) answer questions when provided with source documents that have varying levels of credibility. You input a question and several source documents, each marked with a credibility rating (high, medium, or low), and it assesses the quality of the LLM's answer. This is primarily useful for researchers and developers working with LLMs in question-answering systems.
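
Concretely, the workflow could be driven by a prompt like the one sketched below in Python. This is a minimal illustration assuming a simple plain-text prompt template; the function build_prompt, the field names, and the exact wording are hypothetical and may differ from the repository's actual format.

# Hypothetical sketch: the repository's real prompt template may differ.
def build_prompt(question: str, documents: list[dict]) -> str:
    """Prefix each source document with its credibility rating
    (high / medium / low), then append the question."""
    lines = []
    for i, doc in enumerate(documents, start=1):
        lines.append(f"[Document {i} | credibility: {doc['credibility']}]")
        lines.append(doc["text"])
    lines.append(f"Question: {question}")
    lines.append("Answer using the documents above, preferring "
                 "high-credibility sources when they conflict.")
    return "\n".join(lines)

docs = [
    {"text": "The Eiffel Tower is 330 m tall.", "credibility": "high"},
    {"text": "The Eiffel Tower is 500 m tall.", "credibility": "low"},
]
print(build_prompt("How tall is the Eiffel Tower?", docs))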

No commits in the last 6 months.

Use this if you are a researcher or developer who needs to evaluate the performance of LLMs in generating accurate and reliable answers from potentially mixed-credibility sources.

Not ideal if you are looking for a ready-to-use application that answers your own questions; this is a research tool for assessing and developing LLMs, not an end-user product.

Tags: LLM evaluation, credibility assessment, question answering, natural language processing, AI research
Status: Stale (6 months), no package published, no known dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 8 / 25

Stars: 22
Forks: 2
Language: Python
License: MIT
Last pushed: Oct 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/panruotong/CAG"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
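
For programmatic use beyond curl, the same endpoint can be fetched from Python with the standard library alone. The sketch below prints the raw JSON response instead of assuming specific field names, since the response schema is not documented on this page.

# Hypothetical sketch: prints whatever JSON the endpoint returns.
import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/rag/panruotong/CAG"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))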