yhao-wang/LLM-Knowledge-Boundary

Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation"

Score: 30 / 100 (Emerging)

This project helps researchers understand how Large Language Models (LLMs) answer questions, especially when their internal knowledge might be limited or incorrect. You input questions and receive insights into whether the LLM is confident, unsure, or outright 'hallucinating' an answer. This is for AI researchers and practitioners who want to probe and improve the reliability of LLMs.

No commits in the last 6 months.

Use this if you are an AI researcher investigating the factual accuracy and reliability of large language models for question-answering tasks.

Not ideal if you are looking for a ready-to-use LLM application for end-users, or if you need to fine-tune an LLM for specific business data.

Tags: AI-research, LLM-evaluation, factual-consistency, question-answering, knowledge-boundaries
Flags: No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 16 / 25


Stars: 21
Forks: 7
Language: Python
License: None
Last pushed: Jul 31, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yhao-wang/LLM-Knowledge-Boundary"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
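For scripted use, the same endpoint can be called from Python. This is a minimal sketch: the URL pattern comes from the curl command above, but the authentication header name ("X-API-Key") is an assumption, not something stated in the listing; check the pt-edge API docs for the real header.

```python
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_request(owner, repo, api_key=None):
    """Build a GET request for a repo's quality data (sketch only)."""
    url = f"{BASE}/llm-tools/{owner}/{repo}"
    # "X-API-Key" is a hypothetical header name for the optional key.
    headers = {"X-API-Key": api_key} if api_key else {}
    return urllib.request.Request(url, headers=headers)

req = build_request("yhao-wang", "LLM-Knowledge-Boundary")
print(req.full_url)
# To actually fetch: urllib.request.urlopen(req).read()
```

The request is built separately from the fetch so the URL can be inspected (or rate-limited) before any network call is made.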