SuperBruceJia/Awesome-LLM-Self-Consistency

Awesome LLM Self-Consistency: a curated list of research on self-consistency in large language models

Quality score: 39 / 100 (Emerging)

To get reliable results from large language models, you need to understand how consistently they answer questions or complete tasks. This resource provides a curated collection of research papers and benchmarks focused on self-consistency in LLMs: the technique of sampling multiple reasoning paths for the same prompt and aggregating their answers, typically by majority vote. It helps researchers and AI practitioners evaluate and improve the dependability of their language models.
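To make the idea concrete, here is a minimal sketch of self-consistency decoding. It assumes a generic generate(prompt) callable that queries an LLM with temperature > 0 and returns a final answer string; that callable and the normalization step are illustrative placeholders, not part of this repository.

from collections import Counter

def self_consistency(generate, prompt, n_samples=10):
    """Sample several answers to one prompt and return the majority vote.

    `generate` is any callable that queries an LLM with nonzero
    temperature and returns the model's final answer as a string
    (hypothetical here; plug in your own client).
    """
    answers = [generate(prompt) for _ in range(n_samples)]
    # Normalize lightly so trivially different strings count as one answer.
    tally = Counter(a.strip().lower() for a in answers)
    answer, votes = tally.most_common(1)[0]
    agreement = votes / n_samples  # rough consistency signal, in [0, 1]
    return answer, agreement

In practice you would first parse the final answer out of each chain-of-thought before voting; the papers collected in this list study variations on exactly that aggregation step.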

120 stars. No commits in the last 6 months.

Use this if you are a researcher or practitioner working with large language models and need to evaluate or improve their reliability in reasoning, factual accuracy, or logical coherence.

Not ideal if you are looking for an off-the-shelf tool or software to directly improve your LLM's consistency without delving into academic research.

Tags: AI research, Natural Language Processing, LLM evaluation, AI model reliability, Machine Learning engineering
Flags: Stale (6 months), no package published, no dependents

Score breakdown:
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 11 / 25


Stars: 120
Forks: 10
Language: not specified
License: MIT
Last pushed: Jul 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/SuperBruceJia/Awesome-LLM-Self-Consistency"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
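For scripted access, the curl call above can be reproduced in Python with the standard library. The endpoint URL is taken from this page; the response schema is not documented here, so the snippet simply prints the returned JSON rather than assuming field names.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "SuperBruceJia/Awesome-LLM-Self-Consistency")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# The API's field names are not documented on this page; inspect the
# payload to see what it actually returns (scores, stars, etc.).
print(json.dumps(data, indent=2))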