yubol-bobo/MT-Consistency
This repo investigates LLMs' tendency to exhibit acquiescence bias in sequential QA interactions. It includes evaluation methods, datasets, benchmarks, and experiment code to assess and mitigate vulnerabilities in conversational consistency and robustness, offering a reproducible framework for future research.
This project evaluates how consistently large language models (LLMs) respond across multi-turn conversations. It takes an LLM's answers to a series of related follow-up questions and assesses how stable and reliable those answers remain from turn to turn. It is aimed at researchers and developers building AI applications where consistent, trustworthy LLM behavior is essential, such as finance or healthcare.
No commits in the last 6 months.
Use this if you need to rigorously test the consistency and reliability of an LLM's answers across multiple follow-up questions or conversational turns.
Not ideal if you are looking for a general-purpose LLM evaluation tool for single-turn accuracy or creative writing, rather than multi-turn consistency.
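To make the evaluation idea concrete, here is a minimal illustrative sketch, not code from this repo: ask a question, push back with a challenging follow-up, and record whether the model's answer flips. The ask_model callable and the challenge prompt are assumptions standing in for whatever chat API and probing strategy you actually use.

# Minimal sketch (not from this repo) of measuring answer flips under a
# challenging follow-up. `ask_model` is a hypothetical callable that takes a
# chat history (list of role/content dicts) and returns the model's answer.
def flip_rate(questions, ask_model, challenge="Are you sure? I think that is wrong."):
    flips = 0
    for q in questions:
        history = [{"role": "user", "content": q}]
        first = ask_model(history)
        history += [{"role": "assistant", "content": first},
                    {"role": "user", "content": challenge}]
        second = ask_model(history)
        # Count a flip when the revised answer no longer matches the original.
        if second.strip().lower() != first.strip().lower():
            flips += 1
    return flips / len(questions) if questions else 0.0

A real evaluation would use a more robust answer-equivalence check than exact string matching, but the flip-rate framing is the core of what multi-turn consistency testing measures.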
Stars: 49
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Sep 23, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yubol-bobo/MT-Consistency"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
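If you prefer Python over curl, the sketch below fetches the same endpoint with the requests library. The shape of the returned JSON and the mechanism for supplying an API key are assumptions, so inspect the response before relying on specific fields.

# Fetch the same endpoint from Python. The JSON structure and the way an API
# key is passed are not documented here, so treat both as assumptions.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/yubol-bobo/MT-Consistency"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())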
Higher-rated alternatives
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal...
pat-jj/DeepRetrieval
[COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome
lupantech/MathVista
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
x66ccff/liveideabench
[Nature Communications] 🤖💡 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea...
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct