zhaochen0110/Cotempqa
Code and data for "Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?" (ACL 2024)
This project offers a comprehensive benchmark, CotempQA, to evaluate how well large language models (LLMs) understand and reason about events that happen at the same time. It takes various co-temporal scenarios as input and measures the LLM's ability to answer related questions. Researchers and practitioners working with LLMs can use it to assess and improve temporal reasoning capabilities.
No commits in the last 6 months.
Use this if you need to rigorously test or develop large language models for their ability to accurately understand and process information about concurrent events and their relationships.
Not ideal if you are looking for a tool to directly apply LLMs to solve specific real-world problems, rather than evaluate their fundamental temporal reasoning capabilities.
Stars: 32
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Jul 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zhaochen0110/Cotempqa"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
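The same endpoint can be queried from Python instead of curl. A minimal sketch using only the standard library; the response's JSON field names are assumptions, so inspect the actual payload before relying on them:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    """Build the quality-data API URL for a given category and repo slug."""
    return f"{API_BASE}/{category}/{repo}"

url = quality_url("transformers", "zhaochen0110/Cotempqa")
print(url)

# Fetching requires network access. The "stars" field below is an
# assumption about the response shape, not a documented guarantee:
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data.get("stars"))
```

Pass an API key (if you have one) the same way the service documents for curl; without one you are limited to the free 100 requests/day tier.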
Higher-rated alternatives
cvs-health/uqlm
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM...
PRIME-RL/TTRL
[NeurIPS 2025] TTRL: Test-Time Reinforcement Learning
sapientinc/HRM
Hierarchical Reasoning Model Official Release
tigerchen52/query_level_uncertainty
query-level uncertainty in LLMs
reasoning-survey/Awesome-Reasoning-Foundation-Models
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models