AndyChiangSH/BADGE
Code for our paper, "BADGE: BADminton report Generation and Evaluation with LLM," presented at the IJCAI 2024 Workshop IT4PSS.
This tool automates the creation and assessment of detailed badminton match reports, a task that is otherwise time-consuming. It takes structured badminton game data, such as CSV files containing player names, scores, and shot types, and uses a large language model to generate a comprehensive match report. It also evaluates the quality of the generated reports. Sports journalists, content creators, and event organizers in the badminton world could use it to produce game summaries efficiently.
No commits in the last 6 months.
Use this if you need to quickly generate and evaluate high-quality, detailed badminton match reports from game data.
Not ideal if you prefer to write all your reports manually or if you're not working with structured game data.
Stars: 9
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Jul 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/AndyChiangSH/BADGE"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
PacificAI/langtest
Deliver safe & effective language models
microsoft/OpenRCA
[ICLR'25] OpenRCA: Can Large Language Models Locate the Root Cause of Software Failures?
Babelscape/ALERT
Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language...
TrustGen/TrustEval-toolkit
[ICLR'26, NAACL'25 Demo] Toolkit & Benchmark for evaluating the trustworthiness of generative...
ChenWu98/agent-attack
[ICLR 2025] Dissecting adversarial robustness of multimodal language model agents