lmms-eval and evaluation-guidebook
lmms-eval (A), a comprehensive multimodal evaluation toolkit, and evaluation-guidebook (B), a guide to LLM evaluation, are complementary: (A) provides a practical implementation for running a broad range of multimodal tasks, while (B) offers theoretical background and hard-won insights on evaluating large language models that can inform how you use (A) and interpret its results on text-based tasks.
About lmms-eval
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
This toolkit helps researchers and AI practitioners reliably compare how well different multimodal AI models understand and respond to real-world inputs. You supply a model and a selection of benchmark tasks spanning text, images, video, and audio, and it produces consistent, trustworthy performance metrics. Anyone who builds, deploys, or studies large multimodal models will find it useful for characterizing model capabilities.
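As a concrete sketch of that workflow: lmms-eval follows the lm-evaluation-harness CLI style and is invoked as a Python module. The command below is illustrative only; the model backend, checkpoint, and task names are plausible examples rather than facts from this page, and flag spellings may vary across versions (check `python -m lmms_eval --help`).

```python
# Minimal sketch of driving the lmms-eval CLI from Python.
# Model backend, checkpoint, and task names below are assumed examples;
# verify available options with `python -m lmms_eval --help`.
import subprocess

cmd = [
    "python", "-m", "lmms_eval",
    "--model", "llava",                                       # model backend (assumed)
    "--model_args", "pretrained=liuhaotian/llava-v1.5-7b",    # checkpoint (assumed)
    "--tasks", "mme,mmbench_en",                              # comma-separated tasks (assumed)
    "--batch_size", "1",
    "--log_samples",                                          # keep per-sample outputs for inspection
    "--output_path", "./logs/",
]
subprocess.run(cmd, check=True)  # raises if the evaluation run fails
```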
About evaluation-guidebook
huggingface/evaluation-guidebook
Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard and designing lighteval!
This guide helps you verify that a large language model (LLM) performs as expected for your specific use case. It takes your LLM and task requirements as input and provides frameworks, methodologies, and practical tips for assessing model quality. This is ideal for anyone working with LLMs, from researchers to hobbyists, who needs to ensure their models are reliable and effective.
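As a toy illustration of the kind of automatic scoring the guidebook discusses, the sketch below computes normalized exact-match accuracy over model outputs. The function name and normalization choices are our own, not taken from the guidebook.

```python
# Illustrative only: a simple exact-match accuracy metric of the sort
# covered when the guidebook compares automatic evaluation approaches.
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions matching their reference after light normalization."""
    assert len(predictions) == len(references), "one reference per prediction"
    matches = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return matches / len(predictions)

print(exact_match_accuracy(["Paris", " rome "], ["paris", "Rome"]))  # 1.0
```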