open-compass/LawBench
Benchmarking Legal Knowledge of Large Language Models
This tool helps legal professionals, researchers, and anyone evaluating AI understand how well large language models (LLMs) perform on legal tasks specific to the Chinese legal system. It takes an LLM's responses to legal queries and scenarios and outputs a detailed assessment of the model's legal knowledge, comprehension, and application abilities. Legal domain experts can use it to gauge an AI system's readiness for real-world legal applications.
406 stars. No commits in the last 6 months.
Use this if you need to rigorously evaluate a large language model's capabilities across a wide range of legal tasks, from statute recall to complex case analysis, within the context of Chinese law.
Not ideal if your focus is on legal systems outside of China, such as American law, or if you only need a basic understanding of an LLM's general knowledge.
Stars: 406
Forks: 70
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 13, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/open-compass/LawBench"
Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
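The same endpoint can also be queried from a script. Below is a minimal Python sketch using the documented URL; it assumes the response is JSON but makes no assumptions about its schema, so it simply prints the payload:

import requests

# Documented endpoint for this repository's quality data (no key needed
# at the free 100 requests/day tier).
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/open-compass/LawBench"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# The response is assumed to be JSON; the schema is not documented here,
# so print the whole payload rather than accessing specific fields.
print(resp.json())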
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems