zjunlp/ChineseHarm-bench
ChineseHarm-Bench: A Chinese Harmful Content Detection Benchmark
This project provides a benchmark and models for detecting harmful content in Chinese text across six categories, including gambling, pornography, and fraud. It takes Chinese text as input and outputs a classification indicating whether the content is harmful and, if so, which category it falls under. Content moderators, trust & safety teams, and platform administrators dealing with Chinese user-generated content would find this tool valuable.
Use this if you need to automatically identify and categorize harmful or policy-violating content within large volumes of Chinese text.
Not ideal if your primary need is for content moderation in languages other than Chinese, or if you require real-time human-in-the-loop content review workflows.
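To illustrate the input/output contract described above, here is a minimal sketch using the Hugging Face transformers text-classification pipeline. The model identifier is a placeholder, not the project's published checkpoint; consult the repo for the actual released models and their expected prompts or label sets.

```python
# Minimal sketch of the classify-Chinese-text workflow described above.
# NOTE: "your-org/chinese-harm-classifier" is a hypothetical model id;
# see the ChineseHarm-bench repo for the actual released checkpoints.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/chinese-harm-classifier",  # placeholder, not a real checkpoint
)

texts = ["这里是一段待审核的中文文本"]  # "Here is a Chinese text awaiting moderation"
for result in classifier(texts):
    # Assumed output shape: a harm-category label (e.g. gambling, fraud)
    # or a non-harmful label, plus a confidence score.
    print(result["label"], result["score"])
```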
Stars: 49
Forks: 2
Language: Python
License: MIT
Last pushed: Sep 02, 2025
Commits (30d): 0
No commits in the last 6 months.
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/ChineseHarm-bench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
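The same endpoint can also be called programmatically. Below is a minimal Python sketch using requests; the response is assumed to be JSON, and its exact schema is not documented here.

```python
# Minimal sketch: fetch this listing's quality data from the API shown above.
# The response is assumed to be JSON; the exact schema is not documented here.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zjunlp/ChineseHarm-bench"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate-limit or server errors
data = resp.json()
print(data)
```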
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems