thu-coai/Safety-Prompts

Chinese safety prompts for evaluating and improving the safety of LLMs.

Score: 43 / 100 (Emerging)

This project provides a large collection of Chinese safety prompts paired with model responses. The dataset of challenging, unsafe prompts lets developers and researchers train and fine-tune large language models to respond more safely and stay aligned with human values, particularly on sensitive or adversarial Chinese-language inputs.

1,135 stars. No commits in the last 6 months.

Use this if you are an LLM developer or researcher who wants to improve a Chinese LLM's safety and alignment with human values during training or fine-tuning.

Not ideal if your main goal is evaluating an LLM's safety; for that, the project recommends SafetyBench or ShieldLM, which are dedicated evaluation tools.
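To illustrate how a prompt/response dataset like this might be consumed for safety fine-tuning, here is a minimal Python sketch. The file name and field names (`prompt`, `response`) are assumptions for the example, not the repository's documented schema; check the repo for the actual file layout.

```python
import json
from pathlib import Path

# Hypothetical file name; the repository ships its data as JSON files of
# prompt/response pairs, but verify the exact names and schema in the repo.
DATA_FILE = Path("typical_safety_scenarios.json")

def load_finetune_records(path: Path) -> list[dict]:
    """Flatten prompt/response pairs into simple instruction-tuning records."""
    with path.open(encoding="utf-8") as f:
        data = json.load(f)

    records = []
    # Assumed layout: {scenario_name: [{"prompt": ..., "response": ...}, ...]}
    for scenario, pairs in data.items():
        for pair in pairs:
            records.append({
                "instruction": pair["prompt"],
                "output": pair["response"],
                "scenario": scenario,
            })
    return records

if __name__ == "__main__":
    records = load_finetune_records(DATA_FILE)
    print(f"Loaded {len(records)} prompt/response pairs")
```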

Tags: AI Safety, Large Language Models, Ethical AI, Chinese NLP, Model Fine-tuning
Status: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 1,135
Forks: 88
Language: (none listed)
License: Apache-2.0
Category: guardrails
Last pushed: Feb 27, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/thu-coai/Safety-Prompts"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
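The same data can also be fetched programmatically. Below is a minimal Python sketch assuming the endpoint returns JSON; the field names printed at the end (`score`, `stars`) are illustrative guesses and may differ from the actual response schema.

```python
import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "prompt-engineering/thu-coai/Safety-Prompts"
)

# Anonymous access is rate-limited to 100 requests/day; if you have a key,
# add it to the request (the exact header name depends on the service).
resp = requests.get(URL, timeout=10)
resp.raise_for_status()

data = resp.json()
# Field names below are illustrative; inspect `data` for the real schema.
print(data.get("score"), data.get("stars"))
```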