zjunlp/NLPCC2024_RegulatingLLM

[NLPCC 2024] Shared Task 10: Regulating Large Language Models

Score: 35 / 100 (Emerging)

This project helps AI developers and researchers tackle two critical issues with large language models (LLMs): detecting 'hallucinations' in multimodal outputs and preventing the generation of toxic content. It provides datasets and evaluation methods for building and testing solutions that take multimodal prompts (text, images) or text-based prompts as input, and produce either flags for hallucinatory content or detoxified model responses.
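To make the expected input/output shape concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical: the record fields, the function names (detect_hallucination, detoxify), and the heuristics are illustrations only, not the shared task's actual data format or baseline.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Example:
    prompt: str                 # text prompt given to the model
    image_path: Optional[str]   # image input; None for text-only cases
    model_output: str           # LLM response under evaluation

TOXIC_WORDS = {"idiot", "stupid"}  # stand-in wordlist, not from the task data

def detect_hallucination(ex: Example) -> bool:
    """Toy heuristic: flag outputs that share no words with the prompt.
    A real submission would use a trained multimodal detector."""
    prompt_words = set(ex.prompt.lower().split())
    output_words = set(ex.model_output.lower().split())
    return len(prompt_words & output_words) == 0

def detoxify(ex: Example) -> str:
    """Toy wordlist filter: mask known toxic tokens.
    A real submission would rewrite the response, not just mask it."""
    return " ".join(
        "***" if word.lower() in TOXIC_WORDS else word
        for word in ex.model_output.split()
    )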

No commits in the last 6 months.

Use this if you are a machine learning researcher or developer working on making LLMs more reliable and safe, specifically focusing on identifying and mitigating false or harmful outputs.

Not ideal if you are a practitioner looking for a ready-to-use tool to filter LLM outputs without developing custom detection or detoxification algorithms.

AI Safety · Large Language Models · Content Moderation · Multimodal AI · Responsible AI
Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 14
Forks: 3
Language:
License: MIT
Last pushed: Jun 12, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/NLPCC2024_RegulatingLLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
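The same request from Python, using only the standard library. This is a minimal sketch: the response is assumed to be JSON, and the "score" field name is a guess, since the response schema isn't documented here.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/zjunlp/NLPCC2024_RegulatingLLM")

with urllib.request.urlopen(URL) as resp:  # no API key needed at the free tier
    data = json.load(resp)                 # assumes the endpoint returns JSON

# "score" is a guessed field name, not a documented schema.
print(data.get("score"))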