zjunlp/NLPCC2024_RegulatingLLM
[NLPCC 2024] Shared Task 10: Regulating Large Language Models
This project helps AI developers and researchers tackle two critical issues with large language models (LLMs): detecting hallucinations in multimodal outputs and preventing the generation of toxic content. It provides datasets and evaluation methods for building and testing solutions that take multimodal prompts (text and images) or text-only prompts as input and produce either flags for hallucinatory content or detoxified model responses.
No commits in the last 6 months.
Use this if you are a machine learning researcher or developer working on making LLMs more reliable and safe, specifically focusing on identifying and mitigating false or harmful outputs.
Not ideal if you are a practitioner looking for a ready-to-use tool to filter LLM outputs without developing custom detection or detoxification algorithms.
Stars: 14
Forks: 3
Language: —
License: MIT
Category:
Last pushed: Jun 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/NLPCC2024_RegulatingLLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
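The same endpoint can be called from code. A minimal sketch using Python's standard library, assuming the URL pattern shown in the curl command above (`/api/v1/quality/transformers/<owner>/<repo>`); the shape of the JSON response is not documented here, so `fetch_quality` simply decodes whatever the API returns:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (needs network access;
    subject to the 100 requests/day keyless limit)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Reproduces the curl example for this repository.
    print(quality_url("zjunlp", "NLPCC2024_RegulatingLLM"))
```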
Higher-rated alternatives
TrustedLLM/LLMDet
LLMDet is a text detection tool that can identify which generated sources the text came from...
honghanhh/semeval8
L3i++ at SemEval2024-task8: Multidomain, Multimodel and Multilingual Machine-Generated Text Detection
MSPoulaei/code-smell-detection-with-LLM
The implementation of a research project focused on detecting code smells using Large Language...
awsaf49/detect-fake-text
LLM - Detect AI Generated Text || Identify which essay was written by a large language model
kodlan/pii-llm
PII detection/redaction using LLMs. A research project exploring the effectiveness of Large...