ruisizhang123/REMARK-LLM
[USENIX Security'24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models
This framework helps you verify the origin of text generated by large language models (LLMs). It embeds an invisible 'watermark' into LLM-generated text, which can later be detected to prove the text came from your specific model. Content creators, researchers, and platform owners who generate text with LLMs would use this to protect and attribute their AI-generated content.
No commits in the last 6 months.
Use this if you need to reliably identify and verify that specific content was generated by your large language model, even if it's been edited or paraphrased.
Not ideal if you are looking to watermark general digital content like images or audio, or if you need to detect plagiarism from human authors.
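To make the insert-and-verify flow described above concrete, here is a minimal Python sketch of a signature-based round trip. It is not REMARK-LLM's actual API: encode_watermark and extract_watermark are hypothetical placeholder names, and the 0.9 detection threshold is illustrative rather than a documented default.

# Conceptual round trip for message-based text watermarking, assuming a
# hypothetical encode/extract interface; only bit_accuracy runs as-is.

SIGNATURE = [1, 0, 1, 1, 0, 0, 1, 0]  # n-bit message tied to your model

def bit_accuracy(extracted, expected):
    """Fraction of signature bits recovered from a candidate text."""
    matches = sum(a == b for a, b in zip(extracted, expected))
    return matches / len(expected)

# Insertion: rewrite the LLM output so it carries SIGNATURE yet stays fluent.
#   watermarked = encode_watermark(llm_output, SIGNATURE)   # hypothetical
# Detection: recover the bits from any candidate text, even an edited copy.
#   recovered = extract_watermark(candidate_text)           # hypothetical
#   if bit_accuracy(recovered, SIGNATURE) >= 0.9:           # illustrative
#       print("likely generated by your model")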
Stars
27
Forks
1
Language
Python
License
—
Category
—
Last pushed
Oct 23, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ruisizhang123/REMARK-LLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
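For scripted access, here is a minimal Python equivalent of the curl command above. The only assumption beyond the listing is that the endpoint returns JSON; the response fields are not documented here, so the sketch just prints the raw payload.

import requests

# Same endpoint as the curl example; the free tier needs no API key.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ruisizhang123/REMARK-LLM"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors, e.g. hitting the daily limit
data = resp.json()       # schema not documented here; inspect data.keys()
print(data)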
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
Attack to induce hallucinations in LLMs
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
NishilBalar/Awesome-LVLM-Hallucination
up-to-date curated list of state-of-the-art Large vision language models hallucinations...
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...