ruisizhang123/REMARK-LLM

[USENIX Security'24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models

Score: 19/100 (Experimental)

This framework helps you verify the origin of text generated by large language models (LLMs). It takes text from an LLM and applies an invisible 'watermark' to it during generation, which can then be detected later to prove the text came from your specific model. Content creators, researchers, and platform owners who generate text with LLMs would use this to protect and attribute their AI-generated content.
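To make the "invisible watermark, detected later" idea concrete: one common family of LLM watermarks biases generation toward a keyed "green" subset of the vocabulary, and detection runs a statistical test on how often green tokens appear. This toy detector illustrates that general idea only; REMARK-LLM itself uses a different, learned message-embedding scheme, and the function, key, and threshold here are illustrative assumptions, not its API.

```python
import hashlib
import math

def green_fraction_z(tokens, key="secret", gamma=0.5):
    """Toy green-list watermark detector (the Kirchenbauer-style statistical
    test, NOT REMARK-LLM's learned scheme). Each token is hashed with a key
    to decide whether it falls in the 'green' list, then the observed green
    count is z-tested against the Binomial(n, gamma) baseline expected for
    unwatermarked text."""
    green = 0
    for tok in tokens:
        digest = hashlib.sha256((key + tok).encode()).digest()
        if digest[0] / 255.0 < gamma:  # token lands in the keyed green list
            green += 1
    n = len(tokens)
    # z-score: how far the green count deviates from the gamma*n expectation
    return (green - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

A large positive z-score on a long passage suggests the text was generated with the matching key; unwatermarked text hovers near zero.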

No commits in the last 6 months.

Use this if you need to reliably identify and verify that specific content was generated by your large language model, even if it's been edited or paraphrased.

Not ideal if you are looking to watermark general digital content like images or audio, or if you need to detect plagiarism from human authors.

Tags: AI content attribution, LLM output verification, digital rights management, AI model security, content provenance
Badges: No License, Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 4 / 25


Stars: 27
Forks: 1
Language: Python
License: None
Last pushed: Oct 23, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ruisizhang123/REMARK-LLM"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
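The same endpoint can be called from a script. This sketch builds the URL from the path layout shown in the curl example; the `score` field name in the response parser is an assumption about the JSON shape, so check the actual payload before relying on it.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the API URL for a repo (path layout taken from the curl example)."""
    return f"{BASE}/{category}/{owner}/{repo}"

def overall_score(payload):
    """Extract the overall score from a JSON response body. The 'score'
    field name is an assumption, not documented here -- verify against
    the real endpoint's output."""
    data = json.loads(payload)
    return data.get("score")

url = quality_url("llm-tools", "ruisizhang123", "REMARK-LLM")
# body = urlopen(url).read()      # uncomment to hit the live endpoint
# print(overall_score(body))
```

Keeping the network call commented out lets the URL construction and parsing be tested offline.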