plll4zzx/Awesome-LLM-Watermark
A curated collection of papers and resources on Large Language Model (LLM) watermarking
This collection helps researchers and practitioners explore various techniques for embedding hidden signals, or "watermarks," into text generated by large language models (LLMs). It compiles a wide range of academic papers, categorizing them by the specific method of watermarking (e.g., token-level, sentence-level) and related topics like watermark attacks and robustness. Anyone working on developing or deploying LLMs and concerned with content provenance, intellectual property, or detecting AI-generated text will find this resource valuable.
Use this if you need to understand the current landscape of LLM watermarking techniques, including how they work, their strengths, and their vulnerabilities.
Not ideal if you are looking for a ready-to-use software tool or a step-by-step guide to implement a specific watermark, as this is a research compilation.
Stars: 58
Forks: 2
Language: —
License: —
Category: —
Last pushed: Feb 05, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/plll4zzx/Awesome-LLM-Watermark"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
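For repeated lookups, the curl call above can be wrapped in a small client. A minimal sketch, assuming the endpoint returns JSON; the payload's field names are not documented here, so the example only fetches and prints the raw response:

```python
# Hedged sketch of a client for the pt-edge quality API.
# Only the URL pattern is taken from the curl example above;
# the JSON structure of the response is an assumption.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    """Build the per-repo endpoint URL, e.g. .../plll4zzx/Awesome-LLM-Watermark."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (no API key: 100 requests/day)."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("plll4zzx", "Awesome-LLM-Watermark")
    print(json.dumps(data, indent=2))
```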
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
Attack to induce LLMs within hallucinations
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
NishilBalar/Awesome-LVLM-Hallucination
up-to-date curated list of state-of-the-art Large vision language models hallucinations...
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...