ChanLiang/WatME
[ACL 2024] WatME: Towards Lossless Watermarking Through Lexical Redundancy
This tool helps researchers and developers embed imperceptible watermarks into text generated by Large Language Models (LLMs) without compromising the text's naturalness or expressiveness. It takes an LLM and a set of synonym clusters, and outputs text carrying a watermark that is invisible to readers but detectable by anyone holding the watermark key. This is ideal for those who need to verify the origin of LLM-generated content while maintaining high linguistic quality.
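To illustrate the core idea of watermarking through lexical redundancy, here is a minimal toy sketch: words within a synonym cluster are split into a keyed "green" and "red" list, generation is steered toward green synonyms, and detection measures the green fraction. The clusters, key, and post-hoc substitution here are all illustrative assumptions; WatME itself applies the partition at decoding time inside the LLM, not as a rewrite pass.

```python
import hashlib

# Toy synonym clusters (illustrative only, not WatME's real lexicon).
CLUSTERS = [
    {"big", "large", "huge"},
    {"fast", "quick", "rapid"},
    {"happy", "glad", "joyful"},
]

def is_green(word: str, key: str = "secret") -> bool:
    """Keyed pseudo-random green/red assignment for a word."""
    digest = hashlib.sha256((key + word).encode()).digest()
    return digest[0] % 2 == 0

def cluster_of(word: str):
    """Return the synonym cluster containing `word`, if any."""
    for cluster in CLUSTERS:
        if word in cluster:
            return cluster
    return None

def watermark(tokens: list[str], key: str = "secret") -> list[str]:
    """Replace red-listed words with a green synonym when one exists.

    Meaning is preserved because substitutes come from the same
    synonym cluster -- this is the 'lexical redundancy' being exploited.
    """
    out = []
    for tok in tokens:
        cluster = cluster_of(tok)
        if cluster and not is_green(tok, key):
            greens = sorted(w for w in cluster if is_green(w, key))
            if greens:
                tok = greens[0]
        out.append(tok)
    return out

def green_fraction(tokens: list[str], key: str = "secret") -> float:
    """Detection: fraction of in-cluster words that are green-listed."""
    in_cluster = [t for t in tokens if cluster_of(t)]
    if not in_cluster:
        return 0.0
    return sum(is_green(t, key) for t in in_cluster) / len(in_cluster)
```

A text watermarked this way reads naturally, yet its green fraction is statistically elevated for the key holder, which is what makes origin verification possible without visible marks.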
No commits in the last 6 months.
Use this if you need to embed hidden identifiers into text generated by Large Language Models while preserving the full expressive power and natural language fluency of the output.
Not ideal if you are looking for visible watermarks or if your primary concern is robustly detecting malicious alterations rather than simply verifying generation origin.
Stars
8
Forks
—
Language
Python
License
—
Category
Last pushed
Jun 25, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ChanLiang/WatME"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
THU-BPM/MarkLLM
MarkLLM: An Open-Source Toolkit for LLM Watermarking (EMNLP 2024 System Demonstration)
git-disl/Vaccine
This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large...
zjunlp/Deco
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
HillZhang1999/ICD
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced...
voidism/DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality...