lilakk/PostMark
Official repository for "PostMark: A Robust Blackbox Watermark for Large Language Models"
This tool helps content creators, educators, and anyone generating text with large language models (LLMs) embed an invisible mark into their AI-generated content. You provide a piece of text produced by an LLM, and it returns a slightly modified version carrying an imperceptible watermark, which lets you later verify that the text originated from your system or process.
No commits in the last 6 months.
Use this if you need to verify the origin of text generated by blackbox LLMs like GPT-4, especially when distributing content whose source you want to attribute or track.
Not ideal if you need a watermarking solution that deeply integrates with the LLM's internal mechanics, as this method works post-generation without model access.
Stars: 27
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Aug 30, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/lilakk/PostMark"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
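If you prefer to query the endpoint from Python rather than curl, a minimal sketch using only the standard library is below. The URL path mirrors the curl example above; the shape of the JSON payload is not documented here, so the `fetch_stats` helper simply returns the parsed response for you to inspect (the function names are illustrative, not part of the API).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_url(owner: str, repo: str) -> str:
    # Endpoint path follows the curl example: /<owner>/<repo>.
    return f"{BASE}/{owner}/{repo}"


def fetch_stats(owner: str, repo: str) -> dict:
    # Anonymous access is rate-limited to 100 requests/day,
    # so cache responses if you poll many repositories.
    with urllib.request.urlopen(build_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    stats = fetch_stats("lilakk", "PostMark")
    print(json.dumps(stats, indent=2))
```

Keeping the rate limit in mind, a small local cache (or the free API key) is worthwhile if you query more than a handful of repositories per day.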
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
An attack that induces hallucinations in LLMs
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
NishilBalar/Awesome-LVLM-Hallucination
up-to-date curated list of state-of-the-art Large vision language models hallucinations...
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...