hongcheki/sweet-watermark
Official repository of the paper: Who Wrote this Code? Watermarking for Code Generation (ACL 2024)
This project helps developers identify whether a piece of code was written by a human or generated by an AI model. You input code you suspect might be machine-generated, and the tool estimates the likelihood that it carries a 'watermark' embedded by a code generation model. This is useful when you need to verify the origin or authorship of code, especially in contexts where AI-assisted programming is common.
No commits in the last 6 months.
Use this if you need to determine the origin of a code snippet, specifically whether it was generated by an AI or written by a human.
Not ideal if you're looking for tools to improve code quality, find bugs, or simply format code, as its primary function is authorship detection.
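For intuition, the detection idea underlying this line of work (the paper's SWEET method refines the standard green-list test by applying it only at high-entropy positions) can be sketched as a one-proportion z-test: a watermarking generator biases sampling toward a pseudorandom "green list" of tokens, and the detector counts how often observed tokens land in that list. The hashing scheme and parameters below are illustrative assumptions, not the repository's actual implementation.

```python
import hashlib
import math
import random


def green_list(prev_token_id: int, vocab_size: int, gamma: float = 0.5) -> set:
    # Pseudorandom green list seeded by the previous token id.
    # Hypothetical hashing scheme for illustration; a real detector
    # must share the exact scheme used at generation time.
    seed = hashlib.sha256(str(prev_token_id).encode()).digest()
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])


def detect_z(token_ids: list, vocab_size: int, gamma: float = 0.5) -> float:
    # One-proportion z-test: watermarked text hits the green list far
    # more often than the chance rate gamma, so its z-score is large.
    hits = sum(
        cur in green_list(prev, vocab_size, gamma)
        for prev, cur in zip(token_ids, token_ids[1:])
    )
    n = len(token_ids) - 1
    if n <= 0:
        return 0.0
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

A sequence generated by always picking green-list tokens scores a z of roughly sqrt(n), well above typical detection thresholds, while unwatermarked text hovers near zero.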
Stars: 40
Forks: 8
Language: Python
License: —
Category: —
Last pushed: May 28, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/hongcheki/sweet-watermark"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
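The same lookup can be made from Python. The URL layout is taken from the curl example above; the response schema is not documented here, so field names in the returned JSON should be inspected rather than assumed.

```python
import json
import urllib.request


def quality_url(owner: str, repo: str) -> str:
    # Build the endpoint URL using the path layout from the curl example.
    return f"https://pt-edge.onrender.com/api/v1/quality/llm-tools/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    # Fetch the record as JSON. The payload's field names are unknown
    # until you inspect a real response.
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)
```

Example: `fetch_quality("hongcheki", "sweet-watermark")` retrieves the record shown on this page, subject to the 100 requests/day anonymous limit.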
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
Attack method for inducing hallucinations in LLMs
amir-hameed-mir/Sirraya_LSD_Code
Layer-wise Semantic Dynamics (LSD) is a model-agnostic framework for hallucination detection in...
NishilBalar/Awesome-LVLM-Hallucination
Up-to-date curated list of state-of-the-art work on large vision-language model hallucinations...
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...