anonymous-author-sub/seeable

SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes

Quality score: 19 / 100 (Experimental)

This project detects deepfake videos and images, distinguishing manipulated or synthetically generated media from authentic content. You provide video files or images as input, and it returns an assessment of whether they have been manipulated. It is designed for media analysts, journalists, forensic investigators, or anyone who needs to verify the authenticity of visual content.

No commits in the last 6 months.

Use this if you need to reliably detect whether a video or image is a deepfake to prevent misinformation or verify evidence.

Not ideal if you need a tool for basic image or video editing, or for enhancing media quality rather than authenticating it.

media-verification deepfake-detection digital-forensics misinformation-combat content-authenticity
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 8 / 25
Community 5 / 25

How are scores calculated?
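The breakdown above suggests the overall /100 score is the sum of the four 25-point category scores (0 + 6 + 8 + 5 = 19 matches the listed total). A minimal sketch under that assumption:

```python
# Hypothetical sketch: assumes the overall /100 quality score is simply
# the sum of the four 25-point category scores shown above.
CATEGORY_SCORES = {
    "Maintenance": 0,
    "Adoption": 6,
    "Maturity": 8,
    "Community": 5,
}

def overall_score(categories: dict[str, int]) -> int:
    """Sum the per-category scores into the /100 total."""
    return sum(categories.values())

print(overall_score(CATEGORY_SCORES))  # prints 19
```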

Stars: 19
Forks: 1
Language: Python
License: None
Last pushed: Jun 01, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/anonymous-author-sub/seeable"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
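For scripted access, the same endpoint can be called from Python. A minimal sketch, assuming the URL pattern `/api/v1/quality/ml-frameworks/{owner}/{repo}` generalizes from the example above and that the response body is JSON (its field names are not documented here):

```python
import json
import urllib.request

# Base path assumed from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository (pattern assumed)."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (schema undocumented)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("anonymous-author-sub", "seeable"))
```

Since the free tier allows only 100 requests/day, cache responses locally rather than re-fetching on every run.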