sigstore/model-transparency
Supply chain security for ML
Verifies the integrity and origin of machine learning models. It takes a model file or directory as input and produces a verifiable signature, so users can confirm that the model hasn't been tampered with since it was trained and signed. It is aimed at data scientists, ML engineers, and anyone deploying ML models in production who needs to trust the models they run.
Use this if you need to ensure the machine learning models you are using come from a trusted source and haven't been maliciously altered.
Not ideal if you are looking for tools to evaluate model performance, detect bias, or manage model versions.
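To make the workflow concrete, here is a minimal sketch of signing and then verifying a model with the project's model_signing Python package. The builder-style calls (signing.Config, verifying.Config, use_sigstore_verifier) follow the project's documented API but may differ in the version you install, and the identity and issuer values are placeholder assumptions; check the repository README before relying on them.

import model_signing

MODEL_PATH = "path/to/model"   # a model file or directory
SIGNATURE = "model.sig"        # where the signature bundle is written

# Sign the model. With no explicit signer configured, the library's
# default (Sigstore keyless signing) is used, which triggers an OIDC login.
model_signing.signing.Config().sign(MODEL_PATH, SIGNATURE)

# Later, before loading the model, verify the signature against the
# identity that is expected to have signed it.
model_signing.verifying.Config().use_sigstore_verifier(
    identity="signer@example.com",               # placeholder identity
    oidc_issuer="https://accounts.example.com",  # placeholder issuer
).verify(MODEL_PATH, SIGNATURE)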
Stars: 226
Forks: 60
Language: Python
License: Apache-2.0
Category: ml-frameworks
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sigstore/model-transparency"
Open to everyone: 100 requests/day with no key, or 1,000 requests/day with a free key.
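The same endpoint can also be queried programmatically. A minimal Python sketch follows; the X-API-Key header name is an assumption (the documentation may use a different header or query parameter), and the JSON fields should be inspected rather than assumed.

import requests

API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/sigstore/model-transparency"
)

# Anonymous access covers 100 requests/day; pass a key for 1,000/day.
# The header name below is an assumption; check the API documentation.
headers = {"X-API-Key": "YOUR_KEY"}  # drop this dict for anonymous access

response = requests.get(API_URL, headers=headers, timeout=10)
response.raise_for_status()

data = response.json()
print(data)  # inspect the payload to see which fields the API returns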
Related frameworks
TalEliyahu/Awesome-AI-Security
Curated resources, research, and tools for securing AI systems
The-Art-of-Hacking/h4cker
This repository is maintained by Omar Santos (@santosomar) and includes thousands of resources...
aw-junaid/Hacking-Tools
This repository is a collection of different ethical hacking tools and malware for penetration...
jiep/offensive-ai-compilation
A curated list of useful resources that cover Offensive AI.
Kim-Hammar/csle
A research platform to develop automated security policies using quantitative methods, e.g.,...