JosephTLucas/lintML
A security-first linter for code that shouldn't need linting
This tool helps machine learning researchers and security teams identify potential security risks in their Python and Jupyter Notebook code, especially during the experimental and training phases. It takes your project directory as input and checks for issues like plaintext credentials, unsafe deserialization, or untrustworthy assets. The output is a summary report detailing any vulnerabilities found, helping ensure research integrity before code moves to production.
No commits in the last 6 months.
Use this if you are a machine learning researcher or security analyst who needs quick security checks for your Python scripts or Jupyter Notebooks, focusing on preventing training-time vulnerabilities without extensive setup.
Not ideal if you primarily need a linter for code formatting or are working with machine learning libraries other than PyTorch, as broader library support is still under development.
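The "unsafe deserialization" risk mentioned above is worth making concrete. The snippet below is an illustrative sketch (not lintML's own code) of why loading an untrusted pickle, such as a downloaded model checkpoint, is exactly the kind of pattern a security linter flags: pickle executes code during deserialization.

```python
import pickle

class Malicious:
    """A pickle payload that runs code when deserialized."""
    def __reduce__(self):
        # pickle calls __reduce__ to decide how to rebuild the object;
        # returning (eval, ...) means merely loading the file executes
        # attacker-controlled code.
        return (eval, ("2 + 2",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # evaluates eval("2 + 2") during load
print(result)  # 4 -- proof that loading ran code, not just data
```

This is why tools in this space steer users toward formats that store only tensors (e.g. safetensors) for untrusted model assets.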
Stars: 18
Forks: —
Language: Python
License: GPL-3.0
Category: —
Last pushed: Sep 12, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JosephTLucas/lintML"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
TalEliyahu/Awesome-AI-Security
Curated resources, research, and tools for securing AI systems
The-Art-of-Hacking/h4cker
This repository is maintained by Omar Santos (@santosomar) and includes thousands of resources...
aw-junaid/Hacking-Tools
This repository is a collection of ethical hacking tools and malware for penetration...
jiep/offensive-ai-compilation
A curated list of useful resources that cover Offensive AI.
Kim-Hammar/csle
A research platform to develop automated security policies using quantitative methods, e.g.,...