JosephTLucas/lintML

A security-first linter for code that shouldn't need linting

Score: 22 / 100 (Experimental)

This tool helps machine learning researchers and security teams identify potential security risks in their Python and Jupyter Notebook code, especially during the experimental and training phases. It takes your project directory as input and checks for issues like plaintext credentials, unsafe deserialization, or untrustworthy assets. The output is a summary report detailing any vulnerabilities found, helping ensure research integrity before code moves to production.
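To illustrate the class of issue lintML is described as flagging, unsafe deserialization in Python can execute arbitrary code at load time. The sketch below is a generic demonstration of why `pickle.loads` on untrusted data is dangerous, not lintML's own code; the `Malicious` class name is hypothetical:

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to reconstruct this object.
    # An attacker can return any callable here, and pickle will
    # invoke it during deserialization.
    def __reduce__(self):
        return (eval, ("'arbitrary code executed'",))

payload = pickle.dumps(Malicious())
# Simply loading the payload runs the attacker-chosen callable.
result = pickle.loads(payload)
print(result)  # the eval call already ran during loads()
```

Because many ML artifacts (e.g., classic PyTorch checkpoints) are pickle-based, loading an untrusted model file carries the same risk, which is why a training-time scanner treats such patterns as findings rather than style issues.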

No commits in the last 6 months.

Use this if you are a machine learning researcher or security analyst who needs quick security checks for your Python scripts or Jupyter Notebooks, focusing on preventing training-time vulnerabilities without extensive setup.

Not ideal if you primarily need a linter for code formatting or are working with machine learning libraries other than PyTorch, as broader library support is still under development.

machine-learning-security research-integrity data-science-vulnerability-scan ai-safety jupyter-notebook-security
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 18
Forks:
Language: Python
License: GPL-3.0
Category: ai-red-teaming
Last pushed: Sep 12, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/JosephTLucas/lintML"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.