bvobart/mllint
`mllint` is a command-line utility to evaluate the technical quality of Python Machine Learning (ML) projects by means of static analysis of the project's repository.
It takes your project's source code, data, and configuration as input and produces a Markdown-formatted report detailing areas for improvement based on ML engineering best practices, helping data scientists and ML engineers hold their Python ML/AI projects to a high standard.
No commits in the last 6 months. Available on PyPI.
Use this if you are a data scientist, ML engineer, or project manager looking to assess and improve a Python Machine Learning or AI project's code quality and adherence to best practices.
Not ideal if your project is not Python-based, or if you want a tool that actively refactors or fixes code rather than producing a diagnostic report.
Stars
80
Forks
4
Language
Go
License
GPL-3.0
Last pushed
Jun 20, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/bvobart/mllint"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
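To consume the endpoint programmatically rather than via `curl`, something like the sketch below works with only the standard library. The response field names (`repo`, `stars`, `forks`, `language`, `license`) are assumptions modelled on the stats shown on this page, not a documented schema, so inspect the actual payload before relying on them.

```python
import json
from urllib.request import urlopen

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/bvobart/mllint"

def fetch_stats(url: str = URL) -> dict:
    """Fetch the tool's quality record as a dict (requires network access)."""
    with urlopen(url) as resp:
        return json.load(resp)

# Offline illustration with a hypothetical payload shaped like this page's stats:
sample = '{"repo": "bvobart/mllint", "stars": 80, "forks": 4, "language": "Go", "license": "GPL-3.0"}'
record = json.loads(sample)
print(record["stars"], record["language"])  # → 80 Go
```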
Higher-rated alternatives
jeinlee1991/chinese-llm-benchmark
ReLE benchmark: Chinese LLM capability evaluation (continuously updated). Currently covers 359 large models, including chatgpt, gpt-5.2, o4-mini, Google gemini-3-pro, Claude-4.6, Baidu ERNIE...
ApextheBoss/canary
🐤 Know when your LLM provider silently degrades. Automated quality testing for AI models. Like...
Software-Engineering-Arena/SWE-Chatbot-Arena
Compare chatbots pairwise via multi‑round evaluations for SE tasks.
oolong-tea-2026/arena-ai-leaderboards
📊 Daily auto-updated snapshots of all Arena AI (LMSYS Chatbot Arena) leaderboards — LLM, Vision,...
abject-milkingmachine273/llm-cost-dashboard
Monitor LLM token costs in real time with a terminal dashboard offering per-request tracking,...