readytensor/rt-repo-assessment

This project implements an assessment tool that evaluates the quality of AI/ML project repositories using LLMs and rule-based methods.

Overall score: 41 / 100 (Emerging)

This tool helps AI/ML practitioners and engineering teams evaluate the quality of AI/ML and data science GitHub repositories. You provide a GitHub repository URL, and it generates a detailed assessment report, highlighting strengths and weaknesses across documentation, architecture, dependencies, licensing, and code quality. It's designed for anyone managing or sharing AI projects who needs to ensure their repositories meet best practices and industry standards.

No commits in the last 6 months.

Use this if you need an automated way to ensure your AI/ML project repositories adhere to best practices before sharing them or as part of an internal quality assurance process.

Not ideal if you're looking for a general-purpose code linter for non-AI/ML projects or a tool that fixes code directly rather than just assessing it.

Tags: AI-project-management, MLOps, code-quality-assurance, data-science-workflow, repository-standards

Badges: Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 18 / 25


Stars: 10
Forks: 14
Language: Python
License: MIT
Last pushed: Sep 23, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/readytensor/rt-repo-assessment"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
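For programmatic use, the curl call above can be wrapped in a small Python helper. This is a minimal sketch: it assumes the endpoint returns JSON, and the response field names are not shown here because they are not documented in this card.

```python
# Hypothetical sketch of calling the quality API from Python.
# Only the URL pattern is taken from the curl example above; the
# shape of the JSON response is an assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("readytensor", "rt-repo-assessment"))
```

`fetch_quality` performs a live network request, so in scheduled CI jobs it is worth caching the result to stay under the daily rate limit.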