george0st/qgate-model

ML/AI meta-model, used in MLRun/Iguazio/Nuclio, see qgate-sln-

Score: 40 / 100 (Emerging)

This project helps MLOps engineers, data scientists, or solution architects evaluate and test different machine learning platforms or solutions. It uses a standardized machine learning model definition (in JSON) and synthetic datasets (CSV/Parquet) to compare the capabilities, functions, and quality of various ML systems. The output helps users understand how well different solutions handle common ML workflows and data types.
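
As a rough sketch of that pattern (the JSON schema, column names, and check logic here are hypothetical stand-ins, not qgate-model's actual definition format), a JSON model definition can be validated against a synthetic dataset like this:

    # Hypothetical sketch only: illustrates the JSON-definition-plus-
    # synthetic-data pattern described above, not the project's real schema.
    import json
    import pandas as pd

    # Hypothetical model definition listing expected features and dtypes.
    model = json.loads("""
    {
      "name": "demo-model",
      "features": [
        {"name": "age",    "dtype": "int64"},
        {"name": "income", "dtype": "float64"}
      ]
    }
    """)

    # Hypothetical synthetic dataset (a CSV/Parquet file would be read
    # the same way via pd.read_csv / pd.read_parquet).
    data = pd.DataFrame({
        "age": pd.Series([31, 47], dtype="int64"),
        "income": pd.Series([52000.0, 61000.0], dtype="float64"),
    })

    # Basic conformance check: every defined feature exists with the
    # declared dtype.
    for feat in model["features"]:
        assert feat["name"] in data.columns, f"missing column: {feat['name']}"
        assert str(data[feat["name"]].dtype) == feat["dtype"], f"dtype mismatch: {feat['name']}"
    print("dataset conforms to the model definition")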

414 stars. No commits in the last 6 months.

Use this if you need an independent way to benchmark, test new versions, or perform comprehensive quality assurance on machine learning platforms and feature stores.

Not ideal if you are looking for a tool to build or train your own custom machine learning models directly, rather than evaluate ML solutions.

Tags: MLOps · ML Platform Evaluation · Feature Store Testing · Machine Learning QA · Solution Benchmarking
Status: Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 12 / 25
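
The four dimension scores (each out of 25) sum to the overall score shown above: 2 + 10 + 16 + 12 = 40.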

Stars: 414
Forks: 21
Language: Python
License: Apache-2.0
Last pushed: Aug 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/mlops/george0st/qgate-model"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
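
For scripted use, a minimal Python equivalent of the curl call above. This assumes only the endpoint shown; the response fields are not documented here, so the sketch simply pretty-prints whatever JSON comes back:

    # Minimal sketch: same endpoint as the curl example, using only the
    # standard library; no assumptions about the response schema beyond
    # it being JSON.
    import json
    import urllib.request

    url = "https://pt-edge.onrender.com/api/v1/quality/mlops/george0st/qgate-model"
    with urllib.request.urlopen(url, timeout=30) as resp:
        payload = json.load(resp)  # parse the JSON body

    print(json.dumps(payload, indent=2))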