MaurizioFD/RecSys2019_DeepLearning_Evaluation

This is the repository of our article published in RecSys 2019 "Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches" and of several follow-up studies.

Quality score: 51 / 100 (Established)

This project helps recommender systems researchers and practitioners objectively compare the performance of different recommendation algorithms. It provides a framework to evaluate various deep learning and baseline recommender models using standardized metrics and datasets. Researchers can input their chosen dataset and a set of algorithms to receive comprehensive performance metrics, aiding in robust scientific comparison and methodology improvement.
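
For orientation, here is a minimal sketch of what a single train-and-evaluate cycle looks like with this framework. The module paths and class names (Base.Evaluation.Evaluator.EvaluatorHoldout, KNN.ItemKNNCFRecommender) reflect the repository layout but are assumptions here; the repository's run_*.py scripts show the exact API.

    # Minimal sketch of one train/evaluate cycle with this framework.
    # Module paths and class names are assumptions based on the repository
    # layout; run from the repository root with the repo on PYTHONPATH.
    import scipy.sparse as sps

    from Base.Evaluation.Evaluator import EvaluatorHoldout      # assumed path
    from KNN.ItemKNNCFRecommender import ItemKNNCFRecommender   # assumed path

    # Hypothetical train/test split as sparse user-item interaction matrices.
    URM_train = sps.random(1000, 500, density=0.05, format="csr")
    URM_test = sps.random(1000, 500, density=0.01, format="csr")

    # The evaluator computes ranking metrics (Precision, Recall, MAP,
    # NDCG, ...) at the given cutoffs on the held-out interactions.
    evaluator = EvaluatorHoldout(URM_test, cutoff_list=[5, 10])

    # Train one of the baseline recommenders, then evaluate it.
    recommender = ItemKNNCFRecommender(URM_train)
    recommender.fit(topK=50, shrink=100)

    results_dict, results_string = evaluator.evaluateRecommender(recommender)
    print(results_string)

The same evaluator can be reused across several recommenders on the same split, which is what makes the side-by-side comparison of baselines and neural approaches straightforward.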

985 stars. No commits in the last 6 months.

Use this if you are a recommender-systems researcher or practitioner who needs to rigorously evaluate and compare recommendation algorithms, especially deep learning approaches, across datasets.

Not ideal if you are looking for a plug-and-play recommender system for immediate deployment rather than a research tool for comparative analysis.

recommender-systems algorithm-evaluation machine-learning-research data-science-methodology reproducibility-studies
Stale (6m) · No Package · No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25


Stars: 985
Forks: 252
Language: Python
License: AGPL-3.0
Last pushed: May 25, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/MaurizioFD/RecSys2019_DeepLearning_Evaluation"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
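
To consume the same endpoint from Python, here is a minimal sketch using only the standard library. The response schema is not documented on this page, so it simply pretty-prints whatever JSON the API returns.

    # Fetch the quality record shown above via the public API.
    # The JSON schema is not documented here, so this sketch just
    # pretty-prints whatever the endpoint returns.
    import json
    import urllib.request

    URL = ("https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
           "MaurizioFD/RecSys2019_DeepLearning_Evaluation")

    # No API key is needed within the free 100-requests/day tier.
    with urllib.request.urlopen(URL) as response:
        data = json.load(response)

    print(json.dumps(data, indent=2))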