rkl71/MambaRec
[CIKM 2025] Source code for "Modality Alignment with Multi-scale Bilateral Attention for Multimodal Recommendation".
This project helps e-commerce companies and online retailers improve product recommendations by jointly analyzing product descriptions (text) and product images. Given your existing text and image data for products, it produces better-personalized product suggestions. Typical users: e-commerce managers, retail data scientists, and product recommendation specialists.
No commits in the last 6 months.
Use this if you need to generate more accurate product recommendations by leveraging both textual and visual information about your products.
Not ideal if you only have one type of data (e.g., just text or just images) or are looking for a plug-and-play solution without any technical setup.
Stars: 9
Forks: —
Language: Python
License: MIT
Category: ml-frameworks
Last pushed: Oct 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rkl71/MambaRec"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
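If you'd rather fetch this data from Python than shell out to curl, a minimal stdlib-only sketch is below. The endpoint URL comes from the curl command above; the assumption that the response body is JSON (and any field names in it) is not confirmed by this listing, so the fetch helper only decodes the payload without interpreting it.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for one repository's quality data."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch quality data for a repo.

    Assumption: the API returns a JSON object; the schema is not
    documented here, so the decoded dict is returned as-is.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


print(quality_url("ml-frameworks", "rkl71", "MambaRec"))
# https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/rkl71/MambaRec
```

Note the anonymous rate limit of 100 requests/day; add your API key (per the service's instructions) if you need the 1,000/day tier.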
Higher-rated alternatives
open-mmlab/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis
Papers, code and datasets about deep learning and multi-modal learning for video analysis
KaiyangZhou/pytorch-vsumm-reinforce
Unsupervised video summarization with deep reinforcement learning (AAAI'18)
adambielski/siamese-triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch