Awsome-Deep-Learning-for-Video-Analysis and Awesome-Multimodal-Papers

Awsome-Deep-Learning-for-Video-Analysis
  Maintenance: 0/25
  Adoption: 10/25
  Maturity: 16/25
  Community: 25/25
  Stars: 836
  Forks: 174
  Downloads:
  Commits (30d): 0
  Language:
  License: MIT
  Flags: Stale 6m, No Package, No Dependents

Awesome-Multimodal-Papers
  Maintenance: 10/25
  Adoption: 10/25
  Maturity: 16/25
  Community: 14/25
  Stars: 317
  Forks: 23
  Downloads:
  Commits (30d): 0
  Language:
  License: MIT
  Flags: No Package, No Dependents

About Awsome-Deep-Learning-for-Video-Analysis

HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis

Papers, code and datasets about deep learning and multi-modal learning for video analysis

This resource helps researchers and practitioners explore current techniques for understanding video content. It organizes papers, datasets, and tools, with a focus on combining information from multiple sources such as video, audio, and text. Anyone working on video analysis projects, especially those involving multimodal data, will find this collection useful for discovering new methods and resources.

Topics: video-analysis, multimodal-learning, computer-vision, machine-learning-research, data-science

About Awesome-Multimodal-Papers

friedrichor/Awesome-Multimodal-Papers

A curated list of awesome Multimodal studies.

This is a curated collection of research papers on multimodal studies, which combine different types of data such as images, text, and audio. It lets researchers quickly find relevant studies, including their publication venues and, where available, code or project pages. Its primary users are researchers and academics in fields like AI, machine learning, and computer vision who need to stay current on advances in combining multiple data modalities.

Topics: AI-research, machine-learning, computer-vision, natural-language-processing, academic-research

Scores updated daily from GitHub, PyPI, and npm data.