kyegomez/MC-ViT
Implementation of MC-ViT from the paper "Memory Consolidation Enables Long-Context Video Understanding"
This helps researchers and AI practitioners analyze very long video sequences without losing important details. You input extended video footage, and it processes this to understand complex actions and events over time, outputting a consolidated representation of the video's content. It's designed for those who work with prolonged video data for analysis or understanding.
Use this if you need to process and understand the context of exceptionally long videos, where key information might be spread across many frames, and traditional methods struggle with memory or context limitations.
Not ideal if your primary task involves short video clips or still images, or if you require real-time processing on resource-constrained devices without ample GPU memory.
Stars: 27
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Jan 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/MC-ViT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
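The same endpoint can be called from a script instead of curl. A minimal Python sketch using only the standard library, assuming the endpoint returns a JSON body (the helper names here are illustrative, not part of the API):

```python
import json
import urllib.request

# Base path of the quality endpoint shown in the curl example above
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(repo: str) -> str:
    """Build the quality-endpoint URL for a GitHub repo given as 'owner/name'."""
    return f"{API_BASE}/{repo}"

def fetch_quality(repo: str) -> dict:
    """Fetch and decode the quality record for a repo (assumes a JSON response)."""
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return json.load(resp)

# Example (performs a network request):
# data = fetch_quality("kyegomez/MC-ViT")
```

With the free tier this works without any key; if you register for a key, consult the service's docs for how to attach it, as the authentication scheme is not shown here.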
Higher-rated alternatives
jaehyunnn/ViTPose_pytorch
An unofficial implementation of ViTPose [Y. Xu et al., 2022]
UdbhavPrasad072300/Transformer-Implementations
Library - Vanilla, ViT, DeiT, BERT, GPT
tintn/vision-transformer-from-scratch
A Simplified PyTorch Implementation of Vision Transformer (ViT)
icon-lab/ResViT
Official Implementation of ResViT: Residual Vision Transformers for Multi-modal Medical Image Synthesis
gupta-abhay/pytorch-vit
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale