HLTCHKUST/VG-GPLMs
The code repository for the EMNLP 2021 paper "Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization".
This project helps researchers working with large collections of online videos and their transcripts to generate concise summaries automatically. Given a video and its corresponding text transcript, it produces a brief, coherent summary that captures the essential information from both modalities. It is well suited to anyone extracting insights from multimodal content, such as media analysts or content researchers.
No commits in the last 6 months.
Use this if you need to quickly create abstractive summaries from videos and their associated transcripts by combining visual and textual information.
Not ideal if you have only text or only video data, or if you need extractive summaries that pull sentences directly from the original transcript rather than generating new ones.
Stars: 57
Forks: 9
Language: Python
License: —
Category: —
Last pushed: Jan 14, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HLTCHKUST/VG-GPLMs"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
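For scripting, here is a minimal Python sketch of the same call using the requests library. The response is assumed to be JSON, and the field names below (stars, forks, language) are assumptions mirroring the stats shown above; the actual schema is not documented here.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/HLTCHKUST/VG-GPLMs"
resp = requests.get(url, timeout=10)  # no API key needed up to 100 requests/day
resp.raise_for_status()
data = resp.json()
# Field names are assumptions based on the stats listed above.
print(data.get("stars"), data.get("forks"), data.get("language"))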
Higher-rated alternatives
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle