[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding! And many more supported LMs such as miniGPT4, StableLM, and MOSS.
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
[CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning on image-text and video-text tasks.
Official code for the Goldfish model for long video understanding and MiniGPT4-Video for short video understanding
Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks
mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023)
Align and Prompt: Video-and-Language Pre-training with Entity Prompts
[NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering
SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models
[NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models
A PyTorch implementation of VIOLET
NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21)
[ACL 2020] PyTorch code for TVQA+: Spatio-Temporal Grounding for Video Question Answering
[NeurIPS 2022 Spotlight] Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations
[ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos
[CVPR 2023 Highlight] Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning
A new multi-shot video understanding benchmark Shot2Story with comprehensive video summaries and detailed shot-level captions.
Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023)
Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight)
[CVPR 2022] A large-scale public benchmark dataset for video question answering, focused on evidence and commonsense reasoning; code for the paper "From Representation to Reasoning: Towards both Evidence and Commonsense Reasoning for Video Question-Answering".