sandipan211/LoCATe-GAT
Official PyTorch implementation of the IEEE TETCI 2024 paper LoCATe-GAT
This project helps video analysts automatically recognize actions in video footage, including actions that were never seen during training (zero-shot action recognition). You provide video datasets, and the system outputs classifications for both familiar and previously unseen activity classes. It's designed for researchers and practitioners working with large video collections who need to identify diverse actions efficiently.
No commits in the last 6 months.
Use this if you need to classify a wide range of actions in videos, including new or rare ones, without retraining for each new action type.
Not ideal if you only need to recognize a small, fixed set of well-known actions and already have abundant training data for them.
Stars: 7
Forks: —
Language: Python
License: MIT
Last pushed: Apr 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sandipan211/LoCATe-GAT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
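For programmatic use, here is a minimal Python sketch of the same request as the curl command above. It only assumes the endpoint returns JSON; the response schema isn't documented here, so inspect the returned keys yourself.

import requests

# Same endpoint as the curl example above (no API key needed at the free tier).
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/sandipan211/LoCATe-GAT"

response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on rate limiting or server errors

# The body is assumed to be JSON; print it to see which fields are available.
data = response.json()
print(data)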
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...