agentic-learning-ai-lab/lifelong-memory

Code for LifelongMemory: Leveraging LLMs for Answering Queries in Long-form Egocentric Videos

Quality score: 29 / 100 (Experimental)

This tool helps you quickly find specific moments and answer questions about actions captured in long, first-person video recordings. You provide egocentric footage (e.g., from a head-mounted camera or bodycam) along with questions in natural language, and it returns precise answers or timestamps for the relevant events. It's designed for researchers and analysts who need to review extensive first-person video data efficiently.

Use this if you need to extract specific information or answer questions from many hours of first-person video content.

Not ideal if your videos are not egocentric (first-person perspective) or if you need to process short-form, general video content.

video-analysis first-person-video activity-recognition qualitative-research behavioral-studies
No package · No dependents
Maintenance: 6 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 28
Forks:
Language: Python
License: MIT
Last pushed: Oct 27, 2025
Commits (30d): 0

Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/agentic-learning-ai-lab/lifelong-memory"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.