tae898/multimodal-datasets

Multimodal datasets.

Score: 40 / 100 (Emerging)

This project helps researchers and scientists working with human interaction data by organizing diverse datasets containing video, audio, and text from conversations and TV shows. It takes raw multimodal data (like video files) and processes them into structured formats, extracting features such as facial expressions, vocal characteristics, and text embeddings. Researchers studying human behavior, emotion, or communication can use this to prepare their data for analysis or model training.

No commits in the last 6 months.

Use this if you need to standardize and preprocess complex multimodal datasets like MELD or IEMOCAP for research into human interaction, emotional AI, or conversational agents.

Not ideal if you're looking for new, unprocessed datasets or if your primary focus is on single-modality data (e.g., only text analysis).

Tags: human-computer-interaction, social-robotics, computational-linguistics, emotion-recognition, multimodal-analytics
Status: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 34
Forks: 9
Language: Python
License: MIT
Last pushed: Jan 26, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/tae898/multimodal-datasets"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
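For programmatic access, here is a minimal Python sketch of the same request (assuming the standard requests library; the shape of the JSON response is not documented here, so the code only prints the raw payload):

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/nlp/tae898/multimodal-datasets"

# No API key is needed for up to 100 requests/day.
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# The response fields are an assumption; inspect the payload
# before relying on any specific key.
data = resp.json()
print(data)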