Boese0601/Dyadic-Interaction-Modeling
[ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation
This model generates realistic movement and expressions for the listener in a two-person conversation. Given multimodal input from a speaker (speech and facial motion), it outputs a photorealistic video of how the listener would naturally react. This is useful for researchers and creators working on digital humans, virtual assistants, or social robotics.
No commits in the last 6 months.
Use this if you need to create believable, context-aware listener behaviors for virtual characters based on a speaker's actions.
Not ideal if you need to generate complex multi-person interactions, or if you need real-time, low-latency generation without significant computational resources.
Stars
62
Forks
6
Language
Python
License
—
Category
Last pushed
Apr 23, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/Boese0601/Dyadic-Interaction-Modeling"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
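The endpoint above follows a simple path pattern: category, then repository owner, then repository name. A minimal Python sketch of building that URL, assuming only the pattern visible in the curl example; the response schema and how an API key is passed are not documented in this listing, so the actual fetch is left as a comment.

```python
from urllib.parse import quote

# Base path taken from the curl example in this listing.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-repository endpoint: /quality/<category>/<owner>/<repo>
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("generative-ai", "Boese0601", "Dyadic-Interaction-Modeling")
print(url)
# Fetch with any HTTP client, e.g. urllib.request.urlopen(url).
# The JSON response fields are an assumption; inspect the body before parsing.
```

This reproduces the exact URL shown in the curl command above; other repositories on the same service would presumably follow the same pattern.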
Higher-rated alternatives
Mrkomiljon/awesome-generative-ai
Multimodal generative AI resources : talking heads, STT, TTS, image & video generation, and more.
NVIDIA/Maya-ACE
Maya-ACE: A Reference Client Implementation for NVIDIA ACE Audio2Face Service
OpenVGLab/OmniLottie
[CVPR 2026🔥] 🧑🎨 OmniLottie, an open-sourced multi-modal instructed vector animation generator...
jdh-algo/JoyHallo
JoyHallo: Digital human model for Mandarin
michaelzhang-ai/Speech2Video
ACCV 2020 "Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses"