johndpope/Emote-hack
Emote Portrait Alive: using AI to reverse-engineer code from the white paper (abandoned).
This project aims to create animated talking head videos from a single image and an audio file. It takes a static portrait and an audio recording (like a speech or song) to produce a video where the person in the portrait moves their head and mouth in sync with the sound. This is useful for content creators, marketers, or educators looking to bring still images to life with spoken content.
184 stars. No commits in the last 6 months.
Use this if you want to generate realistic, expressive talking head videos from just a picture and an audio track.
Not ideal if you need a polished, ready-to-use animation tool: this is an incomplete, experimental attempt to replicate a research paper.
Stars: 184
Forks: 8
Language: Python
License: —
Category:
Last pushed: Nov 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/johndpope/Emote-hack"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
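The same endpoint can be called from Python. This is a minimal sketch using only the standard library; the URL shape is taken from the curl example above, and the response's JSON fields are not documented here, so treat them as unknown until inspected.

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repo (path shape from the curl example above)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch a repo's quality data and parse it as JSON.

    The response schema is an assumption (a JSON object); inspect the
    returned dict's keys before relying on any particular field.
    """
    with urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example (unauthenticated; counts against the 100 requests/day limit):
# data = fetch_quality("diffusion", "johndpope", "Emote-hack")
```

With a free API key, you would presumably attach it to the request (the header or query-parameter name is not shown on this page, so it is omitted here).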
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...