arghyasur1991/LiveTalk-Unity

LiveTalk is a unified, high-performance talking head generation system that combines the open-source LivePortrait and MuseTalk projects. The PyTorch models from these projects have been ported to ONNX format and optimized for CoreML to enable efficient on-device inference in Unity.

Score: 45 / 100 (Emerging)

This tool helps game developers, content creators, and educators bring characters to life with realistic talking animations. By taking a static character image and an audio input (either spoken word or generated speech), it creates a dynamically animated talking head, complete with lip-sync and facial expressions. The output is a real-time character animation that can be integrated directly into Unity projects for games, virtual assistants, or interactive stories.

Use this if you need to create interactive, expressive talking avatars or NPCs in a Unity application and require efficient, on-device processing.

Not ideal if you are looking for a cloud-based solution for video generation or if your project is not built within the Unity engine.

game-development character-animation virtual-assistants interactive-storytelling content-creation
No Package · No Dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 15 / 25
Community 13 / 25


Stars: 25
Forks: 4
Language: C#
License: MIT
Last pushed: Jan 15, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/arghyasur1991/LiveTalk-Unity"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
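The endpoint path in the example above appears to follow the pattern `/api/v1/quality/<category>/<owner>/<repo>`. A small Python helper can build that URL for any repository; note that the pattern is an assumption inferred from the single documented example and may not hold for every category:

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build a quality-API URL for a repository.

    The path pattern is inferred from the documented example
    and is not guaranteed by the API itself.
    """
    # quote() percent-encodes any characters unsafe in a URL path segment
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("voice-ai", "arghyasur1991", "LiveTalk-Unity"))
# → https://pt-edge.onrender.com/api/v1/quality/voice-ai/arghyasur1991/LiveTalk-Unity
```

The resulting URL can then be fetched with `curl` or any HTTP client, subject to the rate limits noted above.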