ChatdollKit and Virtual-Human-for-Chatting
These projects are complementary: ChatdollKit provides the chatbot framework and conversation logic, while Virtual-Human-for-Chatting supplies the Live2D (2D) character animation layer; the two can be combined to build a fully embodied conversational agent.
About ChatdollKit
uezo/ChatdollKit
ChatdollKit enables you to make your 3D model into a chatbot
This helps you bring your 3D digital characters to life as interactive, voice-enabled chatbots. You supply a 3D model and choose a large language model (such as ChatGPT or Gemini); the output is a character that can listen, speak, express emotions, and converse naturally. It is aimed at virtual-assistant developers, content creators, and anyone building interactive virtual agents for their applications.
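The model-plus-LLM pipeline described above boils down to a perceive-think-act loop: transcribe the user's speech, send the conversation to an LLM, then route the reply text to speech synthesis and the reply's emotion to the character's animation. A minimal, library-agnostic sketch of that loop follows; every name in it is a hypothetical stand-in, not ChatdollKit's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for real services; not ChatdollKit's actual API.
def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text service."""
    return audio.decode("utf-8")  # pretend the audio is already text

def generate_reply(history: list) -> dict:
    """Stand-in for an LLM call; returns reply text plus an emotion tag
    that an animation layer could map to a facial expression."""
    last = history[-1]["content"]
    return {"text": f"You said: {last}", "emotion": "neutral"}

@dataclass
class EmbodiedAgent:
    history: list = field(default_factory=list)

    def hear(self, audio: bytes) -> dict:
        """One turn of the perceive-think-act loop."""
        text = transcribe(audio)
        self.history.append({"role": "user", "content": text})
        reply = generate_reply(self.history)
        self.history.append({"role": "assistant", "content": reply["text"]})
        # A real integration would now send reply["text"] to TTS and
        # reply["emotion"] to the 3D model's expression controller.
        return reply

agent = EmbodiedAgent()
print(agent.hear(b"hello"))  # → {'text': 'You said: hello', 'emotion': 'neutral'}
```

The point of keeping emotion alongside text in the reply is that the same LLM turn can drive both the voice and the character's expression, which is what makes the agent feel embodied rather than just narrated.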
About Virtual-Human-for-Chatting
Navi-Studio/Virtual-Human-for-Chatting
Live2D Virtual Human for Chatting based on Unity
This project helps streamers, content creators, and educators engage their audience by enabling a Live2D virtual-human avatar to chat live. It takes your voice and text inputs, processes them with AI services, and outputs a talking, expressive avatar that responds in real time. It is for anyone who wants to add a dynamic, animated character to live streams, virtual presentations, or interactive content without appearing on camera themselves.
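For the avatar to look like it is talking, the reply text ultimately has to be turned into a stream of animation parameter updates (Live2D models are driven by named numeric parameters such as mouth openness). The sketch below shows one crude way that mapping could work; the parameter names and the syllable heuristic are illustrative assumptions, not Virtual-Human-for-Chatting's actual implementation.

```python
import re
from collections import deque

# Illustrative Live2D-style parameter presets per emotion; the names
# here are assumptions, not this project's actual parameter IDs.
EMOTION_PARAMS = {
    "happy":   {"ParamMouthForm": 1.0, "ParamEyeSmile": 1.0},
    "neutral": {"ParamMouthForm": 0.0, "ParamEyeSmile": 0.0},
}

def reply_to_frames(text: str, emotion: str = "neutral") -> deque:
    """Turn a reply into a queue of mouth-movement frames so the
    avatar's lips move roughly in time with the synthesized speech."""
    frames = deque()
    base = EMOTION_PARAMS.get(emotion, EMOTION_PARAMS["neutral"])
    # Crude syllable proxy: one open/close cycle per vowel group.
    for _ in re.findall(r"[aeiou]+", text.lower()):
        frames.append({**base, "ParamMouthOpenY": 0.8})  # mouth open
        frames.append({**base, "ParamMouthOpenY": 0.1})  # mouth closed
    return frames

frames = reply_to_frames("Hello there", emotion="happy")
```

In a real pipeline the mouth values would come from the TTS audio (volume or viseme data) rather than from counting vowels, but the shape is the same: text in, a timed queue of parameter updates out, consumed each frame by the renderer.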