Awesome-Video-Diffusion and Awesome-Controllable-T2I-Diffusion-Models
The two repositories are **ecosystem siblings**: Awesome-Video-Diffusion is a broad curated list covering video diffusion models, while Awesome-Controllable-T2I-Diffusion-Models is a narrower collection focused on controllable generation with text-to-image diffusion models — a capability that often serves as a building block for video diffusion tasks.
About Awesome-Video-Diffusion
showlab/Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, and various other applications.
This curated list gathers tools and resources for generating and editing videos with AI. It helps video creators, marketers, and content producers find methods to create videos from scratch, modify existing footage, or enhance video quality. Inputs can be text, images, or existing video clips, which are used to generate new scenes, apply artistic styles, or restore old footage.
About Awesome-Controllable-T2I-Diffusion-Models
PRIV-Creation/Awesome-Controllable-T2I-Diffusion-Models
A collection of resources on controllable generation with text-to-image diffusion models.
This collection gathers methods for guiding text-to-image diffusion models so that generated images match specific requirements. It covers techniques for precise control over details such as subjects, styles, and spatial arrangements, driven by a text prompt together with additional conditioning inputs. It is aimed at anyone, such as digital artists, marketers, or designers, who needs highly customized images for creative projects or product visualization.