ComfyUI_VLM_nodes and ComfyUI-ExLlama-Nodes
These are complementary tools: ComfyUI_VLM_nodes provides Vision Language Model and creative prompt-generation nodes, while ComfyUI-ExLlama-Nodes supplies an efficient local LLM inference backend based on ExLlamaV2, so a single ComfyUI workflow can combine multimodal AI capabilities with optimized text generation.
About ComfyUI_VLM_nodes
gokayfem/ComfyUI_VLM_nodes
Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation
This project adds nodes within ComfyUI that let creative professionals, artists, and marketers generate music from images or text and build detailed prompts for AI art. You can input an image to get music, or supply keywords or descriptions to generate consistent or creative text prompts. It's designed for anyone working with visual or textual content who wants to explore generative AI for new creative outputs or content variations.
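ComfyUI workflows are node graphs serialized as JSON: each node entry has a `class_type` and an `inputs` mapping, and a link to another node's output is written as a `[source_node_id, output_index]` pair. Below is a minimal sketch of an image-to-prompt graph in that format. The node class names `ImageCaptioner` and `PromptGenerator` are placeholders, not this pack's actual node names; check the node list in ComfyUI after installing to find the real ones.

```python
import json

# Hypothetical node class names -- stand-ins for the pack's real VLM and
# creative-prompt nodes, used only to illustrate the graph structure.
workflow = {
    "1": {"class_type": "LoadImage",        # built-in ComfyUI image loader
          "inputs": {"image": "input.png"}},
    "2": {"class_type": "ImageCaptioner",   # placeholder VLM captioning node
          "inputs": {"image": ["1", 0]}},   # link: node "1", output index 0
    "3": {"class_type": "PromptGenerator",  # placeholder prompt node
          "inputs": {"text": ["2", 0], "mode": "consistent"}},
}

# A running ComfyUI instance accepts this payload via POST to its /prompt
# endpoint (default http://127.0.0.1:8188/prompt).
payload = json.dumps({"prompt": workflow})
```

The same wiring can be built visually in the ComfyUI editor; the JSON form is what the editor exports via "Save (API Format)".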
About ComfyUI-ExLlama-Nodes
Zuellni/ComfyUI-ExLlama-Nodes
ExLlamaV2 nodes for ComfyUI.
This tool lets creative professionals and AI enthusiasts generate text locally using powerful language models served through ExLlamaV2. You provide a prompt or a conversation history, and it outputs newly generated text. It's designed for users who want fine-grained control over text generation within a visual node-based workflow.
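Like any ComfyUI node pack, these nodes can also be driven programmatically by posting a workflow graph to a running ComfyUI instance's HTTP API. The sketch below assembles such a payload; the node class names (`ExLlamaLoader`, `ExLlamaGenerator`) and the model path are assumptions for illustration, not the pack's documented node titles.

```python
import json
import urllib.request

def build_llm_workflow(prompt_text: str, max_tokens: int = 128) -> dict:
    """Assemble a ComfyUI API payload for a local text-generation graph.

    Node class names below are placeholders; inspect the installed pack
    in ComfyUI for the actual loader/generator node names and inputs.
    """
    return {
        "prompt": {
            "1": {"class_type": "ExLlamaLoader",     # placeholder loader node
                  "inputs": {"model": "models/llm/example-exl2"}},
            "2": {"class_type": "ExLlamaGenerator",  # placeholder generator node
                  "inputs": {"model": ["1", 0],      # link: node "1", output 0
                             "text": prompt_text,
                             "max_tokens": max_tokens}},
        }
    }

payload = build_llm_workflow("Describe a foggy harbor at dawn.")

# To queue this on a running ComfyUI instance (default port 8188), uncomment:
# req = urllib.request.Request("http://127.0.0.1:8188/prompt",
#                              data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

Queuing through the API rather than the editor is useful for batch generation or for driving the same graph with many different prompts.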