kohjingyu/gill
🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models".
This tool lets creative professionals and content creators produce or retrieve images from natural-language descriptions, including prompts that interleave text and images. Given a text description, or a mix of text and images, it returns either a newly generated image that matches the prompt or a relevant image retrieved from a large collection. It suits anyone who needs visual content quickly from detailed, multimodal input.
471 stars. No commits in the last 6 months.
Use this if you need to generate unique images or find existing ones by describing them in detail, especially when your ideas blend both text and visual elements.
Not ideal if you're looking for a simple keyword search image tool or if your primary need is strictly text-to-text generation.
Stars
471
Forks
38
Language
Jupyter Notebook
License
Apache-2.0
Category
Last pushed
Jan 19, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kohjingyu/gill"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
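The curl call above can also be scripted. A minimal Python sketch using only the standard library (the JSON field names returned by the endpoint are not documented here, so treat the parsed dict as opaque until you inspect it):

```python
import json
from urllib.request import urlopen

# Public endpoint from the curl example above; 100 requests/day without a key.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/kohjingyu/gill"

def fetch_repo_quality(url: str = API_URL, timeout: float = 10.0) -> dict:
    """Fetch the repo-quality record as a dict.

    The response schema is an assumption -- inspect the returned
    keys before relying on specific fields.
    """
    with urlopen(url, timeout=timeout) as resp:
        return json.load(resp)
```

With a free API key (1,000 requests/day), you would typically pass it as a header or query parameter; check the service's docs for the exact mechanism.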
Higher-rated alternatives
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice