simonsanvil/DALL-E-Explained

Description and applications of OpenAI's paper about DALL-E (2021) and implementation of other (CLIP-guided) zero-shot text-to-image generation schemes

Score: 34 / 100 (Emerging)

Generate unique images from text descriptions without needing extensive training data for each new concept. You input a text prompt describing an image, and the system creates a corresponding visual output. This tool is for artists, designers, marketers, or anyone needing custom visual content quickly from simple text instructions.

No commits in the last 6 months.

Use this if you need to rapidly create diverse images from textual ideas, like visualizing a product concept or generating creative art.

Not ideal if you require pixel-perfect control over every detail of the generated image or need to work with a fixed set of predefined image categories.

generative-art concept-visualization digital-marketing content-creation image-synthesis
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 11 / 25

How are scores calculated?
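One plausible reading of the breakdown above, consistent with the displayed numbers, is that the overall score is the sum of the four subscores (each out of 25). This is an inference from the data shown, not a documented formula:

```python
# Assumption: the 0-100 overall score is the sum of the four 0-25 subscores.
# The values below are the ones displayed for this repository.
subscores = {"Maintenance": 0, "Adoption": 7, "Maturity": 16, "Community": 11}

total = sum(subscores.values())
print(total)  # 34, matching the 34/100 overall score shown above
```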

Stars: 33

Forks: 4

Language: Jupyter Notebook

License: MIT

Last pushed: Aug 11, 2022

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/simonsanvil/DALL-E-Explained"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
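The same data can be fetched programmatically. Below is a minimal Python sketch built around the curl example above; the URL path layout (`/quality/<category>/<owner>/<repo>`) is taken from that example, but the `X-Api-Key` header name and the JSON response shape are assumptions, since the page only states the rate limits:

```python
import json
import urllib.request
from typing import Optional

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL (path layout from the curl example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """Fetch the quality data for a repo.

    The X-Api-Key header name is an assumption; the docs only say a free key
    raises the limit from 100 to 1,000 requests/day.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("X-Api-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # assumes the endpoint returns JSON
```

For example, `quality_url("diffusion", "simonsanvil", "DALL-E-Explained")` reproduces the URL in the curl command above.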