A small thread on analysing text rendering capability of #imagen.
Starting with a simple example: Imagen reliably renders short text without any need for cherry-picking. E.g., these are non-cherry-picked samples for writing "Text-to-Image" on a storefront.
Imagen is also capable of writing text in a variety of interesting settings. Here are some examples (1 sample picked out of 8 for each prompt).
Some more examples of interesting settings...
As the text gets longer, it naturally becomes harder and harder to render correctly, but Imagen still does well on fairly complex text prompts.
"Text", "Text-to-Image", "Text-to-Image Diffusion", "Text-to-Image Diffusion Models"
Image quality degrades with very long text renderings, e.g. "Photorealistic Text-to-Image Diffusion Models". Even so, we can still find good samples within 8 random generations.
Interestingly, the model can also write non-meaningful letter combinations pretty reliably, e.g. "magie-xett-ot" (shuffled "text-to-image").
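For anyone who wants to probe this behaviour themselves, a scrambled prompt like the one above can be built by shuffling the characters of the original string. This is a hypothetical helper, not how the example in the thread was produced; the `seed` parameter is an assumption added for reproducibility.

```python
import random


def scramble(text: str, seed: int = 0) -> str:
    """Return the same characters as `text`, in a shuffled order.

    Useful for generating non-meaningful letter combinations to test
    whether a text-to-image model renders arbitrary strings, not just
    words it has memorized.
    """
    chars = list(text)
    random.Random(seed).shuffle(chars)  # deterministic shuffle for a given seed
    return "".join(chars)


print(scramble("text-to-image"))
```

Different seeds give different scrambles; comparing renders of the original and scrambled prompts separates spelling ability from word memorization.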
We are thrilled to announce Imagen, a text-to-image model with unprecedented photorealism and deep language understanding. Explore imagen.research.google and Imagen!
A large rusted ship stuck in a frozen lake. Snowy mountains and beautiful sunset in the background. #imagen
A plush toy koala bear relaxing on a lounge chair and working on a laptop. The chair is beside a rose flower pot. There is a window on the wall beside the flower pot with a view of snowy mountains. #imagen
Imagen uses a large pre-trained language model (T5-XXL) as its text encoder, and a cascade of diffusion models to generate 1024x1024 images. Imagen outperforms all existing methods on the MS-COCO benchmark by a considerable margin.
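The cascade described above can be sketched as a three-stage pipeline: encode the prompt, generate a low-resolution base image, then upsample twice. The sketch below is purely illustrative, with all model internals replaced by NumPy stubs (random tensors and nearest-neighbour upsampling); the function names, the 4096-dim embedding size, and the 64 → 256 → 1024 stage resolutions are assumptions about the setup, not the actual Imagen code.

```python
import numpy as np


def encode_text(prompt: str, dim: int = 4096) -> np.ndarray:
    """Stand-in for the frozen T5-XXL text encoder: one vector per token."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    tokens = prompt.split()
    return rng.standard_normal((len(tokens), dim))


def base_model(embedding: np.ndarray, size: int = 64) -> np.ndarray:
    """Stand-in for the text-conditional base diffusion model (64x64)."""
    rng = np.random.default_rng(0)
    return rng.random((size, size, 3))  # HxWxC image in [0, 1)


def super_res(image: np.ndarray, factor: int) -> np.ndarray:
    """Stand-in for a super-resolution diffusion stage.

    Real stages run conditional diffusion; here we just upsample
    with nearest-neighbour so the shapes flow through correctly.
    """
    return image.repeat(factor, axis=0).repeat(factor, axis=1)


def generate(prompt: str) -> np.ndarray:
    emb = encode_text(prompt)
    img_64 = base_model(emb)          # base: 64x64
    img_256 = super_res(img_64, 4)    # first SR stage: 64 -> 256
    img_1024 = super_res(img_256, 4)  # second SR stage: 256 -> 1024
    return img_1024


image = generate("A storefront with the text 'Text-to-Image'")
print(image.shape)  # (1024, 1024, 3)
```

The point of the cascade is that each stage solves a smaller problem: the base model only has to get semantics and layout right at 64x64, and the super-resolution stages add detail conditioned on the lower-resolution output.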