Stability AI
We are building the foundation to activate humanity's potential.
Dec 7, 2022 7 tweets 4 min read
We’re happy to release Stable Diffusion, Version 2.1!

With new ways of prompting, 2.0 provided fantastic results. 2.1 supports the new prompting style, but also brings back many of the old prompts!

Link → stability.ai/blog/stabledif…

Stable Diffusion v2.1-768. Credit: KaliYuga_ai

The differences are more data, more training, and less restrictive filtering of the dataset. The dataset for v2.0 was filtered aggressively by LAION's NSFW filter, which made it a bit harder to get similar results when generating people.
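If you want to try v2.1 yourself, here is a minimal text-to-image sketch. It assumes the release is available through Hugging Face diffusers under the model id stabilityai/stable-diffusion-2-1 and that a CUDA GPU is available; neither detail comes from this thread.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Assumed Hugging Face model id for the v2.1-768 release.
model_id = "stabilityai/stable-diffusion-2-1"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
# A fast multistep solver keeps the step count low; any supported scheduler would work.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# The 768 variant is meant to be sampled at 768x768.
image = pipe(
    "a professional photograph of an astronaut riding a horse",
    height=768,
    width=768,
).images[0]
image.save("astronaut.png")
```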
Dec 6, 2022 5 tweets 3 min read
The leading architecture magazine @Dezeen interviewed Stability AI’s Creative Director Bill Cusick @williamcusick about the uses of AI image generation in architecture. Here are the highlights:

“AI is the foundation for the future of creativity."

Link → dezeen.com/2022/11/16/ai-…

AI is enabling new forms of creativity for architectural design. Cusick likens designing with AI to playing chess, in that it takes a short amount of time to learn but far longer to master.
Nov 24, 2022 9 tweets 4 min read
We are excited to announce the release of Stable Diffusion Version 2!

Stable Diffusion V1 changed the nature of open source AI & spawned hundreds of other innovations all over the world. We hope V2 also provides many new possibilities!

Link → stability.ai/blog/stable-di…

Text-to-Image

The V2.0 release includes robust text-to-image models trained using a new text encoder developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases.
Nov 10, 2022 5 tweets 2 min read
Our very own @RiversHaveWings has trained a latent diffusion-based upscaler.

What does this mean and how does it work? (1/5)

The upscaler is itself a diffusion model. It was trained on a high-resolution subset of the LAION-2B dataset. Being a 2x upscaler, it can take the usual 512x512 images obtained from Stable Diffusion and upscale them to 1024x1024. (2/5)
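A minimal sketch of how a 2x latent upscaler like this could be chained after a normal Stable Diffusion generation. The use of diffusers' StableDiffusionLatentUpscalePipeline and the model ids below are assumptions for illustration, not details given in the thread.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

# Assumed model ids; the base model and the upscaler checkpoint name are illustrative.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a mountain lake at sunrise"

# Keep the 512x512 result as latents so the upscaler can refine it in latent space.
low_res_latents = pipe(prompt, output_type="latent").images

# The upscaler is itself a diffusion model: it denoises toward a 2x larger latent,
# which then decodes to a 1024x1024 image.
upscaled = upscaler(
    prompt=prompt,
    image=low_res_latents,
    num_inference_steps=20,
    guidance_scale=0,
).images[0]
upscaled.save("mountain_lake_1024.png")
```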
Oct 29, 2022 8 tweets 4 min read
We recently released two new fine-tuned decoders (trained by the amazing @rivershavewings) that improve the quality of images generated by @StableDiffusion.

Read on to see what this means and how you can try it out yourself! ↓

What does "fine-tuned decoders" even mean? Stable Diffusion is a diffusion model that operates in a compressed latent space, which is then "decoded" into a full-resolution image. This decoder is itself a trained neural network.
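To make the decoder's role concrete, here is a minimal sketch of swapping a fine-tuned decoder into a Stable Diffusion pipeline with diffusers. The library usage and the model ids (a standalone VAE checkpoint such as stabilityai/sd-vae-ft-mse) are assumptions for illustration, not taken from the thread.

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Assumed model id for a fine-tuned decoder published as a standalone VAE checkpoint.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

# The pipeline diffuses in the compressed latent space; the VAE's decoder turns the
# final latents into the full-resolution image, so swapping the VAE changes only
# that last decoding step.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("a detailed portrait photo of an old fisherman").images[0]
image.save("fisherman.png")
```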