Xander Steenbrugge
Independent AI researcher, digital artist, public speaker, online educator and founder of the https://t.co/BDW8z5h0Fd digital media platform.
Sep 6, 2022 • 7 tweets • 7 min read
I continued exploring #stablediffusion's latent space over the weekend and oh my; there's still a LOT of treasure to be discovered inside this magnificent neural universe!

Here's a quick thread with some of my personal favorites and how I found them…

The fact that all this visual splendor is compressed into just 4 GB of neural network weights totally blows my mind. Call it compression, call it emergence; it's just 🤯🤯

Getting bored with a StyleGAN model after looking at samples for 20 minutes seems like a very distant past now…
Aug 17, 2022 • 13 tweets • 3 min read
Ok, so first of all, #stablediffusion did not come with code to make videos, so I came up with a way to interpolate between encoded prompt vectors (no worries if you don't know what that means) and thereby create video sequences from prompt sequences (1/n)
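The thread doesn't share the actual code, but the idea of interpolating between encoded prompt vectors can be sketched in a few lines. The following is a minimal NumPy illustration (not the author's implementation): spherical interpolation (slerp) between prompt embeddings is a common choice because it keeps intermediate vectors at a plausible norm, and a prompt *sequence* becomes a frame sequence by chaining the interpolations. The function names here are hypothetical.

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical interpolation between two embedding vectors.

    Keeps intermediate points on the arc between the two vectors,
    which tends to give smoother frames than plain linear blending."""
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-6:  # nearly parallel vectors: fall back to lerp
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

def interpolate_prompts(embeddings, frames_per_pair=30):
    """Turn a list of prompt embeddings into one conditioning vector
    per video frame by chaining slerps between consecutive prompts."""
    frames = []
    for a, b in zip(embeddings[:-1], embeddings[1:]):
        for t in np.linspace(0.0, 1.0, frames_per_pair, endpoint=False):
            frames.append(slerp(a, b, t))
    frames.append(embeddings[-1])
    return frames
```

Each returned vector would then be fed to the diffusion sampler as the text conditioning for one frame.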
Next, I had to come up with a visual narrative that would work well with the style of the Diffusion interpolations. You can't just tell any story here: like with any medium, you have to work within the constraints of the technology. (2/n)
Aug 13, 2022 • 5 tweets • 3 min read
"Voyage through Time"
is my first art piece using #stablediffusion, and I am blown away by the possibilities…

We're crossing a threshold where generative AI is no longer just about novel aesthetics, but is evolving into an amazing tool for building powerful, human-centered narratives.

This video was created using 36 consecutive phrases that define the visual narrative.

To find the best possible sequence, I tried over a thousand different prompts and seeds and applied many "prompt engineering" tricks in my code to figure out what works and what doesn't.
Apr 23, 2022 • 7 tweets • 3 min read
I discovered a bug in my own Diffusion + CLIP pipeline and suddenly the samples are unreal… 🤯
Here's
"Just a liquid reality..."
#AIart #notdalle2 #Diffusion #clip
"The magnificent portal of mother Gaia"
Feb 24, 2022 • 10 tweets • 7 min read
This is a "3D-diffusion" video created using a combination of four different AI models 🤯

Welcome to the metaverse! 🌌😎

There's such incredible potential here that I want to explain how I made this, so here's a thread! (1/n)

The two main models that draw the pixels are a diffusion model and @OpenAI's CLIP model, which guides the diffusion process toward a language prompt.
This idea was introduced by @advadnoun and later refined by many other creatives. My talk at @Kikk_Festival further explains this:
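To make the guidance idea concrete: at each denoising step, the intermediate image is nudged along the gradient of an image-text similarity score, so the sample drifts toward the prompt. Below is a toy, self-contained sketch of that loop (not the author's pipeline, and no real CLIP model): a differentiable quadratic score with an analytic gradient stands in for CLIP, and a simple shrink-toward-zero map stands in for the denoiser.

```python
import numpy as np

def guided_denoise_step(x, denoise, score_grad, guidance_scale=0.1):
    """One step of guided sampling: denoise, then nudge the result
    along the gradient of a similarity score (CLIP's role)."""
    x = denoise(x)
    return x + guidance_scale * score_grad(x)

# Toy stand-ins: "denoising" shrinks toward zero, and the "CLIP score"
# is -||x - target||^2, whose gradient points at the target.
target = np.array([1.0, -2.0, 0.5])
denoise = lambda x: 0.9 * x
score_grad = lambda x: 2.0 * (target - x)

x = np.random.default_rng(0).normal(size=3)
for _ in range(200):
    x = guided_denoise_step(x, denoise, score_grad, guidance_scale=0.1)
# Without guidance x would collapse to zero; with it, x settles
# at a balance point pulled toward the target.
```

In a real CLIP-guided pipeline the gradient comes from backpropagating the CLIP image-text similarity through the image, but the update structure is the same.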
Jan 21, 2022 • 5 tweets • 2 min read
Finally playing around with CLIP + diffusion models.

12 GPU hours in, I gotta say I'm pretty impressed with the difference in aesthetics compared to VQGAN 👌
Big thanks to @RiversHaveWings & @Somnai_dreams for providing great starting code!

"a dystopian city" "The real problem of humanity is that we have Paleolithic emotions, medieval institutions and godlike technology"
Dec 19, 2021 • 4 tweets • 1 min read
Just felt like sharing some beautiful images, these are still hot from the GPU...

"The elder sphere" "The Engine"
Oct 6, 2021 • 4 tweets • 1 min read
Niiice! Hooking this up to CLIP as soon as the weights are released 🤞🤞😋

TLDR:
1. Replaces the CNN encoder and decoder with a vision transformer 'ViT-VQGAN', leading to significantly better speed-quality tradeoffs compared to CNN-VQGAN
Oct 6, 2021 • 7 tweets • 4 min read
Note to self: don't use default matplotlib colormaps to make digital art 🤦‍♂️😅

New samples from my 'color-quantized VQGAN' are looking great!

Here's "π‘¨π’„π’„π’π’“π’…π’Šπ’π’ˆ 𝒕𝒐 π‘Ύπ’Šπ’•π’•π’ˆπ’†π’π’”π’•π’†π’Šπ’, 𝒂 π’‘π’Šπ’„π’•π’–π’“π’† π’Šπ’” 𝒂 π’Žπ’π’…π’†π’ 𝒐𝒇 π’“π’†π’‚π’π’Šπ’•π’š"

#clip #AIart "π’Žπ’š 𝒉𝒆𝒂𝒅 π’Šπ’” 𝒇𝒖𝒍𝒍 𝒐𝒇 π’π’π’Šπ’”π’†"
Oct 5, 2021 • 5 tweets • 3 min read
Inspired by the amazing work of @HvnsLstAngel, I've been experimenting with a "color-quantized VQGAN".
Essentially, I introduced a codebook of possible colors and applied quantization in RGB space.

It's always fascinating how removing entropy can make samples more interesting…

"Inception"
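The core operation described above, snapping colors to a fixed codebook, can be sketched very compactly. This is a minimal NumPy illustration of nearest-neighbor quantization in RGB space, not the author's VQGAN code; the function name and palette are made up for the example.

```python
import numpy as np

def quantize_colors(image, palette):
    """Snap every pixel to its nearest palette color (L2 distance in
    RGB space), i.e. quantize the image against a color codebook.

    image:   (H, W, 3) float array of RGB values
    palette: (K, 3) float array, the color codebook
    """
    flat = image.reshape(-1, 3)                                   # (H*W, 3)
    d2 = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(-1)  # (H*W, K)
    idx = d2.argmin(axis=1)                                       # nearest codebook entry
    return palette[idx].reshape(image.shape)
```

In a VQGAN-style setup this quantization would sit inside the generator, so the model learns to work within the reduced color entropy rather than having colors clamped after the fact.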