I have a 7-color e-ink screen arriving tomorrow, so I'm experimenting with custom color dithering techniques. Here is Subscapes #39 reduced to a 7-color, 600x448px paletted image.
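For the curious, this kind of fixed-palette reduction is easy to prototype with Pillow. A minimal sketch, not my exact pipeline; the RGB values below are my assumption for a typical ACeP panel's seven inks, not the panel's spec:

```python
# Minimal sketch: reduce an RGB image to a fixed 7-color palette with Pillow.
# The RGB values below are assumed approximations of the panel's inks.
from PIL import Image

PALETTE = [
    0, 0, 0,        # black
    255, 255, 255,  # white
    0, 255, 0,      # green
    0, 0, 255,      # blue
    255, 0, 0,      # red
    255, 255, 0,    # yellow
    255, 128, 0,    # orange
]

# Pillow wants the palette wrapped in a 'P'-mode image, padded to 256 entries.
pal_img = Image.new("P", (1, 1))
pal_img.putpalette(PALETTE + [0, 0, 0] * 249)

img = Image.open("subscapes-39.png").convert("RGB").resize((600, 448))
# Built-in Floyd-Steinberg error diffusion against the fixed palette
# (older Pillow spells the flag Image.FLOYDSTEINBERG).
out = img.quantize(palette=pal_img, dither=Image.Dither.FLOYDSTEINBERG)
out.save("subscapes-39-7color.png")
```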
One thing that's worth more exploration: error diffusion that isn't based on a simple scanline. On the left is a plain left-to-right scanline; on the right, I first run through the pixels with random jumps, then apply a second left-to-right pass to avoid dither patterns.
Notice the very top is problematic in both. On the left, a pattern repeats noticeably until it 'fixes' itself. On the right, the first scanline hasn't yet received any diffused error, so it doesn't match the rest.
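Roughly, the two passes look like this. A simplified numpy sketch of the idea, not the exact code behind these images: the first pass visits pixels in random order and pushes error onto the four direct neighbours, then a plain Floyd-Steinberg pass runs over the result.

```python
# Sketch of two-pass error diffusion to a fixed palette:
# pass 1 quantizes pixels in random order, pass 2 is standard Floyd-Steinberg.
import numpy as np

PALETTE = np.array([
    [0, 0, 0], [255, 255, 255], [0, 255, 0], [0, 0, 255],
    [255, 0, 0], [255, 255, 0], [255, 128, 0],
], dtype=float)

def nearest(color):
    # index of the closest palette entry (plain Euclidean distance)
    return int(((PALETTE - color) ** 2).sum(axis=1).argmin())

def random_pass(img):
    # visit every pixel once, in random order, spreading the quantization
    # error equally onto the 4 direct neighbours
    h, w, _ = img.shape
    for idx in np.random.permutation(h * w):
        y, x = divmod(int(idx), w)
        old = img[y, x].copy()
        img[y, x] = PALETTE[nearest(old)]
        err = (old - img[y, x]) / 4.0
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            if 0 <= y + dy < h and 0 <= x + dx < w:
                img[y + dy, x + dx] += err

def scanline_pass(img):
    # standard Floyd-Steinberg: left-to-right, top-to-bottom
    h, w, _ = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x].copy()
            img[y, x] = PALETTE[nearest(old)]
            err = old - img[y, x]
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16

img = np.random.rand(64, 64, 3) * 255  # stand-in for the real image
random_pass(img)
scanline_pass(img)
```

The random pass scatters the error so no single scan direction dominates; the second pass guarantees every pixel ends up exactly on a palette colour.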
Something that blows my mind with dithering is that this image is just 7 colors, but putting certain colors near each other tricks our brains into seeing more than the reduced palette.
Screen arrived! Easy setup and pretty nice quality. Here is Subscapes #042 and #039 realized on the surface. The former uses just 2 colors, the latter uses all 7.
Notice in the second photo it is unplugged—the image will persist indefinitely, which is the beauty of e-ink!
The update loop is *slow* and chaotic though, at around 30 seconds because of all the colors. Perhaps there's some room to improve it if I dive into their Python driver a little.
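For reference, pushing an image to the panel looks roughly like this. A sketch following the usual pattern from Waveshare's Python examples; the epd5in65f module and these method names are assumptions if your panel or driver differs:

```python
# Hedged sketch: send a prepared image to the 600x448 7-color panel.
# Module/method names follow Waveshare's usual driver pattern (assumption).
from PIL import Image
from waveshare_epd import epd5in65f

epd = epd5in65f.EPD()
epd.init()

img = Image.open("subscapes-39-7color.png")
epd.display(epd.getbuffer(img))  # this is the ~30 second refresh
epd.sleep()  # power down; the image stays on screen with no power at all
```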
Tools like midjourney and off-the-shelf VQGAN/CLIP notebooks give us a sort of "democratization of the image," making it very easy to generate pretty-looking content with zero prior art-making experience and almost no technical skill.
To demonstrate, I just now came up with a fairly primitive text prompt, and plugged it into #midjourney.
Prompt: "castles, oil painting, fantasy, scifi"
Output image, some minutes later:
"Anybody can be an artist" with these tools, crafting incredibly detailed and sometimes rather convincing oil paintings, photographs, pencil drawings, or what have you. This is great, but will likely have some profound changes to our relationship with the image.
a photographic walking tour of alter-London within the #midjourney latent space. fragmented shadows are cast by neural structures situated within an infinite multidimensional digital space; generated by a network trained on our collective photo media.
Westminster Abbey, Barbican Estate, St Paul's Cathedral, Tate Modern —
Tower of London, Palace of Westminster, St Pancras Station Exterior & Interior —
Digital art exhibitions shouldn’t be reduced to a few small digital screens mounted hastily in an empty room (see: Christie's, Sotheby's).
So, I thought I’d start a thread showing some more interesting ways digital and screen-based work can be exhibited.
👇 (cont)
One great example of this can be found in Ryoji Ikeda's (@ryojiikeda) work. It's often screen-based but highly experiential; I've seen his work described as an 'assault on the senses'. His recent solo show at 180 Strand in London was exceptional.
Another artist of note is Rafaël Rozendaal (@newrafael), who has exhibited & curated a wide range of digital and screen-based art, highlighting the scale, richness, and colour of this kind of work.
With the growing interest in generative art, there are a lot of opportunists and shady platforms emerging, largely built on open source generative art sketches that artists have released over the years.
I have had a number of opportunists attempt to remix my open source sketches and generative art code for their own profit, and to build their own art platforms (even when it is against my explicit license). This has happened for years, but is being exacerbated by recent interest.
My advice to artists: the code for your generative art is the key to your artistic intellectual property. It's probably best not to publish it uncompressed on the internet, regardless of what license you think will protect it.