I'm still intrigued by the recent wave of »fake AI films«: stills from movies that never existed, generated with Midjourney, Stable Diffusion, etc. This space opera, which David Cronenberg never directed, may be the most popular (and most despised), but there are many others 1/15
For me, these »films that don't exist« are more than just a nice gimmick; the intense reactions they evoke tell us something about the cultural moment we live in and our relationship to the images of the past 2/15 nytimes.com/interactive/20…
Why does it seem more attractive to use AI to conjure up movies from an alternate past than to imagine future movies? And why do people get so upset about these mash-ups that they even threaten those who produce them? 3/15 buzzfeednews.com/article/chriss…
In a way, all these »movies that never existed« resemble what Hollywood calls »high-concept movies«: movies whose premise can be summed up and pitched in just a few words, perhaps even in the title alone (think »Snakes on a Plane« or »Cocaine Bear«) 4/15
In the case of fake AI movies, such as this non-existent Soviet version of »Home Alone«, the high concept almost always operates according to a logic of »what if?« – what if a popular movie had been made by a famous auteur and/or in a different era? 5/15
This »what if?« logic rests on a structural combinatorics; it is a logic of language, and it underlies what one might call a »grammar« of AI prompts: the production logic of txt2img generation is very much rooted in the free recombinability of semantic concepts 6/15
In this respect, fake AI films like »Jodorowsky's Tron« are exemplary products of AI-generated imagery, not unlike DALL-E's iconic otter in the style of Vermeer, as both use the combinatorial logic of the prompt to attach historical »styles« to previously unrelated subjects 7/15
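(A side note for the technically minded: this combinatorial »grammar« can be sketched in a few lines of code. The subjects and styles below are my own illustrative stand-ins, not the actual prompts behind any of the images discussed here.)

```python
# Minimal sketch of the combinatorial prompt "grammar": a subject is freely
# recombined with a historical "style". All pairings here are illustrative.
import itertools

subjects = ["Tron", "Home Alone", "an otter with a pearl earring"]
styles = [
    "directed by Alejandro Jodorowsky, 1970s film still",
    "Soviet film adaptation, 1980s",
    "in the style of Johannes Vermeer",
]

for subject, style in itertools.product(subjects, styles):
    print(f"{subject}, {style}")
```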
Thus, there's also an elective affinity between high concept, AI prompts, and internet memes – they all draw on an existing pool or archive of familiar and recognizable tropes, clichés, and stereotypes, and remix and reshuffle them to surprising, even comical effect 8/15
But unlike Vermeer's »Otter with a Pearl Earring,« David Cronenberg’s equally non-existent »Galaxy of Flesh« was met not with mild amusement, but with quite a bit of hostility. Why is that? My guess: it has to do with nostalgia 9/15
It's no coincidence that almost all these non-existent films, here Kurosawa's »Staru Waru« (1972), are said to have been produced in the 1970s and 80s, the era before CGI: many of them are digital hallucinations of analog visual effects 10/15 japanization.org/staru-waru-ref…
In more than one way, these retro movie mash-ups evoke a nostalgia for pre-digital times, a symptom of what Simon Reynolds once called »retromania«. At the same time, however, their means of production are decidedly anti-nostalgic 11/15
Nostalgia is all about loss: it's a longing for a past that is ultimately irretrievable. The fetishized objects of nostalgia are thus those cultural relics that keep alive some traces of that lost past. Such relics are by definition rare, and all the more precious 12/15
DALL-E, Midjourney, Stable Diffusion, etc., on the other hand, produce images that are neither rare nor precious: an inflation of quasi-nostalgic imagery, turning the irretrievable past into a retrievable resource, endlessly remixable 13/15
My guess, at least, is that those who are offended by these fake AI films experience them as a phony simulation of something in which they are looking for authenticity, as a digital devaluation of a pre-digital past for which they feel deeply nostalgic 14/15
And although I personally don't feel that way, it seems hard to argue with that 15/15
(I'm still not sure I understand both the fascination and the hostility, and I need to think more about it, but I'm very indebted to all those with whom I've been able to discuss these things on the 54 books discord, especially @jbirken, @beritmiriam, @Johannes42 and @pookerman)
In a wonderful talk about generative AI and fan art, @nicolleness just shared this example of a fake AI movie: Star Wars, reimagined as Tarkovsky films. And now, I have to admit, I share some of the resentment: this doesn't look like Tarkovsky at all! shvedcreative.com/fan-art
Strange thing, though: on the website itself, the name Tarkovsky is no longer mentioned; it's now presented as a mere Star Wars fan art project
PPS: @keithscho points to an interesting aspect: the visual artifacts resulting from video transfer etc. may not only distract you from the artificiality of the images, they may also make you fill in the gaps with your own imagination #ArtificialNostalgia
If the default #PlatformRealism of AI image synthesis tools can essentially be described as a second-order aesthetic of generic images, it's particularly revealing what #Midjourney does when asked to generate the image of something specific, say a famous building. A thread … 1/6
You've probably recognized the building depicted above as New York's Guggenheim Museum. However, it's far from an accurate representation of Frank Lloyd Wright's famous design. It's faithful only in the most recognizable features, while the details are treated rather freely 2/6
This seems true of most images in this thread: what they depict is less a specific building than its reproducible cliché. However, this transformation took place long before AI: the more images of a famous landmark circulate, the more it becomes a generic icon 3/6
With each update, tools like #Midjourney promise us more and more »realistic« representations – but the »reality« these images represent has little to do with the one we live in. Rather, they are best described as #PlatformRealism: a second-order aesthetic of generic images 1/9
In an age of networked online content, generic images are ubiquitous. No online text, no web page, and hardly any social media post seems complete without at least one accompanying image, even if it provides no additional information (of course, this thread also has images) 2/9
Such images are not mere illustrations but attractors meant to make content more visible, shareable & likable. Mostly redundant text-image combinations are the default format of online visuality; content management systems even expect you to provide an image for each entry 3/9
Recently, #Midjourney introduced a new parameter called »weird«, which aims to make results more »unexpected«. This is notable for several reasons, not least because it highlights what the company considers »expected« and thus »normal«: images like this one, for example 1/6
According to MJ, the image above, depicting an all-white 1950s nuclear »family enjoying a picnic«, represents the degree zero of »weirdness«. Pump up the algorithmically generated »weirdness« to 50, and the nostalgic vibe goes down a notch, but whiteness remains the default 2/6
Next step: »weirdness« at 100 – some funny things going on here, but we're still in a white middle-class dream world, just a little bit quirkier. Increased »weirdness« doesn't seem to affect MJ's ideological baseline so much as it allows for less conventional compositions 3/6
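(For reference, a rough sketch of what such a series of prompts might look like. The prompt wording and syntax below are my own assumptions, based on Midjourney's documented »--weird« parameter, not the exact prompts behind these images.)

```python
# Hypothetical reconstruction of the prompt series described above,
# assuming Midjourney's "--weird" parameter (higher = more "unexpected").
base_prompt = "family enjoying a picnic"

for weirdness in (0, 50, 100):
    # in the Midjourney Discord bot this would be entered via /imagine
    print(f"/imagine prompt: {base_prompt} --weird {weirdness}")
```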
Diffusion models like Midjourney have been marketed primarily as a cheap way to produce images. And that's a problem, because in many cases they are more a means of re-production that exploits and devalues human labor. But what if we use them as tools to study images? 1/9
What's most troubling about these models from a creative viewpoint seems to be their most interesting aspect from a scholarly perspective: they are extremely good at identifying, synthesizing, and reinforcing visual patterns and stereotypes. They're basically cliché detectors 2/9
This, I'd argue, makes them potentially very powerful tools for art history. Since the early 1900s, the days of Warburg, Wölfflin, and Riegl, art history has been interested not only in the grand narrative of masterpieces, but also in the anonymous patterns of visual culture 3/9
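(How might such a »cliché detector« look in practice? A minimal sketch, assuming the open Stable Diffusion weights and the Hugging Face diffusers library; the model id and the deliberately generic prompt are my own illustrative choices.)

```python
# Sketch: sample many images from one generic prompt and compare what the
# model treats as the visual "default", i.e. its learned clichés.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a family enjoying a picnic"  # deliberately generic
images = pipe(prompt, num_images_per_prompt=8).images

for i, img in enumerate(images):
    img.save(f"picnic_{i:02d}.png")  # inspect the samples for recurring tropes
```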
So far, I've largely stayed out of the debates about whether or not AI can produce art - for me, that's just not the most interesting question about AI image generation. But as the discussion has progressed, I've developed some thoughts that I'd like to share in this thread 1/14
There's a simple answer to that question: AI cannot produce art, but of course it can be used to produce art – like (almost) anything else. Since Duchamp, Kaprow, and Sturtevant, anything can become art: a ready-made object, a social event, even a copy of someone else's work 2/14
Art in this sense is not about producing pretty pictures; it's a self-reflexive cultural practice and as such presupposes an intellectual understanding of art that machines simply don't have (and may never have). AI thus in no way challenges such a conceptual notion of art 3/14
The recent wave of pope-related AI images, and the accompanying hot takes about whether or not we've now finally left an era of »visual truth« made me think about the relationship between two modes of online image interpretation: #WildForensis and #InstantMemeification 1/9
Popular versions of image forensics have been a staple of social media for some time: People just love to speculate about whether or not a widely shared image has been manipulated, and to look for hidden clues of tampering. That's what I call #WildForensis 2/9
AI images in their current form are a perfect object of such #WildForensis: clues that an image was generated are now often so subtle that they're only visible at second glance. But you still don't need any technical skills to find them; they are usually hidden in plain sight 3/9