Our GPU organ plays music acoustically by controlling the RPM of each fan
The Call is now open to the public until February at @SerpentineUK in Hyde Park, London
It plays along with music generated from a diffusion model that creates new songs based on the (volunteered) training data of choirs from across the UK
It holds a symbolic AI score generating model encased in brass, that can generate infinite scores for it to play
We made a songbook (designed by @_MichaelOswell_ and @casedeclined) specifically for AI training.
If you sing all the songs in it you will have fed a model every phoneme in the English language
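The coverage idea behind such a songbook can be sketched in code. This is a toy illustration, not the project's actual tooling: the phoneme inventory and pronunciation lexicon below are tiny hypothetical stand-ins (a real check would use something like the CMU Pronouncing Dictionary's ARPAbet transcriptions).

```python
# Sketch: verify that a songbook's lyrics exercise a full phoneme inventory.
# TARGET_PHONEMES and LEXICON are toy stand-ins for illustration only.

TARGET_PHONEMES = {"HH", "EH", "L", "OW", "W", "ER", "D", "S", "IH", "NG"}

LEXICON = {
    "hello": ["HH", "EH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
    "sing": ["S", "IH", "NG"],
}

def covered_phonemes(lyrics: str) -> set[str]:
    """Return the set of target phonemes the lyrics would make a singer produce."""
    phones: set[str] = set()
    for word in lyrics.lower().split():
        phones.update(LEXICON.get(word, []))
    return phones & TARGET_PHONEMES

def coverage_gap(lyrics: str) -> set[str]:
    """Phonemes the songbook still needs a song for."""
    return TARGET_PHONEMES - covered_phonemes(lyrics)

songbook = "hello world sing"
assert coverage_gap(songbook) == set()  # every target phoneme is sung
```

Singing through a book built this way guarantees the recordings contain at least one example of every phoneme in the target inventory.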
We took this book on tour to record 15 choirs across the UK to produce the dataset
The dataset is now the largest of its kind available to researchers, future-proofed by recording each choir ambisonically to provide precise recall of the sound in each room
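For context on why ambisonic capture future-proofs a recording: a first-order ambisonic (B-format) signal stores the sound field directionally, so it can later be decoded to any speaker layout. A minimal sketch of the standard FuMa first-order encoding equations, purely illustrative and not the project's actual pipeline:

```python
import math

def encode_first_order(sample: float, azimuth: float, elevation: float):
    """Encode one mono sample into first-order B-format (FuMa W, X, Y, Z).

    Angles are in radians. W is the omnidirectional component (scaled by
    1/sqrt(2) in the FuMa convention); X, Y, Z are the figure-eight
    components along the front, left, and up axes.
    """
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A source straight ahead on the horizontal plane excites only W and X:
w, x, y, z = encode_first_order(1.0, azimuth=0.0, elevation=0.0)
```

Because direction is encoded rather than baked into a fixed channel layout, the same recording can be re-rendered for rooms and speaker arrays that don't exist yet.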
This dataset is owned by the choirs in question through a new IP structure we created alongside the @SerpentineUK Future Art Ecosystems team to allow for common ownership of AI data
It was used to train a new polyphonic call and response model, developed with @Ircam, that allows you to sing to the model and receive a response back
The voices, and our own archives of recorded music, were also used to train a diffusion model to make entirely new, emergent songs that intermittently fill the space (infinite thanks to @zqevans and @cortexelation of @StabilityAI)
We fed generated songs back to the model recursively to prompt it to harmonize with itself, so the songs are played back in multichannel, like a choir 🤯
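The recursive loop described above can be sketched in a few lines. The `generate` function here is a stand-in stub, not the real audio diffusion model: it just shows the shape of the idea, each pass conditioned on the previous one, with all passes stacked as channels for multichannel playback.

```python
import numpy as np

def generate(prompt: np.ndarray) -> np.ndarray:
    """Stand-in for the diffusion model: returns a toy transformed copy."""
    idx = (np.arange(len(prompt)) * 0.5).astype(int)  # naive pitch-shift-ish
    return prompt[idx % len(prompt)]

def self_harmonize(seed: np.ndarray, passes: int) -> np.ndarray:
    """Feed the model its own output `passes` times; return (passes+1, n).

    Each row is one generation pass; played back together, the passes
    layer like voices in a choir.
    """
    tracks = [seed]
    for _ in range(passes):
        tracks.append(generate(tracks[-1]))
    return np.stack(tracks)

choir = self_harmonize(np.sin(np.linspace(0, 2 * np.pi, 1000)), passes=3)
```

With a real model in place of the stub, each pass hears and responds to the last, which is what lets the system harmonize with itself.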
“The Call” is open to the public now in London @SerpentineUK
Infinite thanks to @eva__jaeger @Ruthywaters @kayhannahwatson Vi Trinh @HUObrist Bettina Korek, Liz Stumpf, Richard Install, Zsuzsa Benke, sub, FAE, Ian Berman, Andrew Roberts, @algomus @Ircam @zqevans @cortexelation @_MichaelOswell_ @casedeclined @1OF1_art @fellowshiptrust and the many choirs who contributed their time and voices to this effort ❤️
The guiding principle of this project was to find the beauty in contributing to something greater than the sum of its parts, and nothing truer could be said about the genesis of this show. A monumental collective effort.
When dealing with AI models, familiar concepts like sampling and appropriation can be a guide, but are ultimately insufficient. We cannot approach a new problem in the same old way.
Ideally we do not want punitive DRM, or carte blanche appropriation of artwork from living artists
One thing that is difficult is that identities and opinions formed around older IP wars are also only helpful to a point. Fully open source positions do not work. Fully restrictive IP positions also do not work. We need new concepts and stories for how to manage a new terrain.
We've started an organization (@spawning_) to help artists navigate this new AI era
It will be chaotic and emotional, but ultimately I think that non-AI artists need options to see themselves thriving in (and not steamrollered by) the AI art economy
I spent 2016-2019 trying to reconcile the challenges raised by AI generation systems with my own art, and gave countless interviews insisting this moment was coming quickly. It is here, to stay!
It feels like the only way to play more of a role in shaping this future is to start an organization and lead by doing.
DALL-E is not a scam, but is a new substrate that requires new approaches to IP (see Holly+)
2016-2019 I tried to alert everyone to the importance of rethinking attribution and payment habits re: AIs trained on artist data. I hope it is clear now why!
PROTO was composed in parallel to my doctoral thesis written on vocal IP and AI, and for the album I made efforts to only train systems on people who consented, from ensemble members to large public training performances like this one
my goal was to create an understanding that it is most useful to view this category of AI as aggregate intelligence trained on the contributions of many!
I think the resulting generations are even more impressive when viewed from this more accurate perspective
I thought I would share it here, and our thesis on Spawning 🧫 and Identity Play 👩❤️👨
I started by showing the production work I contributed to @mariaarnaldimas and Marcel Bagés's album 'Clamor', where I timbre-transferred Maria's voice into various instruments using @GoogleMagenta's powerful DDSP.
The ornamental singing style really demonstrates the detail! 🎻
I then shared the first Holly+ tool developed with @HeardSounds, who created a way to perform polyphonic timbre transfer (!!!). We converted Maria's full instrumental into music sung by my processed voice.
I’m working with @ourZORA to create a place where people can own artworks made with my voice 🦾
and will be using the @OpenLawOfficial Tribute DAO framework so friends and family can vote on official usage, and how to use the proceeds to fund new tools to share with everyone 🛠