Sentimentio
Dynamic NFT collections capture people’s feelings 🪄 They/Them Pillars of Tomorrow: SOLD OUT https://t.co/xtYCnseXY1

Sep 22, 2022, 14 tweets

How do we express people's emotions about Tomorrowland through generative music?

Which musical aspects are self-produced and which ones are randomly generated by the algorithm?

How do happiness, anger, fear, and sadness sound in Pillars of Tomorrow?

All the answers below👇🧵🎶

In this generative piece, there are 4 instruments: bass, piano, percussion (hi-hat, kick, snare, claps, fx sounds), and a quiet pad.

All the rhythms are written by hand in Ableton, exported, and randomly selected to create the backbone of our generative music NFT.
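
A minimal sketch of that selection step, assuming the Ableton exports are loop files keyed by instrument (the file names and counts below are hypothetical, not the project's actual assets):

```typescript
// Hypothetical catalogue of rhythm loops exported from Ableton;
// the file names are illustrative placeholders.
const rhythmLoops: Record<string, string[]> = {
  bass: ["bass_01.wav", "bass_02.wav", "bass_03.wav"],
  piano: ["piano_01.wav", "piano_02.wav"],
  percussion: ["perc_01.wav", "perc_02.wav", "perc_03.wav"],
  pad: ["pad_01.wav"],
};

// Pick one pre-written rhythm loop per instrument to form the backbone.
function pickRhythms(): Record<string, string> {
  const picked: Record<string, string> = {};
  for (const [instrument, loops] of Object.entries(rhythmLoops)) {
    picked[instrument] = loops[Math.floor(Math.random() * loops.length)];
  }
  return picked;
}
```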

This approach was chosen because generative harmony is more euphonic than generative rhythm.

It's easier to control how something sounds (generative harmony) than to randomize when sounds are triggered (generative rhythm).

In music theory, there are rules that define what will sound good or not.

When followed, they enable even a visually impaired or deaf composer to produce a masterpiece.

So, we’ve chosen to express emotions through generative harmony.
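
One such rule: a major triad (root, major third, perfect fifth) tends to sound happy, while a minor triad (root, minor third, perfect fifth) tends to sound sad. A minimal sketch of building a chord with a target mood (the function is ours, not the project's code):

```typescript
// Semitone intervals above the root: major triads read as happy, minor as sad.
const triadIntervals = { happy: [0, 4, 7], sad: [0, 3, 7] };

// Build a chord as MIDI note numbers from a root note and a target mood.
function buildChord(rootMidi: number, mood: "happy" | "sad"): number[] {
  return triadIntervals[mood].map((semitones) => rootMidi + semitones);
}

buildChord(60, "happy"); // [60, 64, 67] -> C major
buildChord(60, "sad");   // [60, 63, 67] -> C minor
```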

This music piece has 4 different parts (basic, drop, break, after-drop), which loop 3 times.

All parts include different rhythm variations, which are randomly chosen.

The generative harmony is present in the basic and after-drop parts. A sequencing sketch follows the list below.

1/ Basic: contains bass, piano, percussion, and pad (generative harmony)

2/ Drop: the section where the snare ramps up until the break (a randomly chosen part with no generative harmony)

3/ Break: a musical rest after the snare build-up (the part with a randomly chosen effect)

4/ After-drop: contains only bass and percussion
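
Here is the sequencing sketch mentioned above: the four parts in order, looped 3 times, with each occurrence tagged by a randomly chosen rhythm variation (the data shapes and names are our assumptions, not the project's code):

```typescript
type Part = "basic" | "drop" | "break" | "afterDrop";

const partOrder: Part[] = ["basic", "drop", "break", "afterDrop"];
const LOOPS = 3;

// Build the full arrangement: the four parts in order, looped 3 times,
// each occurrence paired with a randomly chosen rhythm variation.
function buildArrangement(variationsPerPart: number): { part: Part; variation: number }[] {
  const arrangement: { part: Part; variation: number }[] = [];
  for (let loop = 0; loop < LOOPS; loop++) {
    for (const part of partOrder) {
      arrangement.push({
        part,
        variation: Math.floor(Math.random() * variationsPerPart),
      });
    }
  }
  return arrangement;
}
```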

So how do we express emotions in the different parts of the music?

We analyzed data from Twitter and used machine learning to extract four emotion scores, each ranging from 0 to 1: happiness, anger, sadness, and fear.

The key to the generative harmony is using the emotion percentages as probabilities that the next piano chord and melody will sound happy or sad.

For example, our data shows that the percentage of happiness was 38% and the percentage of sadness was 26%.

This means that, in each part, each time a chord is strummed, there is a 38% chance for this chord to sound happy and a 26% chance to sound sad.
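
A sketch of that per-chord draw, using the 38% and 26% figures from the thread; the anger and fear values here are illustrative fillers so the four scores sum to 1:

```typescript
// Emotion scores extracted from the Twitter data (happiness and sadness are
// the thread's example values; anger and fear are illustrative placeholders).
const emotionScores = { happiness: 0.38, sadness: 0.26, anger: 0.2, fear: 0.16 };

type Emotion = keyof typeof emotionScores;

// Each time a chord is strummed, sample its mood in proportion to the scores.
function sampleChordMood(): Emotion {
  let r = Math.random();
  for (const [emotion, p] of Object.entries(emotionScores) as [Emotion, number][]) {
    if (r < p) return emotion;
    r -= p;
  }
  return "happiness"; // fallback for floating-point rounding
}
```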

Which emotions are present in which parts?

Basic part – Happiness and sadness, or fear and sadness. Most of the time happiness and sadness are present, but one of the three basic parts may be randomly chosen to feature only fear and sadness.

After-drop – Mostly anger; sometimes happiness and sadness.

We chose to distort our bass sound according to the anger value, as distortion can turn a mild bass into an aggressive one.

Some mints may display happiness and sadness in the after-drop parts.
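
A sketch of how an anger value in [0, 1] could drive that bass distortion using the Web Audio API; the curve shape and drive scaling are our assumptions, not necessarily the project's actual effect chain:

```typescript
// Map the anger score (0..1) to a wave-shaping distortion curve:
// higher anger bends the transfer function harder, making the bass more aggressive.
function makeDistortionCurve(anger: number, samples = 1024): Float32Array {
  const curve = new Float32Array(samples);
  const k = anger * 100; // drive amount grows with anger
  for (let i = 0; i < samples; i++) {
    const x = (i * 2) / (samples - 1) - 1; // map sample index to [-1, 1]
    curve[i] = ((1 + k) * x) / (1 + k * Math.abs(x)); // soft-clip transfer function
  }
  return curve;
}

// Usage sketch: insert a WaveShaperNode into the bass signal chain.
function applyAngerDistortion(ctx: AudioContext, bassSource: AudioNode, anger: number): AudioNode {
  const shaper = ctx.createWaveShaper();
  shaper.curve = makeDistortionCurve(anger);
  shaper.oversample = "4x";
  bassSource.connect(shaper);
  return shaper; // connect this onward to the destination or the next effect
}
```

At anger = 0 the curve is the identity (clean bass); as anger approaches 1, the soft clipping saturates harder.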

That's a wrap! It was a long and complicated thread, but if you are interested in the methodology and the concept of generative music, come to the @bubits_ Twitter Space next Saturday at 6:00 p.m. CET to learn more and ask questions 💜

#3js
#generativeart
#MusicNFTs
#CodingIsArt

@bubits_ @Astalavista7327 I would love to hear your thoughts 💜
