It's easy to use deep learning to generate notes that sound like music, in the same way that it's easy to generate text that looks like natural language.

But it's nearly impossible to generate *good* music that way, much like you can't generate a good 2-page story or poem -- with two caveats:

1. Plagiarism. If you near-copy large chunks of a good piece, those chunks will be good.

2. Large-scale curation. If you generate thousands of samples and hand-pick the best, they may be good by happenstance (especially for music, where the search space is smaller).

However, algorithms (and ML in particular) absolutely do have a role to play in music creation. What's broken is the general approach of statistical mimicry, e.g. raw deep learning.

To generate good music programmatically, you need an algorithmic model of what makes music good.
If you understand what makes music good with sufficient clarity, you can express it in the form of rules, and seek to algorithmically maximize this greatness factor.
As usual with AI, this requires first understanding the subject matter yourself, instead of blindly throwing a large dataset at a large model -- an approach that can only ever achieve local interpolation.
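To make the "rules + maximization" idea concrete, here is a toy sketch -- my own illustration, not a method from the thread. A hand-written score function encodes a few simple music-theory heuristics (every rule and constant below is an assumption for demonstration), and random-mutation hill climbing maximizes that score over a short melody:

```python
import random

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def score(melody):
    """Toy 'greatness factor': hand-written rules, not a learned model."""
    s = 0.0
    for pitch in melody:
        if pitch % 12 in C_MAJOR:       # reward staying in key
            s += 1.0
    for a, b in zip(melody, melody[1:]):
        leap = abs(a - b)
        if leap in (1, 2):              # reward stepwise motion
            s += 0.5
        elif leap > 7:                  # penalize leaps beyond a fifth
            s -= 1.0
    if melody[-1] % 12 == 0:            # reward ending on the tonic (C)
        s += 2.0
    return s

def hill_climb(length=16, steps=2000, seed=0):
    """Maximize the rule-based score by accepting improving mutations."""
    rng = random.Random(seed)
    melody = [rng.randint(60, 72) for _ in range(length)]  # MIDI pitches C4-C5
    best = score(melody)
    for _ in range(steps):
        candidate = melody[:]
        candidate[rng.randrange(length)] = rng.randint(60, 72)
        s = score(candidate)
        if s > best:                    # keep the mutation only if it helps
            melody, best = candidate, s
    return melody, best

melody, s = hill_climb()
print(melody, s)
```

The interesting part is that all the musical knowledge lives in `score`; the search procedure is generic. A richer rule set (voice leading, rhythm, phrase structure) would slot into the same loop unchanged.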

Find the model, don't just fit a curve.

• • •

Thread by François Chollet (@fchollet)
