It's easy to use deep learning to generate notes that sound like music, in the same way that it's easy to generate text that looks like natural language.
But it's nearly impossible to generate *good* music that way, much like you can't generate a good two-page story or poem.
With two caveats:
1. Plagiarism. If you near-copy large chunks of a good piece, these chunks will be good.
2. Large-scale curation. If you generate thousands of samples and hand-pick the best, they may be good by happenstance (especially for music, where the space is smaller).
However, algorithms (and ML in particular) absolutely do have a role to play in music creation. What's broken is the general approach of statistical mimicry, e.g. raw deep learning.
To generate good music programmatically, you need an algorithmic model of what makes music good.
If you understand what makes music good with sufficient clarity, you can express that understanding as rules, and seek to algorithmically maximize this greatness factor.
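As a toy sketch of what "express it as rules and maximize the greatness factor" could look like, here is a minimal rule-based scorer plus a curation-style search loop. Every rule here is a placeholder heuristic I made up for illustration (stepwise motion, staying in scale, ending on the tonic), not an actual model of musical quality:

```python
import random

# A melody is a list of MIDI pitches. Toy "greatness" rules below are
# placeholder heuristics, not real music theory.
C_MAJOR = {60, 62, 64, 65, 67, 69, 71, 72}  # one octave of C major

def score(melody):
    """Sum of hand-written rules: the 'greatness factor' to maximize."""
    s = 0
    for a, b in zip(melody, melody[1:]):
        interval = abs(a - b)
        if interval in (1, 2):   # reward stepwise motion
            s += 2
        elif interval > 7:       # penalize leaps larger than a fifth
            s -= 3
    if melody[-1] % 12 == 0:     # reward ending on the tonic (any C)
        s += 5
    s += sum(1 for n in melody if n in C_MAJOR)  # reward in-scale notes
    return s

def random_melody(length=8):
    return [random.randint(55, 76) for _ in range(length)]

def search(n_candidates=5000, length=8):
    # Generate many candidates and keep the rule-maximizing one --
    # "large-scale curation" made explicit as an optimization loop.
    return max((random_melody(length) for _ in range(n_candidates)), key=score)

best = search()
```

A real system would replace random search with something smarter (beam search, genetic algorithms, constraint solving) and, more importantly, replace the toy rules with an actual theory of what makes music good — which is the hard part the thread is pointing at.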
As usual with AI, this requires first understanding the subject matter by yourself, instead of blindly throwing a large dataset at a large model -- an approach which could only ever achieve local interpolation.
Find the model, don't just fit a curve.
Three things we've released recently that I'm extremely excited about:
1. TensorFlow Cloud: add one line to your notebook or project to start training your model in the cloud in a distributed way. keras.io/guides/trainin…
2. Keras Preprocessing Layers: build end-to-end models that take as input raw strings or raw structured data samples. Handles string splitting, feature value indexing & encoding, image data augmentation, etc.
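To give a feel for the string splitting and value indexing these layers handle, here is a rough pure-Python analogue of a text-vectorization layer. This is illustrative only — the class and method names are made up, and the real Keras layers do this work inside the model graph so that raw strings can be fed directly to the model:

```python
# Toy analogue of a text-vectorization preprocessing layer:
# split raw strings into tokens and map each token to an integer id.

class ToyTextVectorizer:
    OOV = 1  # id reserved for out-of-vocabulary tokens; 0 is padding

    def __init__(self):
        self.vocab = {}

    def adapt(self, texts):
        # Build the vocabulary from a corpus, analogous to calling
        # adapt() on a Keras preprocessing layer before training.
        for text in texts:
            for token in text.lower().split():
                if token not in self.vocab:
                    self.vocab[token] = len(self.vocab) + 2

    def __call__(self, text, length=6):
        # Map tokens to ids, then pad/truncate to a fixed length.
        ids = [self.vocab.get(t, self.OOV) for t in text.lower().split()]
        return (ids + [0] * length)[:length]

vec = ToyTextVectorizer()
vec.adapt(["the cat sat", "the dog ran"])
encoded = vec("the cat ran fast")  # "fast" is unseen, so it maps to OOV
```

The point of shipping this as model layers rather than a separate pipeline is that the preprocessing travels with the saved model, so serving code can pass in raw strings directly.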
Facebook says fanning the flames of hate gets you more engagement, and it's ok to do it because it happened before, in the 1930s, with nothing bad coming from it
To quote @Grady_Booch: Facebook is a profoundly unethical company, and it starts at the top.
Fully aware of its own immense power of influence, FB deliberately decides to use it in service of far-right radicalization, in order to create "engagement".
Honestly the take "the fact that it happened in the 1930s shows that it's part of human nature and therefore it's fine to encourage it" blows my mind.
Of course it's part of human nature. This realization is at the core of what "never again" means.
This is a strange take -- in virtually every country the center-left has been pro-lockdown and the far-right has been anti-lockdown (the center-right is usually pro-lockdown as well, though not as strongly as the center-left).
If it were stochastic there would be many exceptions.
In general, it's helpful to look at the rest of the world to understand the US, since it highlights what's unique about the US and what's just a manifestation of broader trends and general equilibria.
I think the dynamic at play here is:
"trust in expert + value human life -> pro-lockdown"
"anti-intellectualism and anti-expertise + value 'individual freedom' over human life -> anti-lockdown"
Saying that bias in AI applications is "just because of the datasets" is like saying the 2008 crisis was "just because of subprime mortgages".
Technically, it's true. But it's singling out the last link in the causality chain while ignoring the entire system around it.
Scenario: you've shipped an automated image editing feature, and your users are reporting that it treats faces very differently based on skin color. What went wrong? The dataset?
1. Why was the dataset biased in the first place? Bias in your product? At data collection/labeling?
2. If your dataset was biased, why did you end up using it as-is? What are your processes to screen for data bias and correct it? What biases are you watching out for?
I think it's clear that for many smaller companies that invested in deep learning, it turned out not to be essential and got cut post-Covid as part of downsizings. There are somewhat fewer people doing deep learning now than half a year ago, for the first time since at least 2010.
This is evident in particular in deep learning job postings, which collapsed in the past 6 months