I don't think history will remember someone like LeCun as a convnet pioneer. He will be remembered as the man who was in charge of AI at Facebook during the decade when Facebook's algorithms weakened democracy worldwide and triggered a global wave of far-right populism.
Which is far more momentous and historically significant.
I'm sure there are many people at FB who are well-intentioned. I'm sure LeCun has good intentions. But what you believe in is irrelevant if your actions don't match your values.
And if you're in a position of great power, there are no excuses anymore.
All tech that affects people's lives has an ethical dimension. If we build it, we have to be aware of how our decisions shape the direction of this impact.
Who we choose to work for matters. Who we choose to support matters. Especially for those of us in leadership positions.
Empathy is good. Calls for unity are good. But let's stop pretending that the "two sides" are symmetrical and morally equivalent. Stop saying, "oh, you think X is bad, but the other side just thinks the same about you".
Along every dimension, the picture is incredibly lopsided.
Maybe the most glaring asymmetry is what was at stake in this election. Biden won, and Trump supporters have nothing to fear from this result. In policy terms, they'll actually end up better off.
Had Trump won, many people would definitely not have been OK. Lives were at stake.
And no, CNN & NYT are not equivalent to Fox News.
Conspiracy theories against Biden's family are not equivalent to Trump's actual corruption.
Antiracism activists are not equivalent to far-right militias.
Life-saving medical facts are not equivalent to life-endangering lies.
Plenty of ballots are left to count (and remember, things will get bluer over time as mail-in ballots get tallied). But I had hoped we'd see a clear winner tonight, and the fact that it's looking unlikely is disappointing beyond words.
It's easy to use deep learning to generate notes that sound like music, in the same way that it's easy to generate text that looks like natural language.
But it's nearly impossible to generate *good* music that way, much like you can't generate a good 2-page story or poem.
With two caveats:
1. Plagiarism. If you near-copy large chunks of a good piece, these chunks will be good.
2. Large-scale curation. If you generate thousands of samples and hand-pick the best, they may be good by happenstance (especially for music, where the space is smaller).
However, algorithms (and ML in particular) absolutely do have a role to play in music creation. What's broken is the general approach of statistical mimicry, e.g. raw deep learning.
To generate good music programmatically, you need an algorithmic model of what makes music good.
Three things we've released recently that I'm extremely excited about:
1. TensorFlow Cloud: add one line to your notebook or project to start training your model in the cloud, in a distributed way. keras.io/guides/trainin…
2. Keras Preprocessing Layers: build end-to-end models that take as input raw strings or raw structured data samples. Handles string splitting, feature value indexing & encoding, image data augmentation, etc.
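To make the second point concrete, here is a minimal sketch of an end-to-end text model built with a preprocessing layer. It assumes a recent TensorFlow 2.x where `TextVectorization` is available directly under `tf.keras.layers` (in older releases it lived under `layers.experimental.preprocessing`); the tiny corpus and labels are made up for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# A tiny toy corpus of raw strings; the model consumes these directly,
# with no external tokenization step.
texts = np.array([["the cat sat"], ["the dog ran"], ["a cat ran"]])
labels = np.array([0, 1, 1])

# TextVectorization handles string splitting and token indexing inside the model.
vectorizer = layers.TextVectorization(output_mode="int", output_sequence_length=4)
vectorizer.adapt(texts)  # learn the vocabulary from the data itself

# End-to-end model: raw strings in, predictions out.
inputs = tf.keras.Input(shape=(1,), dtype=tf.string)
x = vectorizer(inputs)
x = layers.Embedding(
    input_dim=len(vectorizer.get_vocabulary()), output_dim=8
)(x)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(texts, labels, epochs=1, verbose=0)

# Because vectorization lives inside the graph, a saved copy of this model
# would accept raw strings at inference time too.
preds = model.predict(np.array([["the cat ran"]]), verbose=0)
print(preds.shape)
```

The payoff is that the preprocessing logic ships with the model: there is no separate tokenizer artifact to keep in sync between training and serving.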