- The Keras mixed precision API moves to stable
- Multi-worker mirrored distribution support moves to stable
- Experimental support for parameter server distribution in Keras
- Full NumPy API implementation (tf.experimental.numpy)
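As a quick illustration of the last item, here's a minimal sketch of the `tf.experimental.numpy` API (this assumes TensorFlow 2.4+; the specific array values are only for demonstration):

```python
# Minimal sketch of tf.experimental.numpy: NumPy-style operations that
# run on TensorFlow tensors, so they compose with GPUs and tf.function.
import tensorflow.experimental.numpy as tnp

x = tnp.arange(6)              # NumPy-style array creation
x = tnp.reshape(x, (2, 3))     # [[0, 1, 2], [3, 4, 5]]
row_sums = tnp.sum(x, axis=1)  # reductions mirror np.sum -> [3, 12]
```

The resulting arrays interoperate with regular NumPy (e.g. `np.asarray(row_sums)`), which is the main point of the API: drop-in NumPy semantics backed by TensorFlow execution.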
The territory, the map, and the cartographer each belong in entirely different categories. Yet we tend to confuse them with one another.
The thing, the abstraction that captures an actionable aspect of the thing, and the intelligence capable of generating new abstractions.
In particular, don't confuse a model with the ability to produce new models. Humans excel at operationalizing the abstractions they invent into highly effective machines, but so far our machines have close to zero ability to generate abstractions of their own.
A road takes you from A to B, but a road-building company takes you from anywhere to anywhere else.
An operationalized abstraction solves the task for which it was designed, but intelligence -- the ability to produce abstractions -- can solve arbitrary tasks.
I don't think history will remember someone like LeCun as a convnet pioneer. He will be remembered as the man who was in charge of AI at Facebook during the decade when Facebook's algorithms weakened democracy worldwide and triggered a global wave of far-right populism.
Empathy is good. Calls for unity are good. But let's stop pretending that the "two sides" are symmetrical and morally equivalent. Stop saying, "oh, you think X is bad, but the other side just thinks the same about you".
Along every dimension, the picture is incredibly lopsided.
Maybe the most glaring asymmetry is what was at stake in this election. Biden won, and Trump supporters have nothing to fear from this result. In policy terms, they'll actually end up better off.
Had Trump won, many people would definitely not have been OK. Lives were at stake.
And no, CNN & NYT are not equivalent to Fox News.
Conspiracy theories against Biden's family are not equivalent to Trump's actual corruption.
Antiracism activists are not equivalent to far-right militias.
Life-saving medical facts are not equivalent to life-endangering lies.
Plenty of ballots left to count (and remember, things will get bluer over time as mail-in ballots get tallied). But I hoped we'd see a clear winner tonight, and the fact that it's looking unlikely is disappointing beyond words.
It's easy to use deep learning to generate notes that sound like music, in the same way that it's easy to generate text that looks like natural language.
But it's nearly impossible to generate *good* music that way, much like you can't generate a good 2-page story or poem.
With two caveats:
1. Plagiarism. If you near-copy large chunks of a good piece, these chunks will be good.
2. Large-scale curation. If you generate thousands of samples and hand-pick the best, they may be good by happenstance (especially for music, where the space is smaller).
However, algorithms (and ML in particular) absolutely do have a role to play in music creation. What's broken is the general approach of statistical mimicry, e.g. raw deep learning.
To generate good music programmatically, you need an algorithmic model of what makes music good.