Plenty of ballots left to count (and remember, things will get bluer over time as mail-in ballots get tallied). But I hoped we'd see a clear winner tonight, and the fact that it's looking unlikely is disappointing beyond words
Time and time again, I keep believing in everything good about America, and America keeps breaking my heart.
Like in 2016, I'm taken by surprise. I am fundamentally unable to foresee these things, because foreseeing them would mean having such a low view of America.
It's like finding out your best friend is a murderer. You couldn't have foreseen it -- you wouldn't have been their friend if you could.
I just can't come to terms with the awfulness, so I keep getting my heart broken with every news cycle.
Another thing that keeps surprising me after 11 years on Twitter is the awfulness of the replies. I always want to believe that it's because aggressive, bitter assholes are far more likely to reply than regular folks -- since the alternative is that we're surrounded by assholes
It's easy to use deep learning to generate notes that sound like music, in the same way that it's easy to generate text that looks like natural language.
But it's nearly impossible to generate *good* music that way, much like you can't generate a good 2-page story or poem
With two caveats:
1. Plagiarism. If you near-copy large chunks of a good piece, these chunks will be good.
2. Large-scale curation. If you generate thousands of samples and hand-pick the best ones, they may be good by happenstance (especially for music, where the space of possible outputs is smaller).
However, algorithms (and ML in particular) absolutely do have a role to play in music creation. What's broken is the general approach of statistical mimicry, e.g. raw deep learning.
To generate good music programmatically, you need an algorithmic model of what makes music good.
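To make "statistical mimicry" concrete, here's a toy sketch (my illustration, not from the thread): a first-order Markov chain fitted to a short made-up melody. Every sampled transition is locally plausible, because each one was observed in the data, yet the output wanders with no large-scale structure -- exactly the failure mode described above.

```python
import random
from collections import defaultdict

# Toy "statistical mimicry": learn next-note frequencies from a melody,
# then sample from them. The melody and note names are placeholders.
melody = "C D E C C D E C E F G E F G".split()

# Count observed transitions: note -> list of notes that followed it.
transitions = defaultdict(list)
for cur, nxt in zip(melody, melody[1:]):
    transitions[cur].append(nxt)

random.seed(0)
note = melody[0]
generated = [note]
for _ in range(15):
    note = random.choice(transitions[note])  # sample an observed continuation
    generated.append(note)

print(" ".join(generated))  # note-to-note plausible, but it goes nowhere
```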
Three things we've released recently that I'm extremely excited about:
1. TensorFlow Cloud: add one line to your notebook or project to start training your model in the cloud, in a distributed way (see the first sketch after this list). keras.io/guides/trainin…
2. Keras Preprocessing Layers: build end-to-end models that take raw strings or raw structured data samples as input. They handle string splitting, feature-value indexing & encoding, image data augmentation, etc. (see the second sketch below).
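For item 1, a minimal sketch of the one-line workflow, assuming the tensorflow_cloud package is installed and a Google Cloud project is already configured; the model and data below are placeholders:

```python
import tensorflow as tf
import tensorflow_cloud as tfc

# The one added line: tfc.run() packages this script and submits it to
# Google Cloud for training, so everything below executes remotely.
tfc.run()

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
model.fit(tf.random.normal((32, 4)), tf.random.normal((32, 1)), epochs=1)
```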
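For item 2, a minimal sketch of a model that consumes raw strings end to end via the TextVectorization preprocessing layer (layer import paths vary across TF versions; the tiny dataset is a made-up placeholder):

```python
import tensorflow as tf

# Raw strings go straight into the model: TextVectorization does the
# splitting and vocabulary indexing as a layer, so serving code can
# pass plain text with no separate preprocessing pipeline.
texts = tf.constant(["good movie", "bad movie", "great film", "terrible film"])
labels = tf.constant([1.0, 0.0, 1.0, 0.0])

vectorize = tf.keras.layers.TextVectorization(
    max_tokens=1000, output_mode="int", output_sequence_length=8)
vectorize.adapt(texts)  # learn the vocabulary from the raw data

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(1000, 16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(texts, labels, epochs=2)
print(model.predict(tf.constant(["great movie"])))  # raw string in, score out
```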
Facebook says fanning the flames of hate gets you more engagement, and it's ok to do it because it happened before, in the 1930s, with nothing bad coming from it
To quote @Grady_Booch: Facebook is a profoundly unethical company, and it starts at the top.
Fully aware of its own immense power to influence, FB deliberately decides to use that power in service of far-right radicalization, in order to create "engagement".
Honestly, the take "the fact that it happened in the 1930s shows that it's part of human nature, and therefore it's fine to encourage it" blows my mind.
Of course it's part of human nature. This realization is at the core of what "never again" means.
This is a strange take -- in virtually every country, the center-left has been pro-lockdown and the far right has been anti-lockdown (the center-right is usually pro-lockdown as well, but not as strongly as the center-left).
If the alignment were stochastic, there would be many exceptions.
In general, it's helpful to look at the rest of the world to understand the US, since it highlights what's unique about the US and what's just a manifestation of broader trends and general equilibria.
I think the dynamic at play here is:
"trust in expert + value human life -> pro-lockdown"
"anti-intellectualism and anti-expertise + value 'individual freedom' over human life -> anti-lockdown"
Saying that bias in AI applications is "just because of the datasets" is like saying the 2008 crisis was "just because of subprime mortgages".
Technically, it's true. But it's singling out the last link in the causality chain while ignoring the entire system around it.
Scenario: you've shipped an automated image editing feature, and your users are reporting that it treats faces very differently based on skin color. What went wrong? The dataset?
1. Why was the dataset biased in the first place? Bias in your product? At data collection or labeling?
2. If your dataset was biased, why did you end up using it as-is? What are your processes to screen for data bias and correct it? What biases are you watching out for?
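One deliberately simplified shape such a screening process can take -- sketched with hypothetical names (skin_tone, y_true, y_pred) and random stand-in data: slice evaluation metrics by a sensitive attribute and treat large gaps between groups as a signal to investigate upstream.

```python
import numpy as np

# Hypothetical audit: compare the model's error rate across groups
# defined by a sensitive attribute before shipping the feature.
# skin_tone, y_true, and y_pred are random placeholders here.
rng = np.random.default_rng(0)
n = 1000
skin_tone = rng.choice(["light", "dark"], size=n)  # sensitive attribute
y_true = rng.integers(0, 2, size=n)                # ground-truth labels
y_pred = rng.integers(0, 2, size=n)                # model predictions

for group in np.unique(skin_tone):
    mask = skin_tone == group
    error_rate = (y_true[mask] != y_pred[mask]).mean()
    print(f"{group}: error rate = {error_rate:.3f} (n = {mask.sum()})")

# A large gap between groups is the red flag: the fix then happens
# upstream, in data collection, labeling, and product framing.
```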