Three things we've released recently that I'm extremely excited about:
1. TensorFlow Cloud: add one line to your notebook or project to start training your model in the cloud in a distributed way (sketch after this list). keras.io/guides/trainin…
2. Keras Preprocessing Layers: build end-to-end models that take as input raw strings or raw structured data samples. Handles string splitting, feature value indexing & encoding, image data augmentation, etc.
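A minimal sketch of the TensorFlow Cloud workflow, assuming the tensorflow_cloud package is installed and a GCP project is already configured (setup details are in the linked guide):

```python
# A minimal sketch, assuming the tensorflow_cloud package is installed and a
# GCP project/bucket is already configured (see the linked guide for setup).
import tensorflow_cloud as tfc

# The "one line": submits the rest of this notebook/script to run on GCP,
# with distribution handled automatically.
tfc.run()

# ...regular Keras model-building and model.fit() code goes below, unchanged...
```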
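And a sketch of the kind of end-to-end model the preprocessing layers enable, taking raw strings as input. This assumes a recent TF 2.x where TextVectorization lives at tf.keras.layers (earlier releases had it under layers.experimental.preprocessing); the data and hyperparameters are illustrative.

```python
import tensorflow as tf
from tensorflow import keras

# Raw strings as data, shape (num_samples, 1) -- no external tokenization step.
raw_texts = tf.constant([["the movie was great"], ["the movie was terrible"]])
labels = tf.constant([1, 0])

# The preprocessing layer learns its vocabulary directly from the raw strings.
vectorizer = keras.layers.TextVectorization(
    max_tokens=10_000, output_mode="int", output_sequence_length=16)
vectorizer.adapt(raw_texts)

# End-to-end model: accepts raw strings at training and inference time.
inputs = keras.Input(shape=(1,), dtype=tf.string)
x = vectorizer(inputs)
x = keras.layers.Embedding(input_dim=10_000, output_dim=16)(x)
x = keras.layers.GlobalAveragePooling1D()(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(raw_texts, labels, epochs=1)
```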
Facebook says fanning the flames of hate gets you more engagement, and it's ok to do it because it happened before, in the 1930s, with nothing bad coming from it
To quote @Grady_Booch: Facebook is a profoundly unethical company, and it starts at the top.
Fully aware of its own immense power of influence, FB deliberately chooses to use it in the service of far-right radicalization, in order to create "engagement".
Honestly the take "the fact that it happened in the 1930s shows that it's part of human nature and therefore it's fine to encourage it" blows my mind.
Of course it's part of human nature. This realization is at the core of what "never again" means.
This is a strange take -- in virtually every country, the center-left has been pro-lockdown and the far-right anti-lockdown (the center-right is usually pro-lockdown as well, but not as strongly as the center-left).
If it were stochastic, there would be many exceptions.
In general, it's helpful to look at the rest of the world to understand the US, since it highlights what's unique about the US and what's just a manifestation of broader trends and general equilibria.
I think the dynamic at play here is:
"trust in expert + value human life -> pro-lockdown"
"anti-intellectualism and anti-expertise + value 'individual freedom' over human life -> anti-lockdown"
Saying that bias in AI applications is "just because of the datasets" is like saying the 2008 crisis was "just because of subprime mortgages".
Technically, it's true. But it's singling out the last link in the causality chain while ignoring the entire system around it.
Scenario: you've shipped an automated image editing feature, and your users are reporting that it treats faces very differently based on skin color. What went wrong? The dataset?
1. Why was the dataset biased in the 1st place? Bias in your product? At data collection/labeling?
2. If your dataset was biased, why did you end up using it as-is? What are your processes to screen for data bias and correct it? What biases are you watching out for?
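As a hedged sketch of what a basic screening step could look like, assuming per-sample demographic annotations are available (the file and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical metadata file with one row per image in the dataset.
df = pd.read_csv("face_dataset_metadata.csv")

# 1. Representation: what fraction of samples falls in each skin-tone group?
print(df["skin_tone_group"].value_counts(normalize=True))

# 2. Outcome skew: per-group error rate of the feature on a held-out set.
print(df.groupby("skin_tone_group")["model_error"].mean())
```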
I think it's clear that, for many smaller companies that invested in deep learning, it turned out not to be essential and got cut post-Covid as part of downsizing. There are somewhat fewer people doing deep learning now than half a year ago, for the first time since at least 2010.
This is particularly evident in deep learning job postings, which have collapsed over the past 6 months.
A thing I hear sometimes: "what if my loss doesn't match the signature loss = fn(y_true, y_pred)?"
This is not a requirement in Keras -- it's only the default setting. If you have a loss with multiple inputs/targets, here are your options, in order of preference: (a thread)
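The options from that thread aren't reproduced here, but as a hedged illustration of the general idea, here is one commonly used pattern: an "endpoint" layer that calls add_loss(), so the loss can depend on any tensors in the model rather than on a fixed (y_true, y_pred) pair. The layer, input names, and data are illustrative.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class EndpointLayer(keras.layers.Layer):
    """Adds a loss computed from arbitrary tensors, not a fixed fn(y_true, y_pred)."""
    def call(self, targets, predictions):
        # Any expression is fine here; it just has to be a scalar tensor.
        self.add_loss(tf.reduce_mean(tf.square(targets - predictions)))
        return predictions

features = keras.Input(shape=(32,), name="features")
targets = keras.Input(shape=(1,), name="targets")  # targets fed as a model input
x = keras.layers.Dense(64, activation="relu")(features)
predictions = keras.layers.Dense(1)(x)
outputs = EndpointLayer()(targets, predictions)

model = keras.Model(inputs=[features, targets], outputs=outputs)
model.compile(optimizer="adam")  # no `loss` argument needed; it was added via add_loss()

# Both features and targets go in as inputs during training.
x_train = np.random.random((8, 32)).astype("float32")
y_train = np.random.random((8, 1)).astype("float32")
model.fit({"features": x_train, "targets": y_train}, epochs=1)
```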