But I'm absolutely *not* making that reduction.
1/N
2/N
(not talking about the more general inductive bias here).
1. the data, how it's collected and formatted
2. the features, how they are designed
3. the architecture of the model
4. the objective function
5. how it's deployed
When you use raw inputs with no hand-crafted features, as is common in modern DL systems, #2 becomes a considerably less important source of designer-caused bias.
E.g., modern image recognition systems work directly from pixels, and generative models produce raw pixels.
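A minimal sketch of the contrast (the feature choices and names here are mine, purely for illustration): a hand-crafted pipeline forces a designer's summary of the image on the model, while a raw-pixel input hands the model everything, shifting the locus of bias toward the data itself (source #1).

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((32, 32))  # a raw pixel grid in [0, 1]

def hand_crafted_features(img):
    # Designer-chosen summary statistics: each choice here is a
    # potential source of designer-caused bias (source #2 above).
    return np.array([
        img.mean(),                          # overall brightness
        img.std(),                           # contrast
        np.abs(np.diff(img, axis=0)).mean(), # vertical edge energy
    ])

# Feature-based input: 3 numbers chosen by a human.
x_features = hand_crafted_features(image)

# Raw-pixel input: the model sees everything; no human pre-selects
# what matters, so #2 largely drops out as a bias source.
x_raw = image.ravel()

print(x_features.shape, x_raw.shape)  # (3,) (1024,)
```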
4/N
5/N
And that bias could very well cause societal bias in the result too.
6/N
My guess is no.
I'm ready to change my opinion in the face of theoretical or empirical evidence.
7/N
But again, one may ask whether a *generic* objective (like mean squared error) has built-in societal bias.
My guess is "not much".
But again, I'm ready to change my opinion in the face of evidence to the contrary.
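For concreteness, here is the generic objective in question (a sketch, not a claim about any particular system): mean squared error contains no group-specific terms at all; it simply averages per-sample errors, which is why any disparity it produces traces back to which samples are present in the data rather than to the formula itself.

```python
import numpy as np

def mse(pred, target):
    # Mean squared error: symmetric in the samples, with no
    # built-in notion of groups or categories.
    return np.mean((pred - target) ** 2)

pred = np.array([0.0, 1.0, 2.0])
target = np.array([0.0, 1.0, 1.0])
print(mse(pred, target))  # (0 + 0 + 1) / 3 ≈ 0.333

# Note: because it averages, each group's influence on the loss is
# proportional to its frequency in the data -- the imbalance lives
# in the data (source #1), not in the objective (source #4).
```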
8/N
The point of my tweet was that this is the case for the face super-res work in question.
9/N
This was the point of my comment about the *relative* importance of paying attention to bias in development and deployment, versus research.
10/N
DL systems have this peculiarity (due to the non-convexity of the loss) that they will not develop features for categories of samples that are rare.
This does not happen with linear models such as logistic regression.
11/N
I've been using this method for decades.
FB uses this in its face recognition system.
12/N
You spend comparatively more time on rare diseases because you need to develop the "features" for them.
13/N
14/N
I find this very promising, though clearly not the be-all and end-all.
15/N
It's hard sometimes.
But that's why we can call ourselves scientists.
16/N
It only serves to inflame emotions, to hurt people who could be helpful, to mask the real issues, to delay the development of meaningful solutions, and to delay meaningful action.
17/N
N=17.