Given that @TheSun published a piece with statements and new samples from the authors, presented in exactly the way the ML community worries about and wants to prevent, I was wrong.
This sort of scientific communication is indefensible and appalling.
I'm speechless at the authors' quotes in the article, and at how they could think this is in any way helpful to the sensitive debates currently taking place.
It's like a nightmare come true, and the way the findings are presented in the article is harmful.
How are we supposed to support a descriptive, non-harmful discussion of existing biases, one that looks at historical and contemporary trends, when the authors then use the work prescriptively and out of context?
This is not acceptable communication of research results.
I'm sad that I was wrong, because I really feel that the discourse has lately gotten out of hand and has too often been vitriolic. That is sad for academic discourse.
However, given this article, I might have just been too naive to see what many others were already seeing.
Their conduct risks bringing the field of research, and what @ecolenormalesup and @cnrs stand for, into disrepute; at the very least it ought to be investigated.
We use Fisher information to connect recent approaches (BADGE, BAIT, and the LogDet objectives from SIMILAR and PRISM) with approximations of information-theoretic quantities (EIG, EPIG, ...) based on literature going back to Lindley (1956) and MacKay (1992).
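For readers who want the flavor of the connection: the sketch below is the standard asymptotic argument linking the expected information gain (EIG) to log-determinants of Fisher information under a Laplace/Gaussian posterior approximation. It is an illustration of the classical link, not necessarily the paper's exact derivation.

```latex
% EIG for acquiring input x (Lindley 1956): expected reduction in
% posterior entropy over the parameters \theta.
\mathrm{EIG}(x)
  = \mathbb{E}_{p(y \mid x)}\!\big[\,
      \mathrm{H}[p(\theta)] - \mathrm{H}[p(\theta \mid x, y)]
    \,\big].

% Under a Laplace approximation, the posterior is Gaussian with
% covariance given by the inverse observed information J, so
\mathrm{H}[p(\theta \mid x, y)]
  \approx \tfrac{1}{2} \log \det\!\big(2\pi e\, \mathrm{J}^{-1}\big)
  = \tfrac{d}{2} \log 2\pi e - \tfrac{1}{2} \log \det \mathrm{J}.

% Replacing observed information by its expectation (the Fisher
% information F) turns EIG maximization into a log-det objective:
\mathrm{EIG}(x) \;\approx\; \tfrac{1}{2} \log \det \mathrm{F} + \text{const}.
```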
2/11
First, we propose a useful notation for Fisher information and observed information. It is based on the insight that both are just the second derivative of the entropy/information content: the former taken in expectation over a random variable, the latter evaluated at a single observation.
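In standard notation (the paper's own notation may differ), that insight reads as follows: both quantities are the curvature of the information content $-\log p(y \mid \theta)$, once per observation and once in expectation.

```latex
% Observed information: curvature of the information content
% at a single observation y.
\mathrm{J}(\theta; y) = -\nabla^2_\theta \log p(y \mid \theta).

% Fisher information: the same curvature, taken in expectation
% over the observation distribution.
\mathrm{F}(\theta)
  = \mathbb{E}_{y \sim p(y \mid \theta)}\!\big[
      -\nabla^2_\theta \log p(y \mid \theta)
    \big]
  = \mathbb{E}_{y \sim p(y \mid \theta)}\big[\mathrm{J}(\theta; y)\big].
```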
We show that the approximation is actually an upper bound and characterize the approximation error.
We compare the approximation to the exact binomial coefficients and find that the approximation error is negligible, and that our simple estimate of the error is surprisingly accurate.
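To illustrate the kind of comparison this describes: the snippet below checks a Stirling-style approximation of the log binomial coefficient against the exact value. The specific approximation here (entropy term plus a second-order correction) is an illustrative stand-in; it is not taken from the paper, but it shows the same qualitative picture: the leading entropy term alone is an upper bound, and the error shrinks as n grows.

```python
import math


def log_binom_exact(n: int, k: int) -> float:
    """Exact log binomial coefficient via the log-gamma function."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)


def log_binom_stirling(n: int, k: int) -> float:
    """Stirling-style approximation (illustrative; not the paper's):
    log C(n, k) ~ n*H(k/n) - 0.5*log(2*pi*n*p*(1-p)).

    The first term n*H(p) alone is a known upper bound on log C(n, k).
    """
    p = k / n
    h = -p * math.log(p) - (1 - p) * math.log(1 - p)  # natural-log entropy
    return n * h - 0.5 * math.log(2 * math.pi * n * p * (1 - p))


for n, k in [(20, 5), (100, 30), (1000, 400)]:
    exact = log_binom_exact(n, k)
    approx = log_binom_stirling(n, k)
    print(f"n={n:5d} k={k:4d}  exact={exact:10.4f}  "
          f"approx={approx:10.4f}  |err|={abs(exact - approx):.5f}")
```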
Students living in university or college accommodation at Oxford don't have tenancy agreements but "license" agreements, which give colleges a lot of leeway and students practically no rights.
Colleges have been preventing students from going back to their rooms. In some cases, students now only get a two-hour slot to collect their belongings; in others, colleges have refused to return deposits in a timely manner. Looking at you, @KelloggOx.
Our paper "Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning" shows that well-known dropout regularization with standard cross-entropy loss and simple regularizers optimizes IB objectives in modern DNN architectures.
We use Information Diagrams to provide grounded intuitions and review existing variants of the IB objective.
From this, we rearrange the objective to focus on what we call the Decoder Uncertainty H[Y|Z] as the loss term and the Reverse Decoder Uncertainty H[Z|Y] as the regularization term.
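A sketch of how such a rearrangement can go, using only the standard identities I(X;Z) = H[Z] − H[Z|X] and I(Y;Z) = H[Z] − H[Z|Y] = H[Y] − H[Y|Z] (the paper's exact derivation, e.g. its treatment of stochastic encoders under dropout, may differ):

```latex
% Start from the IB Lagrangian.
\min_{Z} \; \mathrm{I}(X; Z) - \beta\, \mathrm{I}(Y; Z)

% Subtracting the two mutual-information identities gives
% I(X;Z) - I(Y;Z) = H[Z|Y] - H[Z|X], hence
\mathrm{I}(X; Z) - \beta\, \mathrm{I}(Y; Z)
  = \mathrm{H}[Z \mid Y] - \mathrm{H}[Z \mid X]
    + (\beta - 1)\,\mathrm{H}[Y \mid Z]
    - (\beta - 1)\,\mathrm{H}[Y].

% H[Y] is a data constant, so up to constants the objective is
(\beta - 1)\,\underbrace{\mathrm{H}[Y \mid Z]}_{\text{decoder uncertainty (loss)}}
  \;+\; \underbrace{\mathrm{H}[Z \mid Y]}_{\text{reverse decoder uncertainty (regularizer)}}
  \;-\; \mathrm{H}[Z \mid X],
% where H[Z|X] vanishes for a deterministic encoder.
```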
I'll wear my conspiracy hat for a second: given all we know, and all that stakeholders must have known earlier, is the current inaction on stricter containment gross negligence due to stupidity and recklessness by our leaders, or are some condoning the consequences?
More old people and people with preexisting conditions are gonna die: people who are more vulnerable, but who also put more pressure on our health care and welfare systems than young, healthy working-age folks. Is this what is happening at the moment?