“Huawei tested AI software that could recognize Uighur minorities and alert police. The face-scanning system could trigger a ‘Uighur alarm,’ sparking concerns that the software could help fuel China’s crackdown”
Entire sub-industries in AI/tech have emerged in authoritarian countries that are totally “unaware that automating racial recognition algorithms would even be controversial”
Rather than write a longer Twitter thread, here's an article in @Nature worth reading about “the ethical questions that haunt facial-recognition research,” which discusses some of these issues in depth with several front-line AI researchers holding different views. nature.com/articles/d4158…
At the end of the day, the @NeurIPSConf Statement of Impact won’t solve our problems, and may even be weird for many papers, but I do believe it is a step in the right direction.
I don't believe the goal of the NeurIPS Impact Statement is to censor papers, but rather to make researchers more aware of the ethical implications of their work. Hopefully, when PhD students graduate, this thought process will remain part of their work.
The coolest result in this paper is that the authors took a depth estimation model (single-image input) trained on natural images (arxiv.org/abs/1907.01341), and showed that the pre-trained model also works on certain types of line drawings, such as drawings of streets and indoor scenes.
This paper seems like an alternative, perhaps complementary take, on @scottmccloud's views about visual abstraction:
Agents with a self-attention “bottleneck” can not only solve these tasks from pixel inputs with only 4000 parameters, but are also better at generalization!
The agent receives the full input, but we force it to see its world through the lens of a self-attention bottleneck which picks only 10 patches from the input (middle)
The controller's decision is based only on these patches (right)
The agent has better generalization abilities, simply due to its ability to “not see things” that can confuse it.
Trained in the top-left setting only, it can also perform in unseen settings with higher walls, different floor textures, or when confronted with a distracting sign.
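To make the bottleneck idea concrete, here is a minimal sketch of the patch-selection mechanism described above, using NumPy with random projections standing in for the learned query/key weights (the function name, patch size, and all parameters here are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def select_patches(image, patch_size=7, stride=4, top_k=10, d_model=16, seed=0):
    """Hypothetical sketch of a self-attention bottleneck: score every
    patch against every other patch, then keep only the top_k patches
    by total attention received. The controller would see only these."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    patches, positions = [], []
    # extract overlapping patches and flatten each one into a vector
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(image[i:i + patch_size, j:j + patch_size].ravel())
            positions.append((i, j))
    X = np.stack(patches)  # (n_patches, patch_dim)
    # random projections stand in for learned query/key weight matrices
    Wq = rng.normal(size=(X.shape[1], d_model))
    Wk = rng.normal(size=(X.shape[1], d_model))
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d_model)
    # row-wise softmax (numerically stabilized)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    # importance of a patch = total attention it receives from all patches
    importance = attn.sum(axis=0)
    top = np.argsort(importance)[::-1][:top_k]
    return [positions[t] for t in top]

# the downstream controller acts only on the coordinates of these patches
coords = select_patches(np.random.rand(64, 64, 3))
print(len(coords))  # 10
```

Because everything outside the selected patches is discarded, distractors like a new floor texture simply never reach the controller, which is the intuition behind the generalization result.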
SketchTransfer: A Challenging New Task for Exploring Detail-Invariance and the Abstractions Learned by Deep Networks
If we train a neural net to classify CIFAR10 photos but also give it unlabelled QuickDraw doodles, how well can it classify these doodles? arxiv.org/abs/1912.11570
Recent paper by Alex Lamb, @sherjilozair, Vikas Verma + me is motivated by the abstractions learned by humans and machines.
Alex trained SOTA domain transfer methods on labelled CIFAR10 data + unlabelled QuickDraw doodles & reported his findings:
@MILAMontreal @GoogleAI @sherjilozair Alex found that SOTA transfer methods (labelled CIFAR10 + unlabelled doodles) achieve ~60% accuracy on doodles. Supervised learning on doodles gets ~90%, leaving a ~30% gap for improvement.
Surprisingly, training a model only on CIFAR10 still does quite well on ships, planes & trucks!
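The SketchTransfer evaluation protocol above (fit on labelled source data, measure accuracy on a shifted target domain with labels used only for evaluation) can be sketched with synthetic stand-in data and a nearest-class-mean classifier; the random Gaussian "photos" and "doodles" here are illustrative placeholders, not CIFAR10 or QuickDraw:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 10, 32
class_centers = rng.normal(size=(n_classes, dim))

def sample(n, shift=0.0):
    """Draw labelled points; `shift` mimics the photo->doodle domain gap."""
    y = rng.integers(n_classes, size=n)
    x = class_centers[y] + rng.normal(size=(n, dim)) + shift
    return x, y

x_photo, y_photo = sample(2000)              # labelled source domain
x_doodle, y_doodle = sample(500, shift=0.5)  # target domain; labels held out
                                             # for evaluation only

# fit a nearest-class-mean classifier on the source domain alone
means = np.stack([x_photo[y_photo == k].mean(0) for k in range(n_classes)])
pred = ((x_doodle[:, None, :] - means[None]) ** 2).sum(-1).argmin(1)
acc = (pred == y_doodle).mean()
print(f"doodle accuracy: {acc:.2f}")
```

The interesting quantity in the paper is exactly this cross-domain accuracy, and how far domain-transfer methods can close the gap to a model trained directly on doodle labels.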
“Fudan University, a prestigious Chinese university known for its liberal atmosphere, recently deleted ‘freedom of thought’ from its charter and added paragraphs pledging loyalty to the Chinese Communist Party, further eroding academic freedom in China.”
Students and academics at Fudan University appear to have protested this change. But the video of the group chanting lines from the old charter, shared on social media (domestic WeChat), has since been removed:
“The changes provoked a substantial reaction on Weibo. A professor at Fudan U’s foreign languages school said on Weibo that the amendment is against the university’s regulations, as no discussions occurred at staff meetings. That post was later deleted.”
@pizzahut @discoverhk It has also been reported that dishwashing liquid and bags of marbles have been used by protesters before to slow down the riot police 💡