The typeface is often chosen to mimic the fonts used in the MTR train system and in manga.
The typeface used by a city’s public transport system becomes representative of the city. Here is an experiment by an artist who swapped the fonts between the Hong Kong MTR map and the Tokyo Metro map:
Great debate with @chamath on CNBC about the deeper structural issues behind $GME and $AMC
Love the comments on YouTube:
“lol this CNBC dude is so concerned about my $200 invested…Fuck man…I never realized how much some people cared about me losing it.”
Apparently, CNBC is trying very hard to remove this full interview and copies of it from YouTube. I wonder why... drive.google.com/file/d/16IV7TI…
Someone called in for a few favors from their broker buddies...
“Huawei tested AI software that could recognize Uighur minorities and alert police. The face-scanning system could trigger a ‘Uighur alarm,’ sparking concerns that the software could help fuel China’s crackdown”
The coolest result in this paper is that they took a single-image depth estimation model trained on natural images (arxiv.org/abs/1907.01341) and showed that the pre-trained model also works on certain types of line drawings, such as drawings of streets and indoor scenes.
This paper seems like an alternative, perhaps complementary take, on @scottmccloud's views about visual abstraction:
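For anyone who wants to poke at this themselves, here is a minimal sketch of that kind of test, assuming the torch.hub entry points published for the MiDaS model (arxiv.org/abs/1907.01341); “street_sketch.png” is a hypothetical line-drawing file, not an asset from the paper:

```python
import cv2
import torch

# Pre-trained single-image depth model (MiDaS, arxiv.org/abs/1907.01341) and
# its matching input transform, loaded via torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").default_transform

# Feed it a line drawing instead of a natural photo ("street_sketch.png" is a
# hypothetical file); the surprising part is that the predicted (inverse)
# depth map still looks sensible for street/indoor-style drawings.
img = cv2.cvtColor(cv2.imread("street_sketch.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img))                       # (1, H', W')
depth = torch.nn.functional.interpolate(
    depth.unsqueeze(1), size=img.shape[:2],
    mode="bicubic", align_corners=False,
).squeeze().cpu().numpy()                               # back to input resolution
```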
Agents with a self-attention “bottleneck” can not only solve these tasks from pixel inputs with only 4000 parameters, but are also better at generalization!
The agent receives the full input, but we force it to see its world through the lens of a self-attention bottleneck which picks only 10 patches from the input (middle)
The controller's decision is based only on these patches (right)
The agent has better generalization abilities, simply due to its ability to “not see things” that can confuse it.
Trained in the top-left setting only, it can also perform in unseen settings with higher walls, different floor textures, or when confronted with a distracting sign.
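For intuition, here is a rough sketch of that bottleneck in plain numpy; the patch size, feature dimension, and random projections below are illustrative stand-ins, not the trained weights or exact hyper-parameters from the paper:

```python
import numpy as np

def attention_bottleneck(frame, patch=7, d=4, top_k=10, seed=0):
    """Pick the top_k most-voted-for patches of a frame via self-attention.

    frame: (H, W, 3) pixel observation. Patch size, feature dim and the
    random projections are illustrative stand-ins, not learned weights.
    """
    rng = np.random.default_rng(seed)
    H, W, _ = frame.shape
    rows, cols = H // patch, W // patch

    # Slice the frame into non-overlapping patches and flatten each one.
    patches = frame[: rows * patch, : cols * patch].reshape(
        rows, patch, cols, patch, 3).transpose(0, 2, 1, 3, 4)
    flat = patches.reshape(rows * cols, -1) / 255.0      # (N, patch*patch*3)

    # Tiny key/query projections -> attention matrix over patches.
    Wk = rng.normal(size=(flat.shape[1], d))
    Wq = rng.normal(size=(flat.shape[1], d))
    scores = (flat @ Wq) @ (flat @ Wk).T / np.sqrt(d)    # (N, N)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)                    # row-wise softmax

    # Each patch "votes" with the attention it receives; keep the winners.
    votes = A.sum(axis=0)
    keep = np.argsort(-votes)[:top_k]

    # The controller only ever sees the (row, col) positions of these patches,
    # so everything else in the frame is literally invisible to it.
    centres = np.stack([keep // cols, keep % cols], axis=1).astype(np.float32)
    return centres                                       # (top_k, 2)
```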
SketchTransfer: A Challenging New Task for Exploring Detail-Invariance and the Abstractions Learned by Deep Networks
If we train a neural net to classify CIFAR10 photos but also give it unlabelled QuickDraw doodles, how well can it classify these doodles? arxiv.org/abs/1912.11570
A recent paper by Alex Lamb, @sherjilozair, Vikas Verma + me is motivated by the abstractions learned by humans and machines.
Alex trained SOTA domain transfer methods on labelled CIFAR10 data + unlabelled QuickDraw doodles & reported his findings:
@MILAMontreal @GoogleAI @sherjilozair Alex found that SOTA transfer methods (labelled CIFAR10 + unlabelled doodles) achieve ~60% accuracy on doodles. Supervised learning on doodles gets ~90%, leaving a ~30% gap for improvement.
Surprisingly, training a model only on CIFAR10 still does quite well on ships, planes & trucks!
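To make that last point concrete, here is a minimal sketch of the CIFAR10-only baseline, with hypothetical tensors (`x_doodle_test`, `y_doodle_test`) standing in for the actual SketchTransfer loaders and a stand-in backbone rather than anything from the paper:

```python
import torch
import torchvision

CLASSES = ["plane", "car", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

def doodle_accuracy(model, x_doodle_test, y_doodle_test):
    """Per-class accuracy on QuickDraw doodles relabelled to the 10 CIFAR classes."""
    model.eval()
    with torch.no_grad():
        pred = model(x_doodle_test).argmax(dim=1)
    for c, name in enumerate(CLASSES):
        mask = y_doodle_test == c
        acc = (pred[mask] == c).float().mean().item()
        print(f"{name:>6s}: {acc:.1%}")   # ships, planes & trucks transfer best

# Stand-in backbone; train it on labelled CIFAR10 only (e.g. with
# torch.nn.CrossEntropyLoss()), then evaluate on the doodle test split:
model = torchvision.models.resnet18(num_classes=10)
# doodle_accuracy(model, x_doodle_test, y_doodle_test)
```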
“Fudan University, a prestigious Chinese university known for its liberal atmosphere, recently deleted ‘freedom of thought’ from its charter and added paragraphs pledging loyalty to the Chinese Communist Party, further eroding academic freedom in China.”
Students and academics at Fudan University appear to have protested this change. But the video of the group chanting lines from the old charter, shared on social media (domestic WeChat), has since been removed:
“The changes provoked a substantial reaction on Weibo. A professor at Fudan U’s foreign languages school said on Weibo that the amendment is against the university’s regulations, as no discussions occurred at staff meetings. That post was later deleted.”