In his @NVIDIAGTC keynote, Jensen Huang demonstrates @NVIDIAAI's leading position in powering the AI ecosystem across R&D, enterprise, and edge computing, with a zillion new announcements. Here are a few notable ones.
Graph Neural Network acceleration with CUDA-X.
NeMo Megatron allows training GPT-3-scale Large Language Models on distributed hardware.
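To give a flavour of what "distributed hardware" means here: Megatron-style training shards individual weight matrices across GPUs (tensor parallelism). The sketch below is a hypothetical single-process illustration of that idea, not NeMo Megatron's actual API; the layer sizes and world_size are made up for the example.

```python
import torch
import torch.nn as nn

# Illustration of Megatron-style tensor parallelism on one process: a linear
# layer's weight matrix is split column-wise into shards, each shard computes
# a partial output, and the shards are concatenated. In a real setup the
# shards live on different GPUs and the concatenation is an all-gather.

class ColumnParallelLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, world_size: int):
        super().__init__()
        assert out_features % world_size == 0
        shard = out_features // world_size
        # One weight shard per (simulated) device.
        self.shards = nn.ModuleList(
            nn.Linear(in_features, shard, bias=False) for _ in range(world_size)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each shard produces a slice of the output dimension.
        return torch.cat([lin(x) for lin in self.shards], dim=-1)

x = torch.randn(2, 1024)
layer = ColumnParallelLinear(1024, 4096, world_size=4)
print(layer(x).shape)  # torch.Size([2, 4096])
```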
Distributed inference of very large models with a new version of Triton.
Here's a schematic overview of all the @NVIDIAGTC announcements.
At #EMNLP2021 Evelina Fedorenko makes a strong case to defuse criticism that neural language models cannot "think". Neither can the human language modules in the brain, she argues, based on human brain studies. #EMNLP2021livetweet
In contrast, due to its predictive-coding nature, language is inherently very well-suited to communication. #EMNLP2021livetweet
As far as human brain studies suggest, language is *not suitable for complex thought*, Fedorenko concludes in her #EMNLP2021 keynote as she outlines her future research. #EMNLP2021livetweet
Catch up on recent AI research and code highlights - join @ZetaVector this Friday 1 Oct at 15:00 CET for the monthly "Navigating Current Trends and Topics in AI" webinar.
Expect to learn how Pupil Shapes Reveal GAN-generated Faces, and about Makeup against Face Recognition, Multimodal Prompt Engineering, CNNs vs Transformers vs MLPs, the Primer Evolved Transformer, FLAN, whether MS MARCO has reached end of life for neural retrieval, and much more...
Check out some of the trending topics on AI / ML Twitter right now:
In typical space-cowboy style, @ylecun, armed with no slides but only a whiteboard on Zoom, explains how all the various self-supervised models can be unified under an Energy-Based view. #NeurIPS #SSL workshop
In fact, @ylecun argues that the probabilistic view of loss functions for self-supervised training is harmful, as it concentrates all probability mass on the data manifold, obscuring our navigation in the remaining space. #NeurIPS #SSL workshop
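To make the contrast concrete, here is a minimal, hypothetical sketch (my illustration, not from the talk): an energy function scores the compatibility of (x, y) pairs directly, and training only pushes energies up or down locally, with no partition function or normalization over all possible outputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoders; in the energy-based view, any networks mapping x and y into a
# shared embedding space will do. Sizes here are arbitrary.
enc_x = nn.Linear(32, 16)
enc_y = nn.Linear(32, 16)

def energy(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Low energy = compatible pair; here, squared distance between embeddings.
    return ((enc_x(x) - enc_y(y)) ** 2).sum(dim=-1)

x = torch.randn(8, 32)
y_pos = x + 0.1 * torch.randn(8, 32)   # compatible / augmented views
y_neg = torch.randn(8, 32)             # arbitrary other points

# Contrastive-style objective: lower energy on compatible pairs, raise it on
# incompatible ones up to a margin -- no probability mass is ever normalized.
margin = 1.0
loss = energy(x, y_pos).mean() + F.relu(margin - energy(x, y_neg)).mean()
loss.backward()
```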
En passant, @ylecun points out the trick behind why BYOL by Grill et al. from @DeepMind does not collapse despite the lack of negative examples: a magic batch normalization.
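For context, here is a minimal BYOL-style sketch (a simplified illustration, not the @DeepMind implementation; layer sizes and the momentum value are made up): an online encoder plus predictor chases a slowly-moving EMA target network across two views, with no negatives, and with BatchNorm in the MLP heads playing the role the tweet alludes to.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(dim: int, hidden: int = 256, out: int = 64) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(dim, hidden),
        nn.BatchNorm1d(hidden),   # the "magic" normalization
        nn.ReLU(),
        nn.Linear(hidden, out),
    )

encoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), mlp(128))
predictor = mlp(64, out=64)
target = copy.deepcopy(encoder)          # EMA copy, never trained by gradients
for p in target.parameters():
    p.requires_grad_(False)

def byol_loss(v1: torch.Tensor, v2: torch.Tensor) -> torch.Tensor:
    # Online prediction of view 1 should match the target projection of view 2.
    p = F.normalize(predictor(encoder(v1)), dim=-1)
    with torch.no_grad():
        z = F.normalize(target(v2), dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

v1, v2 = torch.randn(16, 32), torch.randn(16, 32)  # stand-ins for two augmentations
loss = byol_loss(v1, v2) + byol_loss(v2, v1)
loss.backward()

# EMA update of the target network (momentum 0.99 here, a typical value).
with torch.no_grad():
    for p_t, p_o in zip(target.parameters(), encoder.parameters()):
        p_t.mul_(0.99).add_(0.01 * p_o)
```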