A smarter way to discover and organize knowledge in AI and beyond. R&D in Neural Search. Papers and Trends in AI. Enjoy Discovery!
Apr 3 • 7 tweets • 4 min read
Google just published a paper on Gecko, their new text embedding model that punches way above its weight: its 768-dim vectors are competitive with models that have 7x more parameters and 5x larger embeddings.
Here is a thread with our key takeaways: 🧵🦎
Gecko relies on knowledge distillation from LLMs in the form of synthetic queries, similar to previous work such as InPars and Promptagator.
They propose a two-step approach, where (1) a query is generated given a task description and a passage, and (2) an LLM reranks the top-N retrieved passages, using the highest-scoring one as the positive and the lowest as the negative.
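A minimal sketch of that two-step recipe in Python (our illustration, not Gecko's actual pipeline; `call_llm`, `retrieve_top_n`, and the prompts are hypothetical stand-ins):

```python
from typing import List, Tuple

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call."""
    raise NotImplementedError("plug in your LLM of choice")

def retrieve_top_n(query: str, corpus: List[str], n: int = 20) -> List[str]:
    """Hypothetical stand-in for retrieval with a pre-trained embedder."""
    raise NotImplementedError("plug in an off-the-shelf retriever")

def generate_task_and_query(passage: str) -> Tuple[str, str]:
    # Step 1: the LLM reads a sampled passage and writes a task
    # description plus a query the passage should be relevant to.
    out = call_llm(
        "Given the passage below, write a retrieval task description "
        f"on the first line and a matching query on the second.\n{passage}"
    )
    task, query = out.split("\n", 1)
    return task, query

def mine_positive_and_negative(query: str, corpus: List[str]) -> Tuple[str, str]:
    # Step 2: retrieve top-N candidates, have the LLM score each one,
    # then keep the highest-scoring passage as the positive and the
    # lowest-scoring one as the hard negative for contrastive training.
    candidates = retrieve_top_n(query, corpus)
    scores = [
        float(call_llm(f"Score 0-10: how well does this passage answer '{query}'?\n{c}"))
        for c in candidates
    ]
    ranked = sorted(zip(scores, candidates), key=lambda sc: sc[0])
    return ranked[-1][1], ranked[0][1]  # (positive, hard negative)
```

Notably, the mined positive need not be the passage the query was generated from; letting the LLM relabel positives is a big part of why the recipe works.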
Nov 2, 2022 • 13 tweets • 11 min read
🎉Trends in AI — November 2022 is out on our blog🎉
Big investments, more diffusion model applications, FLAN-T5 from @GoogleAI, Neural Audio Compression from @MetaAI, Single Life RL, and much more by @SergiCastellaSa 👇
Want to dive into #NeurIPS2021 but don't know where to start?
Here're some ideas! A thread🧵👇
1. "A 3D Generative Model for Structure-Based Drug Design" is one of the multiple papers at NeurIPS about drug discovery using neural networks.
This model generates molecules that bind to a specific protein binding site.
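For a flavor of what that looks like, here's a heavily simplified, hypothetical sketch (not the paper's actual architecture): atoms are proposed one at a time, each conditioned on the protein pocket and the partial molecule built so far.

```python
from typing import List, Optional, Tuple

Atom = Tuple[str, float, float, float]  # (element, x, y, z)

def propose_next_atom(pocket: List[Atom], partial: List[Atom]) -> Optional[Atom]:
    """Hypothetical stand-in for a learned model that proposes an element
    type and a 3D position conditioned on the binding pocket and the
    partially built molecule (None = stop)."""
    raise NotImplementedError("plug in a trained generative model")

def generate_molecule(pocket: List[Atom], max_atoms: int = 30) -> List[Atom]:
    # Grow the molecule atom by atom inside the pocket until the model
    # signals completion or a size limit is reached.
    molecule: List[Atom] = []
    for _ in range(max_atoms):
        atom = propose_next_atom(pocket, molecule)
        if atom is None:
            break
        molecule.append(atom)
    return molecule
```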
In his @NVIDIAGTC keynote, Jensen Huang demonstrates @NVIDIAAI's leading position in powering the AI ecosystem across R&D, enterprise, and edge computing, with a zillion new announcements. Here are a few notable ones.
Graph Neural Network acceleration with CUDA-X.
Nov 8, 2021 • 5 tweets • 3 min read
At #EMNLP2021, Evelina Fedorenko makes a strong case to defuse the criticism that neural language models cannot "think": neither can the human brain's language module, she argues, based on human brain studies. #EMNLP2021livetweet
In contrast, due to its predictive-coding nature, language is inherently very well suited to communication. #EMNLP2021livetweet
Sep 27, 2021 • 4 tweets • 2 min read
Catch up on recent AI research and code highlights - join @ZetaVector this Friday 1 Oct at 15:00 CET for the monthly "Navigating Current Trends and Topics in AI" webinar.
zoom.us/webinar/regist…
Expect to learn how Pupil Shapes Reveal GAN-generated Faces, Makeup against Face Recognition, Multimodal Prompt Engineering, CNNs vs Transformers vs MLPs, the Primer Evolved Transformer, FLAN, whether MS MARCO has reached end of life for neural retrieval, and much more...
Dec 12, 2020 • 5 tweets • 4 min read
In typical space-cowboy style, @ylecun, with no slides but only a whiteboard on Zoom, explains how all the various self-supervised models can be unified under an Energy-Based view. #NeurIPS #SSL workshop
In fact, @ylecun sketches how the probabilistic view of loss functions for self-supervised training is harmful, as it concentrates all probability mass on the data manifold, obscuring our navigation in the remaining space. #NeurIPS #SSL workshop
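As a rough formalization of the point (our notation, not from the talk): the probabilistic view forces energies through a normalized density, so pulling likelihood up on the data manifold pushes it down everywhere else, while the energy-based view only constrains relative energies, e.g. with a margin-based contrastive loss.

```latex
% Probabilistic view: the loss acts on a normalized density, so all
% probability mass concentrates on the data manifold.
p(y \mid x) = \frac{e^{-E(x,\,y)}}{\int e^{-E(x,\,y')}\,dy'}

% Energy-based view: only relative energies matter. A margin loss pushes
% energy down on observed pairs (x, y) and up on contrastive samples
% (x, \hat{y}), with no normalization required:
\mathcal{L}(x, y, \hat{y}) = E(x, y) + \max\bigl(0,\; m - E(x, \hat{y})\bigr)
```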