Zeta Alpha
A smarter way to discover and organize knowledge in AI and beyond. R&D in Neural Search. Papers and Trends in AI. Enjoy Discovery!
Apr 3 7 tweets 4 min read
Google just published the paper about Gecko, their new text embedding model that punches way above its weight, with its 768-dim vectors being competitive with models that have 7x more parameters and 5x larger embeddings.
Here is a thread with our key takeaways: 🧵🦎 Gecko relies on knowledge distillation from LLMs in the form of synthetic queries, similar to previous work such as InPars and Promptagator.
They propose a two-step approach, where (1) a query is generated given a task description and a passage, and (2) an LLM reranks the top-N retrieved passages, using the highest-scoring one as the positive and the lowest as the negative.
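The two-step pipeline can be sketched as follows. This is a minimal toy sketch: the LLM calls and the retrieval step are mocked with simple word-overlap heuristics, and all function names are our own placeholders, not the paper's actual components.

```python
# Toy sketch of Gecko-style two-step LLM distillation for training pairs.
# Assumptions: generate_query and relevance_score stand in for real LLM
# calls, and "retrieval" is just taking the first top_n corpus entries.

def generate_query(task_description, passage):
    # Stand-in for LLM query generation: take the passage's first words.
    return " ".join(passage.split()[:3])

def relevance_score(query, passage):
    # Stand-in for LLM relevance scoring: count shared terms.
    return len(set(query.lower().split()) & set(passage.lower().split()))

def make_training_pair(seed_passage, corpus, task_description, top_n=3):
    # Step 1: generate a synthetic query from a task description + passage.
    query = generate_query(task_description, seed_passage)
    # Step 2: rank the top-N retrieved candidates by LLM relevance score;
    # the highest-scoring one becomes the positive (it may differ from the
    # seed passage) and the lowest-scoring one serves as a hard negative.
    candidates = corpus[:top_n]
    ranked = sorted(candidates, key=lambda p: relevance_score(query, p),
                    reverse=True)
    return query, ranked[0], ranked[-1]

corpus = [
    "Gecko distills knowledge from large language models",
    "Text embeddings map sentences to dense vectors",
    "Unrelated passage about cooking pasta",
]
q, pos, neg = make_training_pair(corpus[0], corpus, "question answering")
```

The resulting (query, positive, negative) triples are what the embedding model is then trained on.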
Nov 2, 2022 13 tweets 11 min read
🎉Trends in AI — November 2022 is out on our blog🎉

Big investments, more diffusion model applications, FLAN-T5 from @GoogleAI, Neural Audio Compression from @MetaAI, Single Life RL, and much more by @SergiCastellaSa 👇

A thread🧵

zeta-alpha.com/post/trends-in… 🗞NEWS

💸 AI content-generation startup Jasper.ai raises $125M and @StabilityAI raises $101M, both at >$1B valuations

💀 Argo (self-driving research by Ford & VW), once valued at $7Bn, is shutting down

🇨🇳🇺🇸 US tightens bans on chip tech exports to China
Sep 21, 2022 10 tweets 7 min read
To all of those who missed the Transformers at Work workshop last Friday, we have just published all the talks on our YouTube channel🎉

Here's the YouTube playlist with all of them + a thread🧵

youtube.com/playlist?list=… 1/9 Introduction by @jakubzavrel and @SergiCastellaSa from @ZetaVector

Company news, future, a little history on Transformers, and an overview of the workshop!

Dec 21, 2021 4 tweets 2 min read
🎙We're thrilled to share our new podcast on Neural Information Retrieval, co-hosted by @SergiCastellaSa and @andrewyates🎉

❓We're centering each episode around a recent IR paper. First up: "Shallow Pooling for Sparse Labels" by @NegarEmpr et al.

open.spotify.com/show/2X9ymNhv4… Available on other podcasting platforms under the name "Neural Information Retrieval Talks"

Apple Pods: podcasts.apple.com/es/podcast/neu…

Google Pods: podcasts.google.com/feed/aHR0cHM6L…

Overcast:
Dec 7, 2021 12 tweets 10 min read
Want to dive into #NeurIPS2021 but don't know where to start?

Here are some ideas! A thread🧵👇 1. "A 3D Generative Model for Structure-Based Drug Design" is one of several papers at NeurIPS on drug discovery using neural networks.

This model generates molecules that bind to a specific protein binding site.

By Shitong Luo et al.

papers.nips.cc/paper/2021/has…
Nov 9, 2021 5 tweets 2 min read
In his @NVIDIAGTC keynote, Jensen Huang demonstrates @NVIDIAAI's leading position in powering the AI ecosystem across R&D, enterprise, and edge computing, with a zillion new announcements. Here are a few notable ones. Graph Neural Network acceleration with CUDA-X.
Nov 8, 2021 5 tweets 3 min read
At #EMNLP2021 Evelina Fedorenko makes a strong case to defuse the criticism that neural language models cannot "think": neither can the language modules in the human brain, she argues, based on human brain studies. #EMNLP2021livetweet In contrast, due to its predictive-coding nature, language is inherently very well-suited to communication. #EMNLP2021livetweet
Sep 27, 2021 4 tweets 2 min read
Catch up on recent AI research and code highlights - join @ZetaVector this Friday 1 Oct at 15:00 CET for the monthly "Navigating Current Trends and Topics in AI" webinar.

zoom.us/webinar/regist… Expect to learn how Pupil Shapes Reveal GAN-generated Faces, Makeup against Face Recognition, Multimodal Prompt Engineering, CNNs vs Transformers vs MLPs, the Primer evolved Transformer, FLAN, and whether MS MARCO has reached end of life for neural retrieval, and much more...
Dec 12, 2020 5 tweets 4 min read
In typical space-cowboy style, @ylecun, armed with no slides but only a whiteboard on Zoom, explains how all the various self-supervised models can be unified under an Energy-Based view. #NeurIPS #SSL workshop In fact, @ylecun argues that the probabilistic view of loss functions for self-supervised training is harmful, as it concentrates all probability mass on the data manifold, obscuring our navigation in the remaining space. #NeurIPS #SSL workshop
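As a toy illustration of the energy-based alternative (our own minimal sketch, not from the talk): instead of normalizing a probability distribution over all possible outputs, training simply pushes energy down on compatible pairs and up, to a margin, on incompatible ones.

```python
# Toy energy-based contrastive objective on 1-D "representations".
# Assumption: energy() here is just squared distance, chosen for clarity.

def energy(x, y):
    # Scalar compatibility measure: low energy = compatible pair.
    return (x - y) ** 2

def contrastive_loss(x, y_pos, y_neg, margin=1.0):
    # Lower the energy of the compatible pair (x, y_pos) and raise the
    # energy of the incompatible pair (x, y_neg) up to a margin. No
    # normalization over all y is required, so no probability mass has
    # to be concentrated anywhere.
    return energy(x, y_pos) + max(0.0, margin - energy(x, y_neg))

loss = contrastive_loss(0.0, 0.1, 2.0)
```

Since the negative pair's energy (4.0) already exceeds the margin, only the positive-pair term contributes to the loss here.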