Multimodal transformers achieve impressive results on tasks such as Visual Question Answering and Image Retrieval, but what contributes most to their success? dpmd.ai/3h8u23Z (1/)
This work explores how different architecture variations, pretraining datasets, and losses impact multimodal transformers’ performance on image retrieval: dpmd.ai/3eENAtF

(By Lisa Anne Hendricks, John Mellor, Rosalia Schneider, @jalayrac & @aidanematzadeh) (2/)
Multimodal transformers outperform simpler dual encoder architectures when the amount of data is held constant. Interestingly, larger datasets don’t always improve performance. (3/)
This work shows that language similarity between pretraining data and the downstream task, as well as dataset noise, are important factors. It also finds that the masked region modelling loss and a contrastive image-text matching loss do not contribute to models’ performance. (4/4)
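The contrastive image-text matching loss mentioned above is typically a symmetric InfoNCE objective computed over in-batch image/text pairs. The sketch below is illustrative only, not the paper's implementation; the function name, the temperature value, and the use of NumPy are all assumptions made for the example.

```python
import numpy as np

def contrastive_itm_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalise so the dot product is cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # (batch, batch) similarity matrix; matched pairs sit on the diagonal.
    logits = image_emb @ text_emb.T / temperature

    def cross_entropy(lg):
        # Target for row i is class i (its matched partner).
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = img + 0.01 * rng.normal(size=(4, 8))  # nearly-matched pairs
print(contrastive_itm_loss(img, txt))       # low loss for matched pairs
```

Pairing each image with the other texts in the batch as negatives is what makes this loss cheap for dual encoders; shuffling the text rows (breaking the pairing) should raise the loss sharply.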


More from @DeepMind

10 Dec 20
For #NeurIPS2020, we spoke with @wojczarnecki about Spinning Tops, advice he wishes he’d received as a student, and his goals for next year! #PeopleBehindThePapers
AI has been extremely successful in real-world games (Go, DOTA, StarCraft), with results coming from relatively simple multi-agent algorithms. In this paper, we hypothesise that they share a common geometry - Spinning Tops. Learn more: bit.ly/3qI8RrD #NeurIPS2020
I’ve always loved biology. During my master’s I decided to take a handful of neurophysiology courses, which I found fascinating. But eventually I realised that my true strengths were in the mathematical sciences. A career in ML and AI became a natural way to combine the two.
1 Dec 20
Yesterday we shared the news that #AlphaFold has been recognised as a solution to the ‘protein folding problem’ by #CASP14, the biennial Critical Assessment of Protein Structure Prediction. But what exactly is protein folding, and why is it important? A thread… (1/6)
Proteins are the building blocks of life - they underpin the biological processes in every living thing. If you could unravel a protein you would see that it’s like a string of beads made of a sequence of different chemicals known as amino acids. (2/6)
Interactions between these amino acids make the protein fold, as it finds its shape out of almost limitless possibilities. For decades, scientists have been trying to find a method to reliably determine a protein’s structure just from its sequence of amino acids. (3/6)
9 Jun 20
We have research scientist @seb_ruder up next with more #AtHomeWithAI recommendations!

He suggests the Deep Learning Book from @mitpress for a comprehensive introduction to the fundamentals of DL: bit.ly/351qMzb (1/7)
Overwhelmed with the number of available machine learning courses? @seb_ruder recommends taking a look through @venturidb’s curated - and ranked - list available on @freeCodeCamp.

bit.ly/3erZEN4 #AtHomeWithAI
Do you have a technical background? Are you looking for an introduction to natural language processing?

Sebastian recommends the @fastdotai course, “A Code-First Introduction to Natural Language Processing”.

bit.ly/3esFtP8 #AtHomeWithAI
8 Jun 20
We’re back with more #AtHomeWithAI researcher recommendations. Next up is research scientist @csilviavr with suggestions for resources to learn about causal inference! (1/5)
Her first suggestion is “The Book of Why” by @yudapearl & Dana Mackenzie.

According to Silvia, this is best for those looking for an introduction to the topic: bit.ly/30isGej #AtHomeWithAI
Need a more in-depth look at causal inference? Silvia suggests reading through “Causal Inference in Statistics: A Primer” by @yudapearl, @MadelynTheRose & @NP_Jewell.

bit.ly/36xdvza #AtHomeWithAI
27 May 20
Looking for a few more favourite resources from the team? Today’s #AtHomeWithAI picks are from research scientist @TaylanCemgilML! (1/6)
His first recommendation is for those looking to learn about the basics of probabilistic reasoning and modelling.

He suggests “Bayesian Reasoning and Machine Learning” [longer read] by @davidobarber. Read it for free here: bit.ly/3cG99rS #AtHomeWithAI
Are you a beginner looking for a lesson on the Monte Carlo method?

Taylan’s own, “A Tutorial Introduction to Monte Carlo methods, Markov Chain Monte Carlo and Particle Filtering” is available here: bit.ly/3cAQ8XG #AtHomeWithAI
21 May 20
We’re back with the latest set of #AtHomeWithAI researcher-recommended resources, this time from research scientist @AdamMarblestone! (1/7)
Adam suggests class materials from @Stanford if students are looking for ideas on computational models of the neocortex.

Follow along here: stanford.io/2XWiNlB #AtHomeWithAI
Need a resource that covers the essentials of linear algebra for AI? This online lecture by #gilbertstrang and @broadinstitute does just that.

Watch it here: bit.ly/3buHbi6 #AtHomeWithAI
