Adam Roberts
ai researcher @ Google DeepMind :: language (T5, PaLM) & ♫ (MusicVAE, NSynth, MusicLM, SingSong) :: t5x & seqio // recovering comp biologist
Jun 30, 2022
That was fast. News sites are already using DALL-E (mini) to generate fake headline images. This move seems questionable to say the least @NextShark

This is paving the way for a very dangerous practice by media organizations. DALL-E mini is pretty obvious, but once models at the level of #dalle2 and #imagen are widely available, we are in trouble.
Apr 29, 2020
UPDATE: We have spent the past month “fine-tuning” our approach for Closed Book QA (CBQA, no access to external knowledge) w/ T5 and now our appendix is overflowing with interesting results and new SoTAs on open-domain WebQuestions and TriviaQA!

arxiv.org/abs/2002.08910

(1/7)

First, we found that applying REALM's "salient span masking" pre-training technique (i.e., masking named entities and dates) on top of T5 works better than the random "span corruption" we’d tried previously.

Like, much better…

(2/7)
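To make the masking concrete, here is a minimal sketch of salient span masking, assuming the salient spans (named entities and dates) have already been identified by an upstream tagger, as in the REALM recipe. The `salient_span_mask` helper is a hypothetical illustration, not the paper's actual preprocessing code.

```python
# Minimal sketch of salient span masking for T5-style pre-training.
# Assumption: `spans` holds (start, end) character offsets of named
# entities and dates produced by an external tagger; this helper only
# shows how each span becomes a sentinel in the input and a target span.

def salient_span_mask(text, spans):
    """Replace each salient span with a T5 sentinel token.

    Example:
      text    = "Franklin D. Roosevelt was born in January 1882."
      spans   = [(0, 21), (34, 46)]
      inputs  = "<extra_id_0> was born in <extra_id_1>."
      targets = "<extra_id_0> Franklin D. Roosevelt <extra_id_1> January 1882 <extra_id_2>"
    """
    inputs, targets = [], []
    cursor = 0
    for i, (start, end) in enumerate(sorted(spans)):
        sentinel = f"<extra_id_{i}>"
        inputs.append(text[cursor:start] + sentinel)
        targets.append(f"{sentinel} {text[start:end]}")
        cursor = end
    inputs.append(text[cursor:])
    targets.append(f"<extra_id_{len(spans)}>")
    return "".join(inputs), " ".join(targets)
```

Unlike random span corruption, every masked span here is a fact-bearing entity or date, which is presumably why it transfers so well to closed-book QA.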
Dec 9, 2019
As promised, we have made the Text-to-Text Transfer Transformer (T5) models much easier to fine-tune for new tasks, and we just released a Colab notebook where you can try it yourself on a free TPU!
👇
tiny.cc/t5-colab

(1/3)

In the Colab, we fine-tune T5 to answer questions *without context*! This forces T5 to "look up knowledge" it obtained during pre-training, and it does surprisingly well.

For example, we can ask "How many legs does a ladybird have?" and it knows! Do you?

hint: 🐞

(2/3)
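The Colab itself uses the t5 library on a free TPU; as a rough equivalent, here is a sketch of the same recipe with Hugging Face Transformers: fine-tune a small T5 on question-only inputs, then ask it a question with no context. The tiny dataset, the "trivia question:" prefix, and the hyperparameters are illustrative assumptions, not the notebook's settings.

```python
# Sketch: fine-tune T5 for closed-book QA (no context in the input),
# then query it. Illustrative only; the real Colab fine-tunes on full
# QA datasets with the t5 library on TPU.
import torch
from torch.utils.data import DataLoader
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Question-only inputs force the model to recall facts it absorbed
# during pre-training instead of reading them from a passage.
examples = [
    ("trivia question: How many legs does a ladybird have?", "six"),
    ("trivia question: What is the capital of Australia?", "Canberra"),
]

def collate(batch):
    questions, answers = zip(*batch)
    inputs = tokenizer(list(questions), padding=True, return_tensors="pt")
    labels = tokenizer(list(answers), padding=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    return inputs, labels

loader = DataLoader(examples, batch_size=2, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for _ in range(3):  # a few toy epochs
    for inputs, labels in loader:
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Closed-book inference: just the question, no supporting passage.
model.eval()
ids = tokenizer("trivia question: How many legs does a ladybird have?",
                return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(ids, max_new_tokens=8)[0],
                       skip_special_tokens=True))
```

(A ladybird is an insect, so the answer the model should recall is "six".)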