That was fast. News sites are already using DALL-E (mini) to generate fake headline images. This move seems questionable, to say the least. @NextShark
This is paving the way for a very dangerous practice by media organizations. DALL-E mini's outputs are still obviously synthetic, but once models at the level of #dalle2 and #imagen are widely available, we are in trouble.
Kudos to @OpenAI for making this type of generation impossible with their model and API. That will slow things down a bit...
In the meantime, it is important to solidify journalistic standards on this and for distribution platforms (like Yahoo and Google News) to enforce them.
These models have some journalistic value for illustrating news articles, but not for producing photorealistic images that could be mistaken for photographic evidence.
• • •
UPDATE: We have spent the past month “fine-tuning” our approach for Closed-Book QA (CBQA, no access to external knowledge) w/ T5, and now our appendix is overflowing with interesting results and new SoTAs on open-domain WebQuestions and TriviaQA!
First, we found that applying REALM's "salient span masking" pre-training technique (i.e., masking named entities and dates) on top of T5 works better than the random "span corruption" we’d tried previously.
Like, much better…
(2/7)
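To make the technique concrete, here is a minimal sketch of salient span masking in Python. spaCy's NER is swapped in purely for illustration (REALM used its own entity and date taggers), and the input/target pair follows T5's sentinel-token format:

```python
# Sketch of salient span masking, with spaCy standing in as the
# entity/date tagger (an assumption for illustration only).
import random
import spacy

nlp = spacy.load("en_core_web_sm")

def salient_span_mask(text: str):
    """Mask one named entity or date, T5 sentinel-token style.

    Returns (inputs, targets), e.g.
      inputs:  "Franklin D. Roosevelt was born in <extra_id_0>."
      targets: "<extra_id_0> January 1882 <extra_id_1>"
    """
    doc = nlp(text)
    if not doc.ents:
        return None  # no salient span; fall back to random span corruption
    span = random.choice(doc.ents)  # spaCy entities include DATE spans
    inputs = text[:span.start_char] + "<extra_id_0>" + text[span.end_char:]
    targets = f"<extra_id_0> {span.text} <extra_id_1>"
    return inputs, targets

print(salient_span_mask("Franklin D. Roosevelt was born in January 1882."))
```

The only change from random span corruption is which spans get masked: always an entity or a date, so the model has to memorize facts rather than generic text.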
This was enough to push us to SoTA on open-domain WebQuestions and TriviaQA, outperforming two models introduced in the last few weeks and significantly improving our Natural Questions results as well.
As promised, we have made the Text-To-Text Transfer Transformer (T5) models much easier to fine-tune for new tasks, and we just released a Colab notebook where you can try it yourself on a free TPU!
👇 tiny.cc/t5-colab
(1/3)
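For a sense of what the notebook does under the hood, here is a rough sketch of the fine-tuning call using the t5 library's MtfModel interface. The GCS paths, task name, and hyperparameter values below are illustrative placeholders, not the notebook's actual settings:

```python
# Minimal sketch of fine-tuning with the t5 library; task registration
# and TPU setup are omitted (the Colab handles both).
import t5.models

TPU_ADDRESS = "grpc://10.0.0.1:8470"  # placeholder; the Colab resolves this

model = t5.models.MtfModel(
    model_dir="gs://your-bucket/models/cbqa",  # hypothetical GCS path
    tpu=TPU_ADDRESS,
    tpu_topology="2x2",
    model_parallelism=1,
    batch_size=128,
    sequence_length={"inputs": 128, "targets": 32},
    learning_rate_schedule=0.003,
    save_checkpoints_steps=5000,
    keep_checkpoint_max=1,
    iterations_per_loop=100,
)

# Fine-tune from a released T5 checkpoint on a registered QA task.
model.finetune(
    mixture_or_task_name="nq_context_free",  # hypothetical task name
    pretrained_model_dir="gs://t5-data/pretrained_models/small",
    finetune_steps=10000,
)
```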
In the Colab, we fine-tune T5 to answer questions *without context*! This forces T5 to "look up knowledge" it obtained during pre-training, and it does surprisingly well.
For example, we can ask "How many legs does a ladybird have?" and it knows! Do you?
hint:🐞
(2/3)
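If you'd rather poke at a closed-book model outside the Colab, a sketch like the following works with Hugging Face transformers and one of the released CBQA checkpoints (the checkpoint name here is an assumption; the notebook itself uses the t5 library):

```python
# Closed-book QA at inference time: the question is the entire input,
# so the answer has to come from the model's parameters.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed checkpoint name for a CBQA model fine-tuned w/ salient span
# masking on Natural Questions.
tok = AutoTokenizer.from_pretrained("google/t5-small-ssm-nq")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-small-ssm-nq")

question = "How many legs does a ladybird have?"
input_ids = tok(question, return_tensors="pt").input_ids
output = model.generate(input_ids)
# The model should answer from memory, e.g. "six".
print(tok.decode(output[0], skip_special_tokens=True))
```

Note there is no context passage anywhere in the input; whatever the model answers, it "looked up" in its own weights.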
Looking forward to seeing what fun, creative (I'm looking at you #MadeWithMagenta crowd), and/or important tasks you fine-tune T5 on.