Adam Roberts
Jun 30 · 4 tweets · 2 min read
That was fast. News sites are already using DALL-E (mini) to generate fake headline images. This move seems questionable, to say the least. @NextShark
This is paving the way for a very dangerous practice by media organizations. DALL-E mini's output is still obviously synthetic, but once models at the level of #dalle2 and #imagen are widely available, we are in trouble.
Kudos to @OpenAI for making this type of generation impossible with their model and API. That will slow things down a bit...

In the meantime, it is important to solidify journalistic standards around this and for distribution mechanisms (like Yahoo and Google News) to enforce them.
These models have some journalistic value for illustrating news articles, but not for producing photorealistic images that could be mistaken for photographic evidence.

More from @ada_rob

Apr 29, 2020
UPDATE: We have spent the past month “fine-tuning” our approach for Closed Book QA (CBQA, no access to external knowledge) w/ T5 and now our appendix is overflowing with interesting results and new SoTAs on open domain WebQuestions and TriviaQA!

arxiv.org/abs/2002.08910

(1/7)
First, we found that applying REALM's "salient span masking" pre-training technique (i.e., masking named entities and dates) on top of T5 works better than the random "span corruption" we’d tried previously.

Like, much better…

(2/7)
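
As a rough illustration of the difference between the two objectives (not the paper's actual preprocessing code), here is a minimal sketch in T5's sentinel format. The toy sentence, the hand-supplied entity spans, and the function names are made up for the example; a real setup would run a NER system to find the salient spans.

```python
# Toy sketch of the two masking objectives in T5's sentinel format.
# The sentence, entity spans, and function names are illustrative only;
# the real pipeline uses a NER tagger to find salient spans.
import random

def mask_span(tokens, start, end):
    """Replace tokens[start:end] with a sentinel; return an (input, target) pair."""
    inp = tokens[:start] + ["<extra_id_0>"] + tokens[end:]
    tgt = ["<extra_id_0>"] + tokens[start:end] + ["<extra_id_1>"]
    return " ".join(inp), " ".join(tgt)

def random_span_corruption(sentence, span_len=3):
    """Baseline objective: mask a span chosen uniformly at random."""
    tokens = sentence.split()
    start = random.randrange(len(tokens) - span_len)
    return mask_span(tokens, start, start + span_len)

def salient_span_masking(sentence, salient_spans):
    """REALM-style objective: mask a named entity or date (spans come from a NER system)."""
    tokens = sentence.split()
    start, end = random.choice(salient_spans)
    return mask_span(tokens, start, end)

sentence = "Franklin D. Roosevelt was born in January 1882 in Hyde Park New York"
salient = [(0, 3), (6, 8), (9, 13)]  # pretend a NER tagger found these token spans
print(random_span_corruption(sentence))
print(salient_span_masking(sentence, salient))
```

The intuition is that predicting a masked entity or date forces the model to recall world knowledge, whereas a random span is often recoverable from local syntax alone.
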
This was enough to push us to SoTA on open domain WebQuestions and TriviaQA, outperforming two models introduced in the last few weeks and significantly improving our Natural Questions results as well.

(3/7)
Dec 9, 2019
As promised, we have made the Text-To-Text Transfer Transformer (T5) models much easier to fine-tune for new tasks, and we just released a Colab notebook where you can try it yourself on a free TPU!
👇
tiny.cc/t5-colab

(1/3)
In the Colab, we fine-tune T5 to answer questions *without context*! This forces T5 to "look up knowledge" it obtained during pre-training, and it does surprisingly well.

For example, we can ask "How many legs does a ladybird have?" and it knows! Do you?

hint:🐞

(2/3)
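
For anyone who would rather poke at this outside the Colab, here is a minimal closed-book QA sketch using the Hugging Face transformers API rather than the Colab's TPU setup. The checkpoint name google/t5-small-ssm-nq is an assumption about how the released closed-book models are hosted, and the exact input format may differ depending on how a given checkpoint was fine-tuned.

```python
# Minimal closed-book QA sketch with Hugging Face transformers (not the
# Colab's TPU/Mesh-TensorFlow setup). The checkpoint name is an assumption
# about how the released closed-book models are hosted on the Hub.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/t5-small-ssm-nq"  # assumed closed-book (Natural Questions) checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# No context passage is provided: the model has to answer from whatever it
# memorized during pre-training and fine-tuning.
question = "How many legs does a ladybird have?"
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If that checkpoint turns out not to be available, fine-tuning a stock t5-small on question/answer pairs in the same text-to-text format is the equivalent route, which is what the Colab walks through.
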
Looking forward to seeing what fun, creative (I'm looking at you #MadeWithMagenta crowd), and/or important tasks you fine-tune T5 on.

Let us know!

(3/3)
