Interesting analysis by @mhmazur. Human work is driven by clear goals and is informed by task-specific context. A model that is optimized for generating plausible-sounding text, ignoring goals and context, virtually never produces any useful answer (unless by random chance).
Reminder: language serves a variety of purposes -- transmitting information, acting on the world to achieve specific goals, serving as a social lubricant, etc. Language cannot be modeled as a statistical distribution independent of these purposes.
This is akin to modeling the appearance of animals as a statistical distribution while ignoring the environment in which they live. You could use such a model to generate plausible-looking animals, but don't expect them to be able to survive in the wild (environmental fitness).
Animals evolved to fit their environment -- everything about them (us) is a product of environmental constraints. Likewise language is a construct evolved to fit a specific set of functions, and you cannot model it independently from this context.
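The point about plausible-sounding but purposeless text can be made concrete with a toy sketch (my illustration, not Chollet's): a bigram model fit to a tiny made-up corpus captures only co-occurrence statistics, so it generates locally fluent word sequences with no goal or context behind them.

```python
import random
from collections import defaultdict

# Toy corpus: the model only ever sees word co-occurrence statistics,
# nothing about goals, context, or what the words are *for*.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog . the dog chased the cat .").split()

# Fit a bigram table: possible next words given the current word,
# a purely distributional model of the text.
bigrams = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1].append(w2)

def generate(start="the", n=12, seed=0):
    """Sample a locally plausible word sequence -- no goal, no meaning."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n - 1):
        successors = bigrams.get(words[-1])
        if not successors:
            break
        words.append(rng.choice(successors))
    return " ".join(words)

print(generate())  # fluent-looking, purpose-free text
```

Every adjacent word pair in the output is statistically "plausible," yet the sequence as a whole accomplishes nothing -- which is the distinction the thread is drawing.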


More from @fchollet

14 Feb
There's a pretty strong relationship between one's self-image as a dispassionate rational thinker and the degree to which one is susceptible to falling for utterly irrational beliefs that are presented with some sort of scientific veneer.
The belief in recursive intelligence explosion is a good example: only someone who thinks of themselves as a very-high-IQ hyper-rationalist could be susceptible to buying into it.
If you want to fool a nerd, make long, complex, overly abstract arguments, free from the shackles of reality. Throw equations in there. Use physics analogies. Maybe a few Greek words.
14 Feb
An event that only happens once can have a probability (before it happens): this probability represents the uncertainty present in your model of why that event may happen. It's really a property of your model of reality, not a property of the event itself.
Of course, if the event has never happened before, that implies that your model of how it happens has never been validated in practice. You can model the uncertainty present in what you know you don't know, but you'll miss what you don't know you don't know.
But that doesn't mean your model is worthless. Surely we all have the experience of writing a large piece of code and having it work on the first try.
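The idea that a one-off event's probability is a property of your model, not of the event, has a standard Bayesian reading. A minimal sketch, with numbers I've invented for illustration: two people with different priors (different models of reality) assign different probabilities to the same never-yet-observed event.

```python
# A one-off event has no frequency; its "probability" lives in your
# model. Here the model is a Beta(alpha, beta) prior over an unknown
# success rate, and the event's probability is the prior mean.

def predictive_probability(alpha: float, beta: float) -> float:
    """Mean of a Beta(alpha, beta) prior = P(event) under that model."""
    return alpha / (alpha + beta)

# Two models of reality, one event that has never happened:
optimist = predictive_probability(alpha=8, beta=2)  # 0.8
skeptic = predictive_probability(alpha=1, beta=9)   # 0.1

# Observations update the model (conjugate Beta-Bernoulli update) --
# but only within the model. Misspecification of the model itself,
# the "unknown unknowns," is untouched by this update.
def update(alpha, beta, successes, failures):
    return alpha + successes, beta + failures

print(optimist, skeptic)
```

The update function captures the thread's caveat: data can shrink the uncertainty your model knows about, but not the uncertainty coming from the model being wrong.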
14 Feb
An under-appreciated feature of our present is how we record almost everything -- far more data than we can analyze. Future historians will be able to reconstruct and understand our time far better than we perceive and understand it right now.
Consider the events of January 6. Future historians will likely know who was there, who said what to whom, who did what, minute by minute. The amount of information you can recover from even a single video is enormous, and we have hundreds of them.
We're recording all of the dots -- our successors will have currently-unimaginable technology to connect them.
16 Jan
2020 was definitely a step backwards. If you're wondering how great civilizations can end up collapsing: they just have many 2020s in a row over several decades, with exponentially compounding cascade effects at each new development.
Factors of decline are multiplicative. E.g. cultural & educational deterioration leads to an incompetent government. An incompetent government makes a pandemic much worse. A bad pandemic accelerates institutional decline.
For the record, I don't think civilization will collapse in the near future (within the next 400 years). Not even as a consequence of catastrophic climate change over the next two centuries. But we will go through some pretty rough patches
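The "multiplicative factors" claim is easy to make concrete with invented numbers (mine, not Chollet's): repeated modest hits compound, so their total cost exceeds their sum.

```python
# Decline factors compound multiplicatively: several modest hits in a
# row shrink capacity far more than their sum suggests. The 10%-per-bad-
# year figure is invented purely for illustration.
def compound(capacity: float, yearly_factors: list) -> float:
    for f in yearly_factors:
        capacity *= f
    return capacity

# One bad year costs 10%; additively, five of them "should" cost 50%,
# but multiplicatively more is lost: 0.9**5 ~= 0.59.
bad_years = [0.9] * 5
remaining = compound(1.0, bad_years)
print(f"capacity left after five 2020s: {remaining:.3f}")
```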
6 Jan
DALL-E is the kind of application that you'd expect deep learning to be able to pull off in theory (people have been building early prototypes of text-guided image generation since 2015), and that becomes really magical when done at a crazy scale.
As usual with deep learning, scaling up is paying off.
In the future, we'll have applications that generate photorealistic movies from a script, or new video games from a description. It's only a matter of years at this point.
5 Jan
Here's an overview of key adoption metrics for deep learning frameworks over 2020: downloads, developer surveys, job posts, scientific publications, Colab usage, Kaggle notebooks usage, GitHub data.

TensorFlow/Keras = #1 deep learning solution.
Note that we benchmark adoption vs Facebook's PyTorch because it is the only TF alternative that registers on the scale. Another option would have been sklearn, which has massive adoption, but it isn't really a TF alternative. In the future, I hope we can add JAX.
TensorFlow has seen 115M downloads in 2020, which nearly doubles its lifetime downloads. Note that this does *not* include downloads for all TF-adjacent packages, like tf-nightly, the old tensorflow-gpu, etc.
