DALL-E is the kind of application that you'd expect deep learning to be able to pull off in theory (people have been building various early prototypes of text-guided image generation since 2015), but that becomes really magical when done at a crazy scale.
As usual with deep learning, scaling up is paying off.
In the future, we'll have applications that generate photorealistic movies from a script, or new video games from a description. It's only a matter of years at this point.
One application that I hope someone will build within 15 years: high-fidelity renderings of your dreams from EEG recordings.
Something that is super tractable to build right now if you invest in the right dataset, and that would have a lot of practical value: a rendering aid for animated movies.

• • •


More from @fchollet

5 Jan
Here's an overview of key adoption metrics for deep learning frameworks over 2020: downloads, developer surveys, job posts, scientific publications, Colab usage, Kaggle notebooks usage, GitHub data.

TensorFlow/Keras = #1 deep learning solution.
Note that we benchmark adoption vs Facebook's PyTorch because it is the only TF alternative that registers on the scale. Another option would have been sklearn, which has massive adoption, but it isn't really a TF alternative. In the future, I hope we can add JAX.
TensorFlow has seen 115M downloads in 2020, which nearly doubles its lifetime downloads. Note that this does *not* include downloads for all TF-adjacent packages, like tf-nightly, the old tensorflow-gpu, etc.
4 Jan
Here's a word-level text generation example with LSTM, starting from raw text files, in less than 50 lines of Keras & TensorFlow. colab.research.google.com/drive/1B9yLXcJ…
Of course, I should point out it's not 50 lines because Keras has some kind of built-in solution for text generation (it doesn't). It's 50 lines because Keras makes it easy to implement anything. It only uses generic features.
It uses a utility to read text files, a text vectorization layer (useful for any NLP), the LSTM layer and the functional API, the callbacks infrastructure, and the default training loop.
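
To make that recipe concrete, here's a minimal sketch of the pattern described above (not the code from the linked Colab). The "corpus/" directory, vocabulary cap, sequence length, and layer sizes are all illustrative assumptions:

# Minimal sketch, not the actual Colab: word-level next-word prediction
# with an LSTM in Keras. "corpus/", the vocabulary size, the sequence
# length, and the layer sizes are assumptions for illustration.
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 10000  # assumed vocabulary cap
seq_len = 50        # assumed context window, in words

# Utility to read raw text files (one string sample per file).
raw_ds = keras.utils.text_dataset_from_directory(
    "corpus/", labels=None, batch_size=64)

# TextVectorization maps raw strings to padded integer word sequences.
# (In this sketch, each file is truncated/padded to seq_len + 1 words.)
vectorizer = layers.TextVectorization(
    max_tokens=vocab_size, output_sequence_length=seq_len + 1)
vectorizer.adapt(raw_ds)

def to_inputs_and_targets(batch):
    # Shift by one word: predict word t+1 from words 1..t.
    tokens = vectorizer(batch)
    return tokens[:, :-1], tokens[:, 1:]

ds = raw_ds.map(to_inputs_and_targets)

# Functional API: embedding -> LSTM -> per-timestep softmax over the vocab.
inputs = keras.Input(shape=(seq_len,), dtype="int64")
x = layers.Embedding(vocab_size, 256)(inputs)
x = layers.LSTM(512, return_sequences=True)(x)
outputs = layers.Dense(vocab_size, activation="softmax")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Default training loop plus the callbacks infrastructure.
model.fit(ds, epochs=20,
          callbacks=[keras.callbacks.ModelCheckpoint("word_lm.keras")])

That's the whole point of the thread: nothing above is text-generation-specific. It's generic Keras pieces composed together; to generate text afterwards, you'd repeatedly predict a next word and append it to the context.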
1 Jan
The thing with pointing out "AI can't do X!" is that, if you keep refining X into something narrow and precise enough, you'll eventually cross a threshold where a realistic amount of engineering and training data make X possible.
AI can always do *specific* things -- as long as they're sufficiently specific and you're investing sufficient effort / data.

The problem with AI isn't that it can't do a specific X, it's that it has basically no intelligence at all at this time. No general cognitive abilities.
Intelligence simply means moving to a different part of the specificity / effort spectrum, one where you can master broad tasks with little effort.

You can always make up for a lack of intelligence by reducing task uncertainty (making X more specific) or investing more effort.
31 Dec 20
The Turing test was *never* a relevant goal for AI. We should remember that Turing never intended it as a literal test to be passed by a machine designed for that purpose, but as a philosophical device in an argument about the nature of thinking.

fastcompany.com/90590042/turin…
The major flaw of the Turing test is that it entirely abdicates the responsibility of defining intelligence and how to evaluate it (which is the entire value of a test). Instead, it delegates that task to human judges, who themselves don't have a proper definition of intelligence or a proper evaluation process.
As a result, the Turing test provides no incentive at all to develop greater intelligence; it solely encourages developers to figure out how to trick humans into believing a chatbot is intelligent.
26 Dec 20
I keep coming back to the importance of self-image in one's life trajectory. You become who you believe you are. You do what you believe you can do.
Belief is a greater determinant than ability or environment.
"Man often becomes what he believes himself to be. If I keep on saying to myself that I cannot do a certain thing, it is possible that I may end by really becoming incapable of doing it...."
16 Dec 20
Having to figure things out by yourself is extraordinarily inefficient (plus, risky). The primary benefit of civilization is curriculum optimization: getting you to the right destination while expending the least amount of experience. Civilization is integral to human cognition.
To caricature, you could say that the human brain is merely a short-lived mirror of what constitutes the main body of human cognition: the thought patterns, behaviors, and systems we've collectively evolved over thousands of years.
Your mind reflects the civilization that shaped it -- it wouldn't amount to much without it.