I don't consider myself a deep learning expert by any means. There are still a lot more things I don't know than things I know (it's not even close). I've only been working with neural networks since 2009, which is a lot less than many of you.
Besides, I'm not sure that "deep learning experts" exist. People with the highest h-index can't write a GPU kernel or design a DL ASIC. Nor could they win a Kaggle competition. Nor, for the most part, write reusable code (which is really the core of DL).
Not only that, but when I chat with experts, I'm often surprised by how few of them seem to have a clear mental model of what DL is and how it works. In fact, many big-name researchers often say things that are manifestly untrue and easy to disprove!
Consider that, not long ago, most AI experts knew for a fact that neural networks were a failed avenue. Consider that, in 2013, most of the top names in computer vision were saying that the nascent success of DL might be just a fluke. And remember the debates about local minima?
In general, I'm also not a fan of the idea of an "expert". It makes it sound like there's some threshold of knowledge beyond which you know it all, you've made it (perhaps the threshold is when you reach full professorship).

I don't think that's how it works.
If someone tells you they're a top expert, a pioneer, the main thing they're an expert at is playing status games. The same people will probably also try to demean those they feel are in competition with them, because that's how status games work.
As for me, I'm just someone who's been trying to learn as much as possible (not just about AI). That's how I'd define myself: someone who gets excited about stuff and learns about it. If there's an "expert threshold", I hope I never reach it.

• • •

More from @fchollet

26 Mar
When smart people are presented with something new, they tend to ask, "how does it work?": how is it structured, how was it made? But the more important & difficult question is *why does it work*: what is the functional kernel that makes it effective, what guided its evolution?
In the case of deep learning, "how does it work?" will make you explain backpropagation and matrix multiplication. But "why does it work?" leads you to the structure of perceptual space.
In the case of a piece of music, "how does it work?" will make you look for the key, the different voices, the rules. That's the easy part. "Why" leads you to ask what exactly about the piece makes you feel the way you feel. It will require you to understand your own mind.
20 Mar
Deep learning excels at unlocking the creation of impressive early demos of new applications using very few development resources.

The part where it struggles is reaching the level of consistent usefulness and reliability required by production usage.
Autonomous driving is the ultimate example. You could use deep learning to create an impressive self-driving car prototype in 2015 on a shoestring budget (Comma did exactly that, using Keras). Five years and billions of $ later, the best DL-centric driving systems are still L2+.
Every app demo based on GPT-3 follows this pattern. You can build the demo in a weekend, but if you invest $20M and 3 years fleshing out the app, it's unlikely it will still be using GPT-3 at all, and it may never meet customer requirements.
13 Mar
Quick tweetorial: using KerasTuner to find good model configs.

Define your model as usual -- but put your code in a function that takes an `hp` (hyperparameters) argument.

Then, instead of using values like "embedding_dim = 512", use ranges: `hp.Int(...)`
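A minimal sketch of what that looks like -- the layer choices, ranges, and the `keras_tuner` package name are assumptions for illustration, not from the thread:

```python
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    # Sample the embedding size from a range instead of hard-coding 512.
    embedding_dim = hp.Int("embedding_dim", min_value=64, max_value=512, step=64)
    model = keras.Sequential([
        layers.Embedding(input_dim=20000, output_dim=embedding_dim),
        layers.GlobalAveragePooling1D(),
        layers.Dense(hp.Int("hidden_units", min_value=32, max_value=256, step=32),
                     activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```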
Then, instantiate a tuner and pass it your model building function. It will need an `objective` to optimize -- this could be the name of any metric found in the model logs. For built-in Keras metrics, the tuner will automatically pick whether to maximize or minimize the metric.
`max_trials` is the maximum number of model configurations to try. The ominous-sounding `executions_per_trial` is the number of model training runs to average for each model config: a higher value reduces results variance.
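Putting those pieces together, a rough sketch of the tuner setup might look like this (the choice of RandomSearch, the directory names, and the dataset variables are assumptions):

```python
tuner = kt.RandomSearch(
    build_model,
    objective="val_accuracy",    # any metric that appears in the training logs
    max_trials=20,               # maximum number of model configs to try
    executions_per_trial=2,      # training runs averaged per config to reduce variance
    directory="tuning_results",
    project_name="text_classifier",
)

# x_train / y_train / x_val / y_val are placeholders for your own dataset.
tuner.search(x_train, y_train, epochs=5, validation_data=(x_val, y_val))
best_model = tuner.get_best_models(num_models=1)[0]
```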
7 Mar
Fun fact: if you wanted to keep an open-air swimming pool on the surface of Mars, you'd have to keep it heated at a temperature exactly between 0°C and 0.5°C (about 32°F). Because the atmospheric pressure on Mars is so low, water would boil if its temperature got any higher.
And any lower than that, it would freeze (which would be the default, given that the surrounding atmosphere would be at around -60°C / -80°F).
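A back-of-the-envelope check of that window using the Clausius-Clapeyron relation -- the constants and the assumed local surface pressure are approximations, not from the thread:

```python
import math

# Mars surface pressure varies roughly between 400 and 900 Pa; liquid water is
# only possible at all where it exceeds water's triple-point pressure (~612 Pa).
L = 2.5e6          # latent heat of vaporization of water near 0 C, J/kg
R_v = 461.5        # specific gas constant of water vapour, J/(kg*K)
T_tr = 273.16      # triple-point temperature, K
P_tr = 611.7       # triple-point pressure, Pa
P_mars = 630.0     # assumed local surface pressure, Pa

# ln(P/P_tr) = -(L/R_v) * (1/T_boil - 1/T_tr)  =>  solve for T_boil
inv_T = 1.0 / T_tr - (R_v / L) * math.log(P_mars / P_tr)
T_boil = 1.0 / inv_T
print(f"Boiling point at {P_mars:.0f} Pa: {T_boil - 273.15:.2f} C")
# ~0.4 C: the pool stays liquid only in the sliver between freezing and boiling.
```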
Now, fun medical puzzle: if you took off your spacesuit on the surface of Mars, what would immediately happen to you? Would you...
3 Mar
New code walkthrough on keras.io: speech recognition with Transformer. Very readable and concise demonstration of how to build and train a speech recognition model on the LJSpeech dataset.
keras.io/examples/audio…
This example was implemented by @NandanApoorv. Let's take a look at the model architecture.

It starts by defining two embedding layers: a positional embedding for text tokens, and an embedding for speech features that uses 1D convolutions with strides for downsampling.
Then it defines a Transformer encoder, which is your usual Transformer block, as well as a Transformer decoder, which is also your usual Transformer block, but with causal attention to prevent later timesteps from influencing the decoding of earlier timesteps.
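A rough Keras sketch of the two embedding layers described above -- class names, kernel sizes, and dimensions are illustrative; the actual keras.io example has its own definitions:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class TokenEmbedding(layers.Layer):
    """Token embedding plus a learned positional embedding, for the text targets."""
    def __init__(self, num_vocab=1000, maxlen=200, num_hid=64):
        super().__init__()
        self.token_emb = layers.Embedding(num_vocab, num_hid)
        self.pos_emb = layers.Embedding(maxlen, num_hid)

    def call(self, x):
        # x: (batch, seq_len) integer token ids
        positions = tf.range(start=0, limit=tf.shape(x)[-1], delta=1)
        return self.token_emb(x) + self.pos_emb(positions)

class SpeechFeatureEmbedding(layers.Layer):
    """Embeds spectrogram frames, downsampling in time with strided 1D convolutions."""
    def __init__(self, num_hid=64):
        super().__init__()
        self.conv1 = layers.Conv1D(num_hid, 11, strides=2, padding="same", activation="relu")
        self.conv2 = layers.Conv1D(num_hid, 11, strides=2, padding="same", activation="relu")
        self.conv3 = layers.Conv1D(num_hid, 11, strides=2, padding="same", activation="relu")

    def call(self, x):
        # x: (batch, time, features) speech features; each conv halves the time axis
        return self.conv3(self.conv2(self.conv1(x)))
```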
22 Feb
Seeing lots of takes about nuclear power and its opponents. Yes, nuclear power could be an important element of a climate solution. Yes, the world needs to build more nuclear power plants. But it's absurd to blame environmental activists for the fact that it hasn't happened yet.
The primary reason why countries with large CO2 emissions haven't gone nuclear is economic: the upfront cost of a nuclear plant is a large multiple of that of a coal plant. That's why coal is king in India, for instance. Nothing to do with activists.
Or consider China, the largest emitter of CO2 today. You think environmental activism is why China hasn't built more nuclear plants? Lol. Economically, coal has been "good enough" -- assuming we ignore its health costs and long-term environmental costs.
