Within 10-20 years, nearly every branch of science will be, for all intents and purposes, a branch of computer science.

Computational physics, comp chemistry, comp biology, comp medicine... Even comp archeology. Realistic simulations, big data analysis, and ML everywhere
This tweet is infuriating many, apparently. Imagine the controversy if, in 2000, someone predicted that by 2020 most companies would be tech companies! Still true though. Good thing they didn't have Twitter back then :)
Don't worry though, your domain expertise will remain very important. Just like how... uh... having a strong linguistics background is essential in natural language processing (formerly computational linguistics)...
Anyway, "most science will be CS", just like "most companies will be tech cos", is a prediction you should take seriously, but not literally. It means that CS proficiency will soon be indispensable to staying relevant as a scientist: most of what you do will require CS.
In the same way that "most cos will be tech cos" means that tech proficiency will be essential to staying in business: most of your operations will critically require tech. Walmart, AXA, FedEx, etc. are "tech companies".
It doesn't mean that chemistry will be literally classified as a subfield of CS, or that Walmart will be literally classified as a tech company. Obviously...
But it does mean that, if you were a business executive in 2000, you should hire people who understand tech (including at top levels), and if you're a scientist today, you should make sure that you develop your CS chops (including ML).

• • •

More from @fchollet

25 Apr
I don't consider myself a deep learning expert by any means. There are still a lot more things I don't know than things I know (it's not even close). I've only been working with neural networks since 2009, which is a lot less than many of you.
Besides, I'm not sure that "deep learning experts" exist. People with the highest h-index can't write a GPU kernel or design a DL ASIC. Nor could they win a Kaggle competition. Nor, for the most part, write reusable code (which is really the core of DL).
Not only that, but when I chat with experts, I'm often surprised by how few of them seem to have a clear mental model of what DL is and how it works. In fact, many big-name researchers often say things that are manifestly untrue and easy to disprove!
26 Mar
When smart people are presented with something new, they tend to ask, "how does it work?": how is it structured, how was it made? But the more important & difficult question is *why does it work*: what is the functional kernel that makes it effective, what guided its evolution?
In the case of deep learning, "how does it work?" will make you explain backpropagation and matrix multiplication. But "why does it work?" leads you to the structure of perceptual space.
In the case of a piece of music, "how does it work?" will make you look for the key, the different voices, the rules. That's the easy part. "Why" leads you to ask what exactly about the piece makes you feel the way you feel. It will require you to understand your own mind.
20 Mar
Deep learning excels at unlocking impressive early demos of new applications with very few development resources.

The part where it struggles is reaching the level of consistent usefulness and reliability required by production usage.
Autonomous driving is the ultimate example. You could use deep learning to create an impressive self-driving car prototype in 2015 on a shoestring budget (Comma did exactly that, using Keras). Five years and billions of $ later, the best DL-centric driving systems are still L2+.
Every app demo based on GPT-3 follows this pattern. You can build the demo in a weekend, but if you invest $20M and 3 years fleshing out the app, it's unlikely it will still be using GPT-3 at all, and it may never meet customer requirements.
13 Mar
Quick tweetorial: using KerasTuner to find good model configs.

Define your model as usual -- but put your code in a function that takes a `hp` (hyperparameters) argument.

Then, instead of using values like "embedding_dim = 512", use ranges: `hp.Int(...)`
Then, instantiate a tuner and pass it your model building function. It will need an `objective` to optimize -- this could be the name of any metric found in the model logs. For built-in Keras metrics, the tuner will automatically pick whether to maximize or minimize the metric.
`max_trials` is the maximum number of model configurations to try. The ominous-sounding `executions_per_trial` is the number of model training runs to average for each model config: a higher value reduces results variance.
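The mechanics described above -- sampling each hyperparameter from a range, averaging `executions_per_trial` runs per config, keeping the best of `max_trials` configs -- can be sketched in plain Python, with no KerasTuner dependency. `run_trial` here is a hypothetical toy objective standing in for a real model training run:

```python
import random

def run_trial(config, execution_seed):
    """Stand-in for one model training run; returns a validation score.
    (Hypothetical toy objective -- a real tuner would train a Keras model.)"""
    rng = random.Random(execution_seed)
    # Pretend the best embedding_dim is 256; add noise to mimic training variance.
    return -abs(config["embedding_dim"] - 256) + rng.uniform(-5.0, 5.0)

def random_search(max_trials=10, executions_per_trial=3, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(max_trials):
        # Sample from a range, like hp.Int("embedding_dim", 32, 512, step=32).
        config = {"embedding_dim": rng.randrange(32, 513, 32)}
        # Average several executions per config to reduce variance in the results,
        # which is what executions_per_trial controls.
        scores = [run_trial(config, rng.random()) for _ in range(executions_per_trial)]
        avg = sum(scores) / len(scores)
        if avg > best_score:
            best_config, best_score = config, avg
    return best_config, best_score

best_config, best_score = random_search()
print(best_config, round(best_score, 1))
```

A higher `executions_per_trial` makes the per-config score estimate more stable, at the cost of proportionally more training time.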
7 Mar
Fun fact: if you wanted to keep an open-air swimming pool on the surface of Mars, you'd have to keep it heated at a temperature exactly between 0°C and 0.5°C (about 32°F). Because the atmospheric pressure on Mars is so low, water would boil if its temperature got any higher.
And any lower than that, it would freeze (which would be the default, given that the surrounding atmosphere would be at around -60°C / -76°F).
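The numbers check out with a back-of-the-envelope Clausius-Clapeyron estimate. This sketch assumes a constant latent heat of vaporization (a rough approximation at these temperatures) and a typical Mars surface pressure of ~600 Pa; the estimate lands near 0°C, consistent with the claim:

```python
import math

# Rough Clausius-Clapeyron estimate of water's boiling point at Mars surface
# pressure. Assumes latent heat L is constant (a simplification that skews the
# result a few degrees low).
L = 40660.0      # J/mol, latent heat of vaporization near 100 C
R = 8.314        # J/(mol K), gas constant
T0 = 373.15      # K, boiling point of water at P0
P0 = 101325.0    # Pa, 1 atm
P_mars = 600.0   # Pa, typical Mars surface pressure (assumed value)

# 1/T = 1/T0 - (R/L) * ln(P/P0)
inv_T = 1.0 / T0 - (R / L) * math.log(P_mars / P0)
T_boil = 1.0 / inv_T
print(f"Estimated boiling point on Mars: {T_boil - 273.15:.1f} C")
```

The exact figure tracks water's triple point (611.7 Pa at 0.01°C): at pressures near that, the window between freezing and boiling collapses to a fraction of a degree.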
Now, fun medical puzzle: if you took off your spacesuit on the surface of Mars, what would immediately happen to you? Would you...
3 Mar
New code walkthrough on keras.io: speech recognition with Transformer. Very readable and concise demonstration of how to build and train a speech recognition model on the LJSpeech dataset.
This example was implemented by @NandanApoorv. Let's take a look at the model architecture.

It starts by defining two embedding layers: a positional embedding for text tokens, and an embedding for speech features, which uses 1D convolutions with strides for downsampling.
Then it defines a Transformer encoder, which is your usual Transformer block, as well as a Transformer decoder, which is also your usual Transformer block, but with causal attention to prevent later timesteps from influencing the decoding of earlier timesteps.
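The causal attention mentioned above boils down to a lower-triangular mask: each decoding step may attend only to itself and earlier steps. A minimal NumPy sketch (the keras.io example builds the equivalent mask inside its decoder; details differ):

```python
import numpy as np

def causal_attention_mask(seq_len):
    """Lower-triangular boolean mask: entry [i, j] is True iff position i
    is allowed to attend to position j, i.e. j <= i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

mask = causal_attention_mask(4)
print(mask.astype(int))
```

In the attention computation, positions where the mask is False get their scores set to a large negative value before the softmax, so future timesteps contribute zero weight.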
