Here's an overview of key adoption metrics for deep learning frameworks over 2020: downloads, developer surveys, job posts, scientific publications, Colab usage, Kaggle notebook usage, and GitHub data.

TensorFlow/Keras = #1 deep learning solution.
Note that we benchmark adoption vs Facebook's PyTorch because it is the only TF alternative that registers at this scale. Another option would have been sklearn, which has massive adoption, but it isn't really a TF alternative. In the future, I hope we can add JAX.
TensorFlow saw 115M downloads in 2020, nearly doubling its lifetime download total. Note that this does *not* include downloads of TF-adjacent packages, like tf-nightly, the old tensorflow-gpu, etc.
Also note that most of these downloads aren't from humans but are automated downloads from CI systems (none are from Google's own systems, though, as Google doesn't use PyPI).

In a way, this metric reflects usage in production.
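
If you want to pull download numbers like these yourself, the public pypistats.org API is one place to start. Here's a minimal sketch (my own illustration, not how the figures above were compiled); it assumes the `requests` package, and it only exposes recent windows such as the last month, so full-year totals would need the PyPI BigQuery dataset instead:

```python
# Minimal sketch: compare recent PyPI downloads via the public pypistats.org API.
# Counts include automated CI traffic, as noted above.
import requests

def recent_downloads(package: str) -> int:
    """Last-month download count for a PyPI package, from pypistats.org."""
    resp = requests.get(
        f"https://pypistats.org/api/packages/{package}/recent", timeout=10
    )
    resp.raise_for_status()
    return resp.json()["data"]["last_month"]

for pkg in ("tensorflow", "torch"):
    print(f"{pkg}: {recent_downloads(pkg):,} downloads last month")
```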
There were two worldwide developer surveys in 2020 that measured adoption of various frameworks: the one from StackOverflow, targeting all developers, and the one from Kaggle, targeting data scientists and ML practitioners.
Note that the StackOverflow survey listed both TF and Keras; Keras had very strong numbers, and I suspect many people checked Keras without checking TF. So if "TF/Keras" had been a single option, it would have shown significantly higher numbers here (probably around 15% overall usage).
Mentions in LinkedIn job posts are a metric I'm not quite sure is meaningful, unfortunately: they don't reflect the actual stack of the companies that are hiring, only the keywords tracked by recruiters.
We can track usage in the research community in two categories: ArXiv, which represents "pure deep learning" research, and Google Scholar, which represents all publications, including applications of deep learning to biology, medicine, etc.
Deep learning research is an important but small niche (~20k people out of several million deep learning users in total), and it is the only niche where PyTorch is neck-and-neck with TensorFlow.
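
The ArXiv side is easy to approximate yourself: the public arXiv API reports a total result count for any query. A rough sketch (the `all:` field searches record metadata such as titles and abstracts, not full text, so treat the counts as ballpark figures):

```python
# Rough sketch: count arXiv records mentioning each framework, via the public
# arXiv API. `all:` searches record metadata (title, abstract, etc.), not full
# text, so counts are approximate.
import re
import requests

def arxiv_hits(term: str) -> int:
    """Total arXiv results for a search term, read from the Atom response."""
    resp = requests.get(
        "http://export.arxiv.org/api/query",
        params={"search_query": f'all:"{term}"', "max_results": 0},
        timeout=30,
    )
    resp.raise_for_status()
    match = re.search(r"<opensearch:totalResults[^>]*>(\d+)<", resp.text)
    return int(match.group(1)) if match else 0

for fw in ("tensorflow", "pytorch"):
    print(f"{fw}: ~{arxiv_hits(fw):,} arXiv records")
```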
Finally, GitHub metrics. GitHub makes it possible to track new commits over the past year, but not new stars/forks/watchers, so I'm displaying lifetime totals for these rather than 2020 increases.
Note that these GitHub metrics cover only the TensorFlow repo, not the dozens of large TensorFlow-adjacent repos (like the Keras repo, etc.).
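
Those lifetime totals are straightforward to fetch from the GitHub REST API if you want to track them over time. A minimal sketch (unauthenticated requests are rate-limited; note that the API's `watchers_count` field mirrors stars, while `subscribers_count` is the actual watcher figure):

```python
# Minimal sketch: star/fork/watcher totals for a repo via the GitHub REST API.
import requests

def repo_stats(repo: str) -> dict:
    """Lifetime totals for a repo such as 'tensorflow/tensorflow'."""
    resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        # "watchers_count" mirrors stars; "subscribers_count" is real watchers.
        "watchers": data["subscribers_count"],
    }

print(repo_stats("tensorflow/tensorflow"))
```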
Overall: 2020 has been a difficult year, one in which many businesses cut their exploratory investments in deep learning because of Covid, causing a slump from March to November. On balance, though, TF/Keras still saw modest growth over the year.
Our current growth rate is solid, and our prospects for 2021 are looking bright! I'll post an update to these metrics in 2021. Here's to another year full of improvement, growth, and focusing on delighting our users :)


