François Chollet
Sep 19, 2022
Stable Diffusion is now available as a Keras implementation, thanks to @divamgupta!

Colab: colab.research.google.com/drive/1zVTa4mL…

Original repo: github.com/divamgupta/sta…

This port has several advantages: (thread)
1. It's extremely readable. Go check out the code yourself! It's only about 500 LoC. I recommend this fork I started, which makes the code more idiomatic and adds performance improvements (though the code quality of the original was excellent to start with):
github.com/fchollet/stabl…
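To give a sense of how little ceremony is involved, here is what a minimal text-to-image script looks like. Treat it as a sketch: the module path, class name, and keyword arguments are assumptions based on the repo's README and may differ slightly from the current code.

```python
# Minimal text-to-image sketch. The module path, class name, and argument
# names below are assumptions -- check the repo's README for the exact API.
from PIL import Image
from stable_diffusion_tf.stable_diffusion import StableDiffusion

generator = StableDiffusion(img_height=512, img_width=512)
images = generator.generate(
    "a photograph of an astronaut riding a horse",
    num_steps=50,                      # number of diffusion steps
    unconditional_guidance_scale=7.5,  # classifier-free guidance strength
    batch_size=1,
)
Image.fromarray(images[0]).save("astronaut.png")
```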
2. It's fast. How much faster than other implementations? That depends on your hardware (try switching on `jit_compile` to see if you can get a greater speedup that way). Benchmark it on your system and let me know!
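If you want to benchmark, XLA compilation is the first thing to try. A minimal sketch, assuming the constructor exposes a `jit_compile` flag; you can get the same effect by wrapping your own inference function in `tf.function(jit_compile=True)`:

```python
import time

from stable_diffusion_tf.stable_diffusion import StableDiffusion  # assumed module path

# jit_compile=True asks TensorFlow to compile the models with XLA.
# (The constructor flag is an assumption -- check the repo if it has moved.)
generator = StableDiffusion(img_height=512, img_width=512, jit_compile=True)

for label in ("first run (includes XLA compilation)", "second run (compiled)"):
    start = time.time()
    generator.generate("a cozy cabin in the woods", num_steps=50, batch_size=1)
    print(f"{label}: {time.time() - start:.1f}s")
```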
3. It works out of the box on M1 MacBook Pro GPUs. Just install the dependencies from `requirements_m1.txt` and get going.

(This required no extra work on the repo.)
4. It can do TPU inference out of the box: just get a TPU VM and add a TPU strategy scope to the code. This can yield a dramatic speedup (and cost reduction) when doing large-batch inference.
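Here is roughly what that looks like on a TPU VM. The strategy setup below is standard TensorFlow; the model class and `generate` signature are the same assumptions as in the earlier sketches.

```python
import tensorflow as tf

from stable_diffusion_tf.stable_diffusion import StableDiffusion  # assumed module path

# Standard TPU initialization on a TPU VM ("local" = the VM's own TPU).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Build the models inside the strategy scope so their variables are placed
# on the TPU, then run large-batch inference as usual.
with strategy.scope():
    generator = StableDiffusion(img_height=512, img_width=512)

images = generator.generate("an isometric voxel city at sunset", num_steps=50, batch_size=8)
```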
5. It can also do multi-GPU inference out of the box (same, with a MirroredStrategy scope). It just... works.
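Same idea with multiple local GPUs, using `tf.distribute.MirroredStrategy` (again, the model class and `generate` signature are assumptions):

```python
import tensorflow as tf

from stable_diffusion_tf.stable_diffusion import StableDiffusion  # assumed module path

# MirroredStrategy replicates variables across all visible local GPUs.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    generator = StableDiffusion(img_height=512, img_width=512)

# Pick a batch size that is a multiple of the replica count so the work
# divides evenly across GPUs.
images = generator.generate(
    "a watercolor map of an imaginary island",
    num_steps=50,
    batch_size=2 * strategy.num_replicas_in_sync,
)
```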
6. You can export the three underlying Keras models (text encoder, diffusion model, and decoder) to TFLite and TF.js.

This means you can build AI art apps that run entirely on-device (in the browser with local GPU acceleration, or on Android / iOS, also with local hardware acceleration). No server costs!
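For the TFLite path, each of the three Keras models can be converted separately with the standard converter; TF.js has an analogous workflow via the `tensorflowjs_converter` CLI. The attribute names on the generator below are assumptions -- check the source for the exact names.

```python
import tensorflow as tf

from stable_diffusion_tf.stable_diffusion import StableDiffusion  # assumed module path

generator = StableDiffusion(img_height=512, img_width=512)

# Convert the three sub-models (attribute names are assumptions) one by one
# with the standard TFLite converter.
for name, model in [
    ("text_encoder", generator.text_encoder),
    ("diffusion_model", generator.diffusion_model),
    ("decoder", generator.decoder),
]:
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training quantization
    with open(f"{name}.tflite", "wb") as f:
        f.write(converter.convert())
```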
Sound exciting? Go try it, fork it, hack it. And if you want to learn more about what it looks like to work with Keras and TensorFlow, go read the code: github.com/fchollet/stabl…
Huge thanks to @divamgupta for creating this port! This is top-quality work that will benefit everyone doing creative AI.

I'm always amazed by the velocity of the open-source community 👍

More from @fchollet

Oct 26
In the last Trump administration, legal, high-skilled immigration was cut by ~30% before Covid, then by 100% after Covid (which was definitely a choice: a number of countries kept issuing residency permits and visas). However, illegal immigration inflows did not go down (they've been stable since the mid-2000s).
If you're a scientist or engineer applying for a green card, you're probably keenly aware that your chances of eventually obtaining it are highly dependent on the election. What you may not know is that, if you're a naturalized citizen, your US passport is also at stake.
The last Trump administration launched a "denaturalization task force" aimed at taking away US citizenship from as many naturalized citizens as possible, with an eventual target of 7M (about one third of all naturalized citizens). Thankfully, they ran into a little problem: the courts.
Oct 20
When we say deep learning models operate via memorization, the claim isn't that they work like literal lookup tables, only being able to make sense of points that are exactly part of their training data. No one has claimed that -- it wouldn't even be true of linear regression.
Of course deep learning models can generalize to unseen data points -- they would be entirely useless if they couldn't. The claim is that they perform *local generalization*: generalization to known unknowns, to degrees of variability for which you can provide a dense sampling at training time.
If you take a problem that is known to be solvable by expert humans via pure pattern recognition (say, spotting the top move on a chess board) and that has been known to be solvable via convnets as far back as 2016, and you train a model on ~5B chess positions across ~10M games, and you find that the model can solve the problem at the level of a human expert, that isn't an example of out-of-distribution generalization. That is an example of local generalization -- precisely the thing you expect deep learning to be able to do.
Jun 22
Fact check: my 3-year-old builds Lego sets (age 5+ ones) on his own by following the instruction booklet. He started doing it before he turned 3 -- initially he needed externally provided error correction and guidance, but now he's fully autonomous. Can't handle sets for ages 8+ yet, though. We'll see what he does at 5.
He also builds his own ideas, which feature minor original inventions. Like this "jeep", which has a spare tire on the back -- not something he saw in any official set. Lego is the best toy ever, by the way.
Or this Lego garden (fresh from today). It has a hut with a cool door. It looks chaotic, but everything on here has a purpose. Everything is intended to be something (the tire on a stick is a tree, the tiny cone on the ground is a water sprinkler...)
Jun 11
I'm partnering with @mikeknoop to launch ARC Prize: a $1,000,000 competition to create an AI that can adapt to novelty and solve simple reasoning problems.

Let's get back on track towards AGI.

Website: arcprize.org

ARC Prize on @kaggle: kaggle.com/competitions/a…
I published the ARC benchmark over 4 years ago. It was intended to be a measure of how close we are to creating AI that can reason on its own – not just apply memorized patterns.
ARC tasks are easy for humans. They aren't complex. They don't require specialized knowledge – a child can solve them. But modern AI struggles with them.

Because they have one very important property: they're designed to be resistant to memorization.
May 14
It's amazing to me that the year is 2024 and some people still equate task-specific skill with intelligence. There is *no* specific task that cannot be solved *without* intelligence -- all you need is a sufficiently complete description of the task (removing all test-time novelty and uncertainty), and you can achieve arbitrary levels of skill while entirely bypassing the problem of intelligence. In the limit, even a simple hashtable can be superhuman at anything.
The "AI" of today still has near-zero (though not exactly zero) intelligence, despite achieving superhuman skill at many tasks.

Here's one thing that AI won't be able to do within five years (if you extrapolate from the excruciatingly slow progress of the past 15 years): acquiring new skills as efficiently as humans, using the same data. The ARC benchmark is an attempt at measuring roughly that.
The point of general intelligence is to make it possible to deal with novelty and uncertainty, which is what our lives are made of. Intelligence is the ability to improvise and adapt in the face of situations you weren't prepared for (either by your evolutionary history or by your past experience) -- to efficiently acquire skills at novel tasks, on the fly.
Apr 28
Many of the people who are concerned with falling birthrates aren't willing to consider the set of policies that would address the problem -- aggressive tax breaks for families, free daycare, free education, free healthcare, and building more/denser housing to slash the price of homes.

Most people want children, but can't afford them.
I've always found it striking how very rich couples (50M+ net worth) all tend to have over 3 children (and often many more). And how young women always say they want children -- yet in practice they delay family building because they are forced to focus on financial stability and therefore career. When money is no object, families have 3+ children.
For middle incomes (below 1M/year), fertility goes down as income goes up, because *the cost of raising children increases with income* due to *opportunity cost*. If you make $150k and stand to eventually grow to $300k, you are losing a lot of money by quitting your job to raise children (on top of the prohibitive cost of raising children -- which also goes up as your income, and thus your standards, go up). You are thus *more* likely to postpone having children.

Starting at 1M/year, fertility rates rise again. And couples that make 5+M/year get to have the number of children they actually want -- which is almost always more than 3, and quite often 5+.
