The MobileNet family of convolutional architectures uses depth-wise convolutions, in which each channel of the input is convolved independently.
Their basic building block is called the "inverted residual bottleneck", compared here with the basic blocks of ResNet and Xception (dw-conv stands for depth-wise convolution).
Here is MobileNetV2, optimized for a low parameter count and fast inference.
And, from the same family of architectures, EfficientNetB0. It is very similar, but was obtained through an automated neural architecture search. Notice the 5x5 convolutions.
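The inverted residual bottleneck described above can be sketched in Keras. This is a minimal, illustrative version, not the exact layer stack from the book's illustrations; the function name and default parameters are my own:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual_block(x, expansion=6, filters=24, stride=1):
    """MobileNetV2-style inverted residual bottleneck (sketch)."""
    in_channels = x.shape[-1]
    # 1x1 conv expands the channel count ("inverted": wide in the middle)
    y = layers.Conv2D(expansion * in_channels, 1, use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)
    # 3x3 depth-wise conv: each channel is convolved independently
    y = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)
    # 1x1 linear projection back down, with no activation
    y = layers.Conv2D(filters, 1, use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    # residual connection only when input and output shapes match
    if stride == 1 and in_channels == filters:
        y = layers.Add()([x, y])
    return y
```

The depth-wise step costs roughly k²·C multiply-adds per pixel instead of k²·C_in·C_out for a regular convolution, which is where the speed and weight savings come from.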

Illustrations from "Practical ML for Computer vision"…

Keep Current with Martin Görner

More from @martin_gorner

28 Jun
I made a ton of ML architecture illustrations for an upcoming book. Starting with good old Alex Net

The book:… by @lak_gcp, Ryan Gillard and myself.
And, just as good and old, VGG19:
Here is a SqueezeNet module. The paper calls them "fire 🔥 modules".
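A fire module is easy to express in Keras. This is a sketch under my own naming; the squeeze/expand sizes are illustrative defaults, not necessarily the configuration used at every stage of the network:

```python
import tensorflow as tf
from tensorflow.keras import layers

def fire_module(x, squeeze=16, expand=64):
    """SqueezeNet "fire module" (sketch): squeeze, then expand."""
    # 1x1 "squeeze" layer cuts the channel count to keep the 3x3 convs cheap
    s = layers.Conv2D(squeeze, 1, padding="same", activation="relu")(x)
    # parallel 1x1 and 3x3 "expand" layers, concatenated along channels
    e1 = layers.Conv2D(expand, 1, padding="same", activation="relu")(s)
    e3 = layers.Conv2D(expand, 3, padding="same", activation="relu")(s)
    return layers.Concatenate()([e1, e3])
```

The output has 2×expand channels; the squeeze layer is what keeps the parameter count low, since the 3x3 kernels only see `squeeze` input channels.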
28 Nov 19
Now reading the ARC paper by @fchollet, “On the Measure of Intelligence”, where he proposes a new benchmark for “intelligence” called the “Abstraction and Reasoning Corpus”.
Highlights below ->
Chess was considered the pinnacle of human intelligence … until a computer surpassed Garry Kasparov in 1997. Today, it is hard to argue that a min-max algorithm with optimizations represents “intelligence”.
AlphaGo took this a step further: it became world champion at Go by using deep learning. Still, the program is narrowly focused on playing Go, and solving this task did not lead to breakthroughs in other fields.
13 Sep 18
Google Cloud Platform now has preconfigured deep learning images with TensorFlow, PyTorch, Jupyter, CUDA and cuDNN already installed. It took me some time to figure out how to start Jupyter on such an instance. Turns out it's a one-liner:
Detailed instructions:
1) Go to and create an instance (pick the TensorFlow deep learning image and a powerful GPU)
2) SSH into your instance using the "gcloud compute ssh" command in the pic (there will be additional install prompts to accept and a reboot on the first connection; relaunch the command after that to reconnect). Replace PROJECT_NAME and INSTANCE_NAME with your own values.
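The exact one-liner is in the screenshot; as an assumption about what it contains, the usual pattern is plain SSH port forwarding so the remote Jupyter shows up on localhost (ZONE_NAME is a placeholder, like the other two):

```shell
# SSH in and forward the remote Jupyter port (8080) to your machine
gcloud compute ssh --project PROJECT_NAME --zone ZONE_NAME INSTANCE_NAME -- -L 8080:localhost:8080
# then browse to http://localhost:8080
```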
19 Jan 17
I believe a dev can get up to speed on neural networks in 3h and then keep learning on their own. Ready for a crash course? /1
Got 3 more hours? The "Tensorflow without a PhD" series continues. First, a deep dive into modern convolutional architectures.
This session walks you through the construction of a neural network that can spot airplanes in aerial imagery. A good place to start for software devs who know some basics (relu, softmax, ...) and want to see a real model built from scratch.
