Here is Mask R-CNN, one of the most popular architectures for object detection and segmentation.
The conceptual principle of the R-CNN family is a two-step process for object detection: 1) a Region Proposal Network (RPN) identifies regions of interest (ROIs); 2) the ROIs are cut out of the image and fed through a classifier.
In fact, the cutting is not done on the original image but directly on the feature maps extracted from the backbone. Since the feature maps have a much lower resolution than the image, the cropping requires some care: sub-pixel extraction and interpolation, a.k.a. "ROI alignment".
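Not from the paper, but the sub-pixel cropping idea can be sketched in a few lines of numpy: sample each output cell of the crop at a fractional coordinate and interpolate bilinearly between the four surrounding feature-map values (real implementations average several samples per cell).

```python
import numpy as np

def roi_align(feature, box, out_size=(2, 2)):
    """Crop `box` = (y0, x0, y1, x1), given in feature-map coordinates, out of a
    2-D feature map, sampling one bilinear-interpolated value per output cell."""
    y0, x0, y1, x1 = box
    out_h, out_w = out_size
    bin_h = (y1 - y0) / out_h
    bin_w = (x1 - x0) / out_w
    out = np.empty(out_size, dtype=feature.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # sample at the centre of each output bin: a sub-pixel coordinate
            y = y0 + (i + 0.5) * bin_h
            x = x0 + (j + 0.5) * bin_w
            yl, xl = int(np.floor(y)), int(np.floor(x))
            yh = min(yl + 1, feature.shape[0] - 1)
            xh = min(xl + 1, feature.shape[1] - 1)
            wy, wx = y - yl, x - xl
            # bilinear blend of the four neighbouring feature-map values
            out[i, j] = (feature[yl, xl] * (1 - wy) * (1 - wx)
                         + feature[yl, xh] * (1 - wy) * wx
                         + feature[yh, xl] * wy * (1 - wx)
                         + feature[yh, xh] * wy * wx)
    return out
```

On a feature map that is a linear gradient, the interpolation recovers the exact sub-pixel values, which is a handy sanity check.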
This makes Mask R-CNN effectively single-pass!
Extracted features are then used for classification and segmentation. To generate segmentation masks, the architecture uses "transposed convolutions", marked as "deconv." in the architecture diagram.
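A toy numpy sketch (not the actual Mask R-CNN code) of what a transposed convolution does: each input pixel "paints" a copy of the kernel, scaled by its value, into a larger output, which is how the mask head upsamples coarse features.

```python
import numpy as np

def conv2d_transpose(x, kernel, stride=2):
    """Transposed convolution on a single-channel map: scatter a scaled copy
    of `kernel` into the output for every input pixel, `stride` pixels apart."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * kernel
    return out
```

With stride 2 and a 2x2 kernel the painted patches tile the output exactly, so a 2x2 input becomes a 4x4 output: spatial upsampling, the opposite of a strided convolution.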
The MobileNet family of convolutional architectures uses depth-wise convolutions, where the channels of the input are convolved independently.
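A minimal numpy sketch of the depth-wise idea (not MobileNet's actual implementation): one spatial filter per channel, with no mixing across channels, which is what makes it so much cheaper than a regular convolution.

```python
import numpy as np

def depthwise_conv(x, kernels):
    """Depth-wise convolution, 'valid' padding, stride 1.
    x: (H, W, C) input; kernels: (kh, kw, C), one filter per channel."""
    H, W, C = x.shape
    kh, kw, _ = kernels.shape
    out = np.zeros((H - kh + 1, W - kw + 1, C))
    for c in range(C):  # each channel is convolved independently
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j, c] = np.sum(x[i:i + kh, j:j + kw, c] * kernels[:, :, c])
    return out
```

Cost check: a regular 3x3 convolution does kh*kw*Cin multiplies per output value and per output channel; the depth-wise version does only kh*kw, leaving the channel mixing to a cheap 1x1 convolution.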
Their basic building block is called the "Inverted Residual Bottleneck", compared here with the basic blocks of ResNet and Xception ("dw-conv" for depth-wise convolution).
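As a rough numpy sketch of the block's structure (random stand-in weights, all-ones depth-wise kernels for simplicity; the real block also uses batch norm and ReLU6): expand channels with a 1x1 convolution, filter spatially with a depth-wise convolution, project back down with a linear 1x1 convolution, then add the skip connection between the narrow ends.

```python
import numpy as np

def pointwise(x, w):
    """1x1 convolution = per-pixel matmul mixing channels. x: (H, W, Cin), w: (Cin, Cout)."""
    return x @ w

def depthwise3x3(x):
    """3x3 depth-wise conv, 'same' padding, all-ones kernels for simplicity:
    each channel is filtered independently of the others."""
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            # sum over the 3x3 neighbourhood, separately per channel
            out[i, j] = xp[i:i + 3, j:j + 3].sum(axis=(0, 1))
    return out

def inverted_residual(x, expand=6):
    """Inverted residual bottleneck: wide in the middle, narrow at both ends."""
    C = x.shape[-1]
    rng = np.random.default_rng(0)
    w_expand = rng.normal(size=(C, C * expand))   # stand-in weights
    w_project = rng.normal(size=(C * expand, C))
    h = np.maximum(pointwise(x, w_expand), 0)     # 1x1 expansion + ReLU
    h = np.maximum(depthwise3x3(h), 0)            # depth-wise filtering + ReLU
    h = pointwise(h, w_project)                   # 1x1 linear projection, no ReLU
    return x + h                                  # residual joins the narrow ends
```

"Inverted" because, unlike a ResNet bottleneck, the residual connection sits on the narrow tensors while the expensive computation happens in the expanded middle.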
Here is MobileNetV2, optimized for low weight count and fast inference.
Now reading the ARC paper by @fchollet. arxiv.org/abs/1911.01547 “On the measure of intelligence” where he proposes a new benchmark for “intelligence” called the “Abstraction and Reasoning corpus”.
Highlights below ->
@fchollet Chess was long considered a pinnacle of human intelligence… until a computer surpassed Garry Kasparov in 1997. Today, it is hard to argue that a minimax algorithm with optimizations represents "intelligence".
@fchollet AlphaGo took this a step further: it beat the world champion at Go using deep learning. Still, the program is narrowly focused on playing Go, and solving this task did not lead to breakthroughs in other fields.
Google Cloud Platform now has preconfigured deep learning images with TensorFlow, PyTorch, Jupyter, CUDA and cuDNN already installed. It took me some time to figure out how to start Jupyter on such an instance. Turns out it's a one-liner:
Detailed instructions: 1) Go to cloud.google.com/console and create an instance (pick the TensorFlow deep learning image and a powerful GPU).
2) SSH into your instance using the "gcloud compute ssh" command in the pic (there will be additional install prompts to accept and a reboot on the first connection; relaunch the command after that to reconnect). Replace PROJECT_NAME and INSTANCE_NAME with your own values.
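The exact one-liner is in the screenshot; as a hedged sketch, the usual pattern with these images is to forward the instance's Jupyter port over SSH (the deep learning images serve Jupyter on port 8080). All names below are placeholders.

```shell
# Sketch, not verbatim from the screenshot: tunnel the instance's Jupyter
# port (8080 on the deep learning images) to your machine, then open
# http://localhost:8080 in a browser. Replace the placeholder values.
gcloud compute ssh INSTANCE_NAME \
    --project PROJECT_NAME --zone ZONE_NAME \
    -- -L 8080:localhost:8080
```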
I believe a dev can get up to speed on neural networks in 3 hours and then keep learning on their own. Ready for a crash course? /1
Got 3 more hours? The "Tensorflow without a PhD" series continues, starting with a deep dive into modern convolutional architectures.
This session walks you through the construction of a neural network that can spot airplanes in aerial imagery. A good place to start for software devs who know some basics (relu, softmax, ...) and want to see a real model built from scratch.