Simone Scardapane
I fall in love with a new #machinelearning topic every month 🙄 Tenure-track Assistant Professor @SapienzaRoma | Previously @iaml_it @SmarterPodcast | @GoogleDevExpert

Nov 3, 2021, 7 tweets

*Neural networks for data science* lecture 4 is out! 👇

aka "here I am talking about convolutional neural networks while everyone asks me about transformers"

/n

CNNs are a great way to show how considerations about the data can guide the design of the model.

For example, assuming only locality (and not translation invariance), we get locally-connected networks (sketched below).

/n
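
To make the locality vs. weight-sharing distinction concrete, here is a minimal toy sketch (my own example, not taken from the lecture): a locally-connected layer keeps a separate filter per output position, while a convolution additionally shares one filter across all positions.

import numpy as np

# Toy 1D example: input of length N, window (kernel) of size K, "valid" outputs.
N, K = 8, 3
x = np.random.randn(N)
n_out = N - K + 1

# Locality only -> locally-connected layer: one weight vector per output position.
W_local = np.random.randn(n_out, K)                    # n_out * K parameters
y_local = np.array([W_local[i] @ x[i:i + K] for i in range(n_out)])

# Locality + translation invariance -> convolution: a single shared filter.
w_shared = np.random.randn(K)                          # K parameters
y_conv = np.array([w_shared @ x[i:i + K] for i in range(n_out)])

print(W_local.size, w_shared.size)                     # 18 vs. 3 parameters
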

Everything else is a pretty standard derivation of CNN ideas (stride, global pooling, receptive field, ...).

/n
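
As a quick aside (my own sketch, not taken from the slides), the receptive field of a stack of convolutions follows the usual recursion over kernel sizes and strides:

# Receptive field of a stack of convolutions, via the standard recursion
#   r_l = r_{l-1} + (k_l - 1) * j_{l-1},   j_l = j_{l-1} * s_l
# where k is the kernel size, s the stride, j the cumulative stride ("jump").
layers = [(3, 1), (3, 2), (3, 2)]   # (kernel, stride) per layer -- a made-up stack

r, j = 1, 1
for k, s in layers:
    r += (k - 1) * j
    j *= s
    print(r)                        # grows 3 -> 5 -> 9 for this stack
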

This year I have also included a few notable variants: 1x1 convs, depthwise models, 1D & 3D convolutions, masked (causal) layers, ...

I am pretty sure there are 10^4 index errors around. 😅

/n
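
For reference, a minimal PyTorch sketch of the variants above (my own, with arbitrary shapes):

import torch
from torch import nn

x_img = torch.randn(1, 16, 32, 32)     # (batch, channels, height, width)
x_seq = torch.randn(1, 16, 100)        # (batch, channels, time)
x_vol = torch.randn(1, 16, 8, 32, 32)  # (batch, channels, depth, height, width)

# 1x1 convolution: mixes channels only, no spatial context.
pointwise = nn.Conv2d(16, 32, kernel_size=1)

# Depthwise convolution: one filter per channel (groups = in_channels).
depthwise = nn.Conv2d(16, 16, kernel_size=3, padding=1, groups=16)

# 3D convolution, e.g. for video or volumetric data.
conv3d = nn.Conv3d(16, 32, kernel_size=3, padding=1)

# Masked (causal) 1D convolution: pad only on the left, so the output at
# time t never looks at inputs later than t.
causal = nn.Conv1d(16, 32, kernel_size=3)
y = causal(nn.functional.pad(x_seq, (2, 0)))   # left-pad by kernel_size - 1

print(pointwise(x_img).shape, depthwise(x_img).shape, conv3d(x_vol).shape, y.shape)
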

I have also added some material discussing the relation w/ signal processing and the Fourier transform.

Will I be able to cover generalizations of these ideas to other domains? We'll see. 🤷‍♂️

/n
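
The key link here is the convolution theorem: circular convolution in the signal domain is elementwise multiplication in the Fourier domain. A quick numpy check (my own, not from the notes):

import numpy as np

# Convolution theorem: circular convolution in the signal domain
# equals elementwise multiplication in the Fourier domain.
N = 64
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
w = rng.standard_normal(N)

direct = np.array([sum(x[m] * w[(n - m) % N] for m in range(N)) for n in range(N)])
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(w)).real

print(np.allclose(direct, via_fft))   # True
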

Several ideas expand on Chapter 7 of @D2L_ai. The geometric deep learning bits come instead from the fantastic GDL overview by @mmbronstein @joanbruna @TacoCohen @PetarV_93:

arxiv.org/abs/2104.13478

/n

As always, all the material is on my website: sscardapane.it/teaching/nnds-…

Coming up soon: LOTS of tricks to train deeper neural networks.
