Petar Veličković
Sep 17, 2020 · 8 tweets
As requested, here are a few non-exhaustive resources I'd recommend for getting started with Graph Neural Nets (GNNs), depending on what flavour of learning suits you best.

Covering blogs, talks, deep-dives, feeds, data, repositories, books and university courses! A thread 👇
For blogs, I'd recommend:
- @thomaskipf's post on Graph Convolutional Networks:
tkipf.github.io/graph-convolut…
- My blog on Graph Attention Networks:
petar-v.com/GAT/
- A series of comprehensive deep-dives from @mmbronstein: towardsdatascience.com/graph-deep-lea…
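To make the core idea behind the GAT post concrete: each attention head scores a node's neighbours and softmax-normalises those scores into aggregation weights. A minimal pure-Python sketch (illustrative only; the raw scores below are made-up values, not taken from the post):

```python
import math

def gat_attention_weights(scores):
    """Softmax-normalise raw attention scores over a node's neighbourhood,
    as done per attention head in Graph Attention Networks."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Raw scores e_ij = LeakyReLU(a^T [W h_i || W h_j]) for three neighbours
# (hypothetical values for illustration):
weights = gat_attention_weights([2.0, 0.5, -1.0])
print(weights)  # weights sum to 1; the highest-scoring neighbour dominates
```

The learned part of GAT lives in how the raw scores are produced; the normalisation step above is what turns them into a weighted neighbourhood average.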
For a comprehensive overview of the area in the form of a talk, I would highly recommend @xbresson's guest lecture at NYU's Deep Learning course.

For keeping up with the latest trends in graph representation learning, @SergeyI49013776 maintains a very useful Telegram feed: t.me/graphML, as well as a recently-launched GRL newsletter: newsletter.ivanovml.com/issues/gml-new…
For access to the most recent strong GRL benchmark datasets, I would recommend the OGB (ogb.stanford.edu) by @weihua916 et al., and Benchmarking-GNNs: github.com/graphdeeplearn… by @vijaypradwi, @chaitjo et al.
For quickly getting started with GRL implementations, check out PyTorch Geometric by @rusty1s: github.com/rusty1s/pytorc… and DGL by @GraphDeep: dgl.ai
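The core operation both of these libraries implement (efficiently, in vectorised form) is neighbourhood aggregation. A hedged pure-Python sketch of one such step on a toy graph — not either library's API, just the idea they package up:

```python
def message_passing_step(adj, features):
    """One round of neighbourhood aggregation: each node's new feature is the
    mean over itself and its neighbours (scalar features, for simplicity).
    adj: dict mapping node -> list of neighbour nodes
    features: dict mapping node -> feature value
    """
    new_features = {}
    for node, neighbours in adj.items():
        group = [node] + neighbours  # include a self-loop, as in GCN-style updates
        new_features[node] = sum(features[n] for n in group) / len(group)
    return new_features

# Toy path graph 0-1-2, with all the signal initially on node 0:
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: 1.0, 1: 0.0, 2: 0.0}
print(message_passing_step(adj, feats))  # node 1 picks up signal from node 0
```

Stacking several such rounds (with learned transformations in between) is what lets information propagate across the graph; PyTorch Geometric and DGL handle batching, sparsity and GPU execution for you.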

For a repository containing the most curated set of GRL papers, tutorials etc, check out: github.com/naganandy/grap…
For an awesome over-arching textbook resource on the entire field, consult the recent GRL book by @williamleif: cs.mcgill.ca/~wlh/grl_book/

For excellent university courses, check out CS224W by @jure: web.stanford.edu/class/cs224w/ and COMP 766 by @williamleif: cs.mcgill.ca/~wlh/comp766/
Any further resources I might have missed? Feel free to comment on any part of this thread.

Hope you'll find it useful! 😊

More from @PetarV_93

Dec 12, 2022
If you are @LogConference, come to the virtual Poster Session in ~20 minutes -- we have _four_ posters on algorithmic alignment, reasoning and over-squashing in GNNs! 🕸️🍾🌐 Several of them are award-winning!

You're welcome to stop by for a chat. 😊
See the 🧵 for details... 🔢
🌐 In "Reasoning-Modulated Representations", Matko Bošnjak, @thomaskipf, @AlexLerchner, @RaiaHadsell, Razvan Pascanu, @BlundellCharles and I demonstrate how to leverage arbitrary algorithmic priors for self-supervised learning. It even transfers _across_ different Atari games!
🤖 In "Continuous Neural Algorithmic Planners", @heyu0208, @pl219_Cambridge, @andreeadeac22 and I show how the ideas from XLVIN paper can generalise to continuous-action-space environments (such as MuJoCo!). CNAP won the Best Paper Runner-up Award at GroundedML @ ICLR'22!
Jul 27, 2022
📢 New & improved material to dive into geometric deep learning! 💠🕸️

We (@mmbronstein @joanbruna @TacoCohen) delivered our Master's course on GDL @AIMS_Next once again & are making all materials publicly available!

geometricdeeplearning.com/lectures/

See thread 🧵 for gems 💎 & dragons 🐉!
What to expect in the 2022 iteration?

We made careful modifications to our content, making it more streamlined & accessible!

Featuring a revamped introductory lecture, clearer discussion of Transformers & a new lecture going beyond groups, into the realm of category theory! 🐲
Beyond this, we offer a completely revamped set of exciting guest seminars, with @Francesco_dgv @ffabffrasca @crisbodnar @Russb09 & Geordie Williamson...

...and Colab tutorials on GDL from @crisbodnar @DutaIulia @paulmorio @_gabrielecesa_ @charlieharris01 @chaitjo & Ramon Viñas!
Jun 2, 2022
Proud to share our CLRS benchmark: probing GNNs to execute 30 diverse algorithms! ⚡️

github.com/deepmind/clrs
arxiv.org/abs/2205.15659 (@icmlconf'22)

Find out all about our 2-year effort below! 🧵

w/ Adrià @davidmbudden @rpascanu @AndreaBanino Misha @RaiaHadsell @BlundellCharles
Why an algorithmic benchmark?

Algorithmic reasoning has emerged as a very important area of representation learning! Many key works (feat. @KeyuluXu @jingling_li @StefanieJegelka @beabevi_ @brunofmr) explored important theoretical and empirical aspects of algorithmic alignment.
Critically, each one of these works (incl. mine!) operates over its own datasets, often making it hard to directly compare insights across papers.

Further, generating adequate datasets requires knowledge of theoretical computer science, raising the barrier to entry to the field.
Jun 1, 2022
Two years ago, I embarked on an 'engineering' project.

From my perspective (research scientist with 'decent' coding skills), it seemed simple enough. It turned out anything but.

In advance of celebrating our @icmlconf acceptance, an appreciation thread for AI engineering! 1/11
Why did I class the project as simple at first?

It required no (apparent) novel research (though it could enable lots of new research!), I had the theoretical skills to understand everything that needs to be implemented, and it amounted to standard supervised learning! 2/11
So I started implementing by myself. What could possibly go wrong? Turns out, pretty much everything. :)

Indeed, I understood all I needed to write generators of the data. But this didn't mean I knew how to most efficiently extract it, organise it, and make it accessible! 3/11
Mar 9, 2022
This is a very cool paper!

However, if I understood it correctly, it doesn't invalidate the GNN-DP alignment result of @KeyuluXu et al. [33].

Rather, it shows a very interesting DP unsolvability result over arbitrarily-initialised features. See thread -- happy to discuss. 1/4
GNN _computations_ align with DP. If you initialise the node features _properly_ (e.g. identifying the source vertex):

r[s] = 1, r[u] = 0 (for u =/= s)
d[s] = 0, d[u] = -1

GNNs are then perfectly capable of finding shortest paths. The proof in the paper seems more subtle... 2/4
Namely, that GNNs are hopeless at solving some DP problems (e.g. path-finding) under _arbitrary, fixed_ (e.g. constant / randomised) initialisations. But that is, in my opinion, a different statement from "GNNs don't align with DP"! 3/4
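The "proper initialisation" point can be made concrete with a tiny Bellman-Ford-style sketch: once the source is identified in the node features (d[s] = 0, everything else marked unreached), plain message passing recovers shortest paths on an unweighted graph. A hedged pure-Python illustration (my own toy code, not from the thread or the paper under discussion):

```python
def shortest_paths_mp(adj, s, steps):
    """Bellman-Ford as message passing on an unweighted graph.
    Initialise as in the thread: d[s] = 0, d[u] = -1 ("unreached") otherwise.
    Each round, every node minimises over (neighbour distance + 1)."""
    d = {u: (0 if u == s else -1) for u in adj}
    for _ in range(steps):
        new_d = dict(d)
        for u in adj:
            reached = [d[v] + 1 for v in adj[u] if d[v] >= 0]
            if reached:
                cand = min(reached)
                new_d[u] = cand if d[u] < 0 else min(d[u], cand)
        d = new_d
    return d

# Path graph 0-1-2-3, source 0; after 3 rounds every node holds its distance:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(shortest_paths_mp(adj, 0, 3))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

The update rule is exactly a min-aggregation message-passing step, which is the alignment the Xu et al. result formalises; with a constant or random initialisation the same procedure has no way to single out the source, matching the unsolvability result above.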
Jan 24, 2022
Geometric & Graph ML were a 2021 highlight, with exciting fundamental research & high-profile applications.

@mmbronstein and I interviewed distinguished experts to review this progress & predict 2022 trends. It's our longest post yet! See 🧵 for summary.

michael-bronstein.medium.com/predictions-an…
Trend #1: Geometry becomes increasingly important in ML. Quotes from Melanie Weber (@UniofOxford), @pimdehaan (@UvA_Amsterdam), @Francesco_dgv (@Twitter) and Aasa Feragen (@uni_copenhagen).
Trend #2: Message passing is still the dominant paradigm in GNNs. Quotes from @HaggaiMaron (@nvidia).
