I firmly believe in giving back to the community I came from, as well as paying it forward and making (geometric) deep learning more inclusive of underrepresented communities in general.

Accordingly, this summer you can (virtually) find me at several summer schools! A thread (1/9)
At @EEMLcommunity 2021, I will give a lecture on graph neural networks from the ground up, followed by a GNN lab session led by @ni_jovanovic. I will also host a mentorship session with several aspiring mentees!

Based on the 2020 edition, I anticipate that a recording will be made available! (2/9)
Alongside @mmbronstein @joanbruna @TacoCohen, I will be co-hosting a course on Geometric Deep Learning for the African Master of Machine Intelligence @AIMS_Next.

This will closely follow our recently released proto-book, and we hope to make the materials more broadly available. (3/9)
At this year's PSI:ML seminar (powered by @MicrosoftSrbija et al.), I will deliver a longer introductory lecture on GNNs and geometric DL.

PSI:ML is the premier machine learning summer school for undergraduates in my home country, and I am honoured to be invited to speak there. (4/9)
Lastly, I joined @logml2021 as a project mentor (steering a mini-project that unifies graph neural networks, classical algorithms, and text adventures). I found the opportunity to collaborate closely with attendees very attractive, and I look forward to the results! (5/9)
Why do all of these? Here are a few reasons:

My research career was largely made possible by individuals and organisations---in Serbia, the UK and Canada---who offered me their support, whether motivational, inspirational or financial, when I had no results to justify it. (6/9)
It is wonderful to be able to give such support to aspiring students, whether in the community you came from or more widely.

If you get such an opportunity, seize it! Not only might you positively affect somebody's future career, but you're also likely to gain amazing collaborators. (7/9)
But besides the opportunity to empower others, I've found that participating in such seminars has consistently improved my ability to communicate science, which, surprise surprise, results in better-written papers. :)

The best way to understand something is to explain it to somebody else! (8/9)
Finally, it's usually *not* too early to think about contributing in this way.

I started by asking to give the occasional computer science seminar at my old high school, while I was still an undergrad. Opportunities tend to multiply at the PhD level & beyond! (9/9)

