I firmly believe in giving back to the community I came from, as well as paying it forward by making (geometric) deep learning more inclusive of underrepresented communities in general.
Accordingly, this summer you can find me (virtually) at several summer schools! A thread (1/9)
At @EEMLcommunity 2021, I will give a lecture on graph neural networks from the ground up, followed by a GNN lab session led by @ni_jovanovic. I will also host a mentorship session with several aspiring mentees!
Based on the 2020 edition, I anticipate a recording will be available! (2/9)
This will closely follow our recently-released proto-book and we hope to make materials more broadly available. (3/9)
At this year's PSI:ML seminar (powered by @MicrosoftSrbija et al.), I will deliver a longer introductory lecture on GNNs and geometric DL.
PSI:ML is the prime machine learning summer school for undergrads in my home country, and I am honoured to be invited to speak there. (4/9)
Lastly, I joined @logml2021 as a project mentor (steering a mini-project which unifies graph neural networks, classical algorithms, and text adventures). The opportunity to collaborate closely with attendees was a big draw, and I look forward to the result! (5/9)
Why do all of these? Here are a few reasons:
My research career was largely made possible by individuals and organisations---in Serbia, the UK and Canada---who offered me their support, whether motivational, inspirational or financial, when I had no results to justify it. (6/9)
It is wonderful to be able to give such support to aspiring students, either in the community you came from or more widely.
If you get such an opportunity, seize it! Not only might you positively affect somebody's future career, but you're also likely to gain amazing collaborators. (7/9)
But besides the opportunity to empower, I've found that participating in seminars consistently improved my ability to communicate science, which, surprise-surprise, results in better-written papers. :)
The best way to understand something is to explain it to somebody else! (8/9)
Finally, it's usually *not* too early to think about contributing in this way.
I started by asking to give the occasional seminar about computer science in my old high school, while I was still in my undergrad. Opportunities tend to magnify at PhD student level & beyond! (9/9)
Proud to share our 150-page "proto-book" with @mmbronstein, @joanbruna and @TacoCohen on geometric DL! Through the lens of symmetries and invariances, we attempt to distill "all you need to build the architectures that are all you need".
We have investigated the essence of popular deep learning architectures (CNNs, GNNs, Transformers, LSTMs) and realised that, given an appropriate set of symmetries we would like our models to respect, they can all be expressed using a common geometric blueprint.
But there's more!
Going further, we use our blueprint on less standard domains (such as homogeneous groups and manifolds), showing that the blueprint allows for nicely expressing recent advances in those areas, such as Spherical CNNs, SO(3)-Transformers, and Gauge-Equivariant Mesh CNNs.
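The blueprint's core property — layers that commute with the domain's symmetry group — can be sketched for graphs, whose symmetry group is node permutations. Below is a toy numpy check (my own illustrative layer and naming, not the book's notation): a sum-aggregation message-passing layer gives the same output whether you relabel the nodes before or after applying it.

```python
import numpy as np

def gnn_layer(X, A, W):
    """One sum-aggregation message-passing layer: H = (A X) W."""
    return (A @ X) @ W

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 3, 4
X = rng.standard_normal((n, d_in))                 # node features
A = rng.integers(0, 2, (n, n)).astype(float)       # adjacency matrix
W = rng.standard_normal((d_in, d_out))             # shared weights

P = np.eye(n)[rng.permutation(n)]                  # random permutation matrix

# Permutation equivariance: relabelling inputs (P X, P A P^T)
# is the same as relabelling the layer's outputs (P H).
lhs = gnn_layer(P @ X, P @ A @ P.T, W)
rhs = P @ gnn_layer(X, A, W)
assert np.allclose(lhs, rhs)
```

The same check fails for an arbitrary fully-connected layer on the flattened graph — which is exactly why the blueprint constrains the layer, not the data.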
The crowd has spoken! 🙃 A thread with early-stage machine learning research advice follows below. 👇🧵
Important disclaimer before proceeding: these are my personal views only, and likely strongly biased by my experiences and temperament. Hopefully useful nonetheless! 1/15
During the early stages of my PhD, one problem would often arise: I would come up with ideas that simply weren't the right kind of idea for the kind of hardware/software/expertise setup I had in my department. 2/15
This would lead me on 'witch hunts' that took months (sometimes forcing me to spend my own salary on compute!). The game-changer for me was corresponding with researchers whose work influenced what I wanted to do: first learning from their perspectives, and eventually securing internships. 3/15
Over the past weeks, several people have reached out to me for comment on "Combining Label Propagation and Simple Models Out-performs Graph Neural Networks" -- a very cool LabelProp-based baseline for graph representation learning. Here's a thread 👇 1/14
Firstly, I'd like to note that, in my opinion, this is a very strong and important work for representation learning on graphs. It provides us with so many lightweight baselines that often perform amazingly well -- on that, I strongly congratulate the authors! 2/14
I think most of the discussion comes from the title -- most people reaching out to me ask "Does this mean we don't need GNNs at all?", "Have GNNs been buried?", etc.
In reality, this work reinforces something we've known in graph representation learning for quite some time. 3/14
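For reference, the core Label Propagation idea that such baselines build on fits in a few lines — this is a generic sketch of the classic algorithm (neighbour averaging with clamped known labels), not the authors' exact variant:

```python
import numpy as np

def label_prop(A, y, mask, iters=50):
    """Classic label propagation: repeatedly average neighbours' soft
    labels, clamping the nodes whose labels are known.
    A: (n, n) adjacency; y: (n, c) one-hot labels; mask: known nodes."""
    deg = A.sum(1, keepdims=True).clip(min=1)
    F = np.where(mask[:, None], y, 0.0)
    for _ in range(iters):
        F = (A @ F) / deg          # average neighbour labels
        F[mask] = y[mask]          # clamp the known labels
    return F.argmax(1)

# Tiny path graph 0-1-2-3: the ends are labelled, the middle is inferred.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0.]])
y = np.eye(2)[[0, 0, 1, 1]]
mask = np.array([True, False, False, True])
pred = label_prop(A, y, mask)      # → [0, 0, 1, 1]
```

No learned parameters at all — which is what makes it such a strong sanity-check baseline on homophilous graphs, where neighbours tend to share labels.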
As requested, here are a few non-exhaustive resources I'd recommend for getting started with Graph Neural Nets (GNNs), depending on what flavour of learning suits you best.
Covering blogs, talks, deep-dives, feeds, data, repositories, books and university courses! A thread 👇