Five papers have been accepted to #ICLR2023 from my group (including one oral presentation), covering topics ranging from combining pretrained LMs with GNNs to deep generative models and pretraining methods for drug discovery.
1) An oral presentation. We propose an effective and efficient method for combining pretrained LLMs and GNNs on large-scale text-attributed graphs via variational EM. It achieves first place on 3 node property prediction tasks on the OGB leaderboards. (A toy sketch of the alternating training appears after this list.)
2) An end-to-end diffusion model for protein sequence and structure co-design, which iteratively refines sequences and structures through a denoising network.
3) An encoder-decoder framework for protein-ligand docking: an encoder models the protein 3D structure, the molecular graph, and their interactions, and a diffusion network predicts the complex structure.
4) We pretrained a geometric protein structure encoder on experimental structures from the PDB and structures predicted by AlphaFold 2. It outperforms sequence-based pretraining methods built on protein LMs.
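The alternating training behind paper 1 can be illustrated with a small sketch. Below, an MLP stands in for the pretrained LM and a one-layer mean-aggregation model stands in for the GNN; each is trained in turn on gold labels plus pseudo-labels produced by the other, in the spirit of the variational EM alternation. The graph, model sizes, and schedule are all made up for illustration and are not the paper's implementation.

```python
# Toy sketch of LM/GNN alternation (hypothetical data and models, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N, D, C = 100, 32, 4                        # nodes, text-feature dim, classes (hypothetical)
x_text = torch.randn(N, D)                  # stand-in for per-node text features
adj = (torch.rand(N, N) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()         # random symmetric adjacency (hypothetical graph)
deg = adj.sum(1, keepdim=True).clamp(min=1)
labels = torch.randint(0, C, (N,))
observed = torch.rand(N) < 0.3              # only 30% of nodes carry gold labels

lm = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, C))   # "LM" text classifier
gnn_lin = nn.Linear(C, C)                                           # toy 1-layer GNN head

def gnn(logits):
    # mean-aggregate neighbor predictions, then transform (a toy GNN layer)
    return gnn_lin(adj @ logits / deg)

opt_lm = torch.optim.Adam(lm.parameters(), lr=1e-2)
opt_gnn = torch.optim.Adam(gnn_lin.parameters(), lr=1e-2)

def fit(forward, opt, targets, steps=50):
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(forward(), targets)
        loss.backward()
        opt.step()

for em_round in range(3):
    # E-step: fix the GNN, train the LM on gold labels + GNN pseudo-labels
    with torch.no_grad():
        pseudo = gnn(lm(x_text)).argmax(1)
    fit(lambda: lm(x_text), opt_lm, torch.where(observed, labels, pseudo))
    # M-step: fix the LM, train the GNN on gold labels + LM pseudo-labels
    with torch.no_grad():
        lm_logits = lm(x_text)
        pseudo = lm_logits.argmax(1)
    fit(lambda: gnn(lm_logits), opt_gnn, torch.where(observed, labels, pseudo))
```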
Five papers related to GNNs have been accepted to #NeurIPS2021 from my group, covering knowledge graph reasoning, drug discovery, scene graph generation, and algorithmic reasoning. Congrats to all my students and collaborators.
1) Zhu et al. Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction. arxiv.org/abs/2106.06935
A state-of-the-art GNN-based framework for link prediction in both transductive and inductive settings, which computes pair representations with a generalized Bellman-Ford recursion (a minimal sketch follows after this list).
2) Luo and Shi et al. Predicting Molecular Conformation via Dynamic Graph Score Matching.
A new approach for molecular conformation generation that models both short- and long-range interactions between atoms via score matching on dynamically constructed graphs (see the second sketch below).
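For paper 1, the core idea is a generalized Bellman-Ford recursion: the representation of a node pair (source, v) is obtained by iteratively aggregating messages along edges, starting from a boundary condition at the source node. The sketch below uses a tiny made-up graph, multiplication as the MESSAGE operator, and sum as the AGGREGATE operator; it illustrates the recursion rather than reproducing the paper's model.

```python
# Toy generalized Bellman-Ford iteration for link prediction (illustrative only).
import torch

torch.manual_seed(0)
num_nodes, num_rel, dim, T = 6, 2, 8, 3
# hypothetical edge list of (head, relation, tail) triples
edges = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (1, 1, 4), (4, 0, 5)]
rel_emb = torch.randn(num_rel, dim)      # relation embeddings (learnable in the real model)
query = torch.randn(dim)                 # embedding of the query relation

def bellman_ford(source):
    # boundary condition: the source node is initialized with the query embedding
    h = torch.zeros(num_nodes, dim)
    h[source] = query
    for _ in range(T):
        new_h = h.clone()                      # keep the boundary condition / self-loop
        for head, rel, tail in edges:
            msg = h[head] * rel_emb[rel]       # MESSAGE: multiplication, DistMult-style
            new_h[tail] = new_h[tail] + msg    # AGGREGATE: sum over incoming messages
        h = new_h
    return h                                   # h[v] is the pair representation of (source, v)

pair_repr = bellman_ford(source=0)
scores = pair_repr @ query                     # score each candidate tail node for the link
print(scores)
```

For paper 2, the underlying training signal is denoising score matching: atom coordinates are perturbed with Gaussian noise, an interaction graph is rebuilt from the noisy geometry (hence "dynamic"), and a network learns to predict the direction back toward the clean conformation. The toy sketch below, including the random "molecule", the radius cutoff, and the small score network, is purely illustrative and not the paper's architecture.

```python
# Toy denoising score matching with a dynamically rebuilt neighbor graph (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
n_atoms, sigma, cutoff = 10, 0.5, 2.0
pos_clean = torch.randn(n_atoms, 3) * 2.0            # stand-in "molecule" conformation

score_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)

def predicted_score(pos):
    # rebuild the interaction graph from the current (noisy) geometry
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise displacement vectors
    dist = diff.norm(dim=-1, keepdim=True)
    neighbor = (dist.squeeze(-1) < cutoff) & ~torch.eye(n_atoms, dtype=torch.bool)
    feat = torch.cat([diff, dist], dim=-1)            # edge features: direction + distance
    weight = score_net(feat).squeeze(-1) * neighbor.float()   # learned per-edge magnitude
    unit = diff / dist.clamp(min=1e-6)
    # score estimate per atom: weighted sum of unit directions to its current neighbors
    return (weight[..., None] * unit).sum(dim=1)

for step in range(200):
    noise = torch.randn_like(pos_clean) * sigma
    pos_noisy = pos_clean + noise
    target = -noise / sigma**2                        # score of the Gaussian perturbation kernel
    loss = ((predicted_score(pos_noisy) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final denoising score matching loss:", loss.item())
```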