Scalable Message Passing Neural Networks: No Need for Attention in Large Graph Representation Learning

Type: Preprint

Publication Date: 2024-10-29

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2411.00835

Abstract

We propose Scalable Message Passing Neural Networks (SMPNNs) and demonstrate that integrating standard convolutional message passing into a Pre-Layer Normalization Transformer-style block, in place of attention, yields high-performing deep message-passing-based Graph Neural Networks (GNNs). This modification produces results competitive with the state of the art in large-graph transductive learning, outperforming the best Graph Transformers in the literature without the computational and memory cost of the attention mechanism. Our architecture not only scales to large graphs but also makes it possible to construct deep message-passing networks, unlike simple GNNs, which have traditionally been constrained to shallow architectures by oversmoothing. Moreover, we provide a new theoretical analysis of oversmoothing based on universal approximation, which we use to motivate SMPNNs. We show that, in the context of graph convolutions, residual connections are necessary for maintaining the universal approximation properties of downstream learners, and that removing them can lead to a loss of universality.
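
The abstract describes the SMPNN block only at a high level. The following is a minimal sketch of how such a block could look, assuming a Pre-LN Transformer layout in which a standard graph convolution replaces the attention sub-block and residual connections wrap both the convolution and the feed-forward MLP. The choice of PyTorch Geometric's GCNConv, the layer sizes, and the activation are illustrative assumptions, not details taken from the paper.

```python
# Sketch of an SMPNN-style block (assumed layout: Pre-LN Transformer block
# with graph convolution substituted for attention).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv  # standard convolutional message passing


class SMPNNBlock(nn.Module):
    """Pre-LN block: LayerNorm -> graph conv -> residual,
    then LayerNorm -> feed-forward MLP -> residual."""

    def __init__(self, dim: int, ffn_mult: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.conv = GCNConv(dim, dim)  # message passing in place of attention
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, ffn_mult * dim),
            nn.GELU(),
            nn.Linear(ffn_mult * dim, dim),
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # Residual connections are kept around both sub-blocks; the paper argues
        # they are needed to preserve universal approximation in deep stacks.
        x = x + self.conv(self.norm1(x), edge_index)
        x = x + self.ffn(self.norm2(x))
        return x


if __name__ == "__main__":
    # Toy usage: stack several blocks to form a deep message-passing network.
    x = torch.randn(100, 64)                      # 100 nodes, 64-dim features
    edge_index = torch.randint(0, 100, (2, 400))  # random edges for illustration
    blocks = nn.ModuleList([SMPNNBlock(64) for _ in range(8)])
    for block in blocks:
        x = block(x, edge_index)
    print(x.shape)  # torch.Size([100, 64])
```

Because the convolution is local and sparse, each block costs O(|E|) rather than the O(|V|^2) of dense attention, which is what allows the stack to be made deep on large graphs.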

Locations

  • arXiv (Cornell University)

Similar Works

  • AMPNet: Attention as Message Passing for Graph Neural Networks (2022). Syed Asad Rizvi, Nhi Nguyen, Haoran Lyu, Ben Christensen, Josue Ortega Caro, Emanuele Zappala, Maria Brbić, Rahul M. Dhodapkar, David van Dijk
  • Masked Attention is All You Need for Graphs (2024). David Buterez, Jon Paul Janet, Dino Oglić, Pietro Liò
  • Improving Subgraph-GNNs via Edge-Level Ego-Network Encodings (2023). Nurudin Alvarez-Gonzalez, Andreas Kaltenbrunner, Vicenç Gómez
  • A Flexible, Equivariant Framework for Subgraph GNNs via Graph Products and Graph Coarsening (2024). Guy Bar-Shalom, Yam Eitan, Fabrizio Frasca, Haggai Maron
  • Hierarchical Message-Passing Graph Neural Networks (2020). Zhiqiang Zhong, Cheng-Te Li, Jun Pang
  • Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning (2021). Tong Chen, Hongzhi Yin, Jie Ren, Zi Huang, Xiangliang Zhang, Hao Wang
  • Hierarchical message-passing graph neural networks (2022). Zhiqiang Zhong, Cheng-Te Li, Jun Pang
  • Pure Transformers are Powerful Graph Learners (2022). Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong
  • GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks (2023). Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan
  • GMLP: Building Scalable and Flexible Graph Neural Networks with Feature-Message Passing (2021). Wentao Zhang, Yu Shen, Zheyu Lin, Yang Li, Xiao-Sen Li, Wen Ouyang, Yangyu Tao, Zhi Yang, Bin Cui
  • Subgraphormer: Unifying Subgraph GNNs and Graph Transformers via Graph Products (2024). Guy Bar-Shalom, Beatrice Bevilacqua, Haggai Maron
  • On Provable Benefits of Depth in Training Graph Convolutional Networks (2021). Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi
  • Gradient Gating for Deep Multi-Rate Learning on Graphs (2022). T. Konstantin Rusch, Benjamin Paul Chamberlain, Michael W. Mahoney, Michael M. Bronstein, Siddhartha Mishra
  • Walking Out of the Weisfeiler Leman Hierarchy: Graph Learning Beyond Message Passing (2021). Jan Tönshoff, Martin Ritzert, Hinrikus Wolf, Martin Grohe
  • DeeperGCN: All You Need to Train Deeper GCNs (2020). Guohao Li, Chenxin Xiong, Ali Thabet, Bernard Ghanem
  • Expander Graph Propagation (2022). Andreea Deac, Marc Lackenby, Petar Veličković
  • Learnable Graph Convolutional Attention Networks (2022). Adrián Javaloy, Pablo Sánchez-Martín, Amit Levi, Isabel Valera
  • Simple yet Effective Gradient-Free Graph Convolutional Networks (2023). Yulin Zhu, Xing Ai, Qimai Li, Xiao-Ming Wu, Kai Zhou
  • Constant Time Graph Neural Networks (2022). Ryoma Sato, Makoto Yamada, Hisashi Kashima

Works That Cite This (0)


Works Cited by This (0)
