Citation: LIN Jingjing, YE Zhonglin, ZHAO Haixing, et al., “DeepHGNN: A Novel Deep Hypergraph Neural Network,” Chinese Journal of Electronics, vol.31, no.5, doi: 10.1049/cje.2021.00.108.
[1] J. Bruna, W. Zaremba, A. Szlam, et al., “Spectral networks and locally connected networks on graphs,” available at: https://arxiv.org/pdf/1312.6203.pdf, 2014-5-21.
[2] P. Veličković, G. Cucurull, A. Casanova, et al., “Graph attention networks,” available at: https://arxiv.org/abs/1710.10903v1, 2018-2-4.
[3] Z. L. Ye, H. X. Zhao, Y. Zhu, et al., “HSNR: A network representation learning algorithm using hierarchical structure embedding,” Chinese Journal of Electronics, vol.29, no.6, pp.1141–1152, 2020.
[4] T. N. Kipf and M. Welling, “Variational graph auto-encoders,” available at: https://arxiv.org/pdf/1611.07308.pdf, 2016-11-21.
[5] M. H. Zhang and Y. X. Chen, “Link prediction based on graph neural networks,” Thirty-second Conference on Neural Information Processing Systems, Montréal, Canada, pp.5171–5181, 2018.
[6] Z. L. Ye, H. X. Zhao, K. Zhang, et al., “Tri-party deep network representation learning using inductive matrix completion,” Journal of Central South University, vol.26, no.10, pp.2746–2758, 2019. doi: 10.1007/s11771-019-4210-8
[7] Z. H. Wu, S. R. Pan, F. W. Chen, et al., “A comprehensive survey on graph neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol.32, no.1, pp.4–24, 2021. doi: 10.1109/TNNLS.2020.2978386
[8] B. B. Xu, K. T. Cen, J. J. Huang, et al., “A survey on graph convolutional neural network,” Chinese Journal of Computers, vol.43, no.5, pp.755–780, 2020. (in Chinese)
[9] Y. F. Feng, H. X. You, Z. Z. Zhang, et al., “Hypergraph neural networks,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol.33, no.01, pp.3558–3565, 2019.
[10] J. W. Jiang, Y. X. Wei, Y. F. Feng, et al., “Dynamic hypergraph neural networks,” in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, pp.2635–2641, 2019.
[11] S. Bai, F. H. Zhang, and P. H. S. Torr, “Hypergraph convolution and hypergraph attention,” Pattern Recognition, vol.110, article no.107637, 2021. doi: 10.1016/j.patcog.2020.107637
[12] N. Yadati, M. Nimishakavi, P. Yadav, et al., “HyperGCN: A new method of training graph convolutional networks on hypergraphs,” Thirty-third Conference on Neural Information Processing Systems, Vancouver, Canada, pp.1509–1520, 2019.
[13] R. C. Zhang, Y. S. Zou, and J. Ma, “Hyper-SAGNN: A self-attention based graph neural network for hypergraphs,” International Conference on Learning Representations, Addis Ababa, Ethiopia, pp.1–7, 2020.
[14] J. Yi and J. Park, “Hypergraph convolutional recurrent neural network,” in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Virtual Event, pp.3366–3376, 2020.
[15] X. G. Sun, H. Z. Yin, B. Liu, et al., “Heterogeneous hypergraph embedding for graph classification,” available at: https://arxiv.org/pdf/2010.10728.pdf, 2021-1-18.
[16] E. S. Kim, W. Y. Kang, K. W. On, et al., “Hypergraph attention networks for multimodal learning,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, USA, pp.14569–14578, 2020.
[17] Y. B. Zhang, N. Wang, Y. F. Chen, et al., “Hypergraph label propagation network,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol.34, no.04, pp.6885–6892, 2020.
[18] X. P. Wu, Q. C. Chen, W. Li, et al., “AdaHGNN: Adaptive hypergraph neural networks for multi-label image classification,” in Proceedings of the 28th ACM International Conference on Multimedia, Seattle, USA, pp.284–293, 2020.
[19] M. Lostar and I. Rekik, “Deep hypergraph U-Net for brain graph embedding and classification,” available at: https://arxiv.org/pdf/2008.13118.pdf, 2020-8-30.
[20] M. Defferrard, X. Bresson, and P. Vandergheynst, “Convolutional neural networks on graphs with fast localized spectral filtering,” in Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, pp.3844–3852, 2016.
[21] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” available at: https://arxiv.org/pdf/1609.02907v4.pdf, 2017-2-22.
[22] R. Y. Li, S. Wang, F. Y. Zhu, et al., “Adaptive graph convolutional neural networks,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol.32, no.1, pp.3546–3553, 2018.
[23] C. Y. Zhuang and Q. Ma, “Dual graph convolutional networks for graph-based semi-supervised classification,” in Proceedings of the 2018 World Wide Web Conference, Lyon, France, pp.499–508, 2018.
[24] F. Wu, A. H. Souza, T. Y. Zhang, et al., “Simplifying graph convolutional networks,” in Proceedings of the 36th International Conference on Machine Learning, vol.97, pp.6861–6871, 2019.
[25] Y. Feng, M. Gai, F. H. Wang, et al., “Classification and early warning model of terrorist attacks based on optimal GCN,” Chinese Journal of Electronics, vol.29, no.6, pp.1193–1200, 2020. doi: 10.1049/cje.2020.10.005
[26] F. Lyu, L. Y. Li, Q. M. Fu, et al., “Multi-label image classification via Coarse-to-Fine attention,” Chinese Journal of Electronics, vol.28, no.6, pp.1118–1126, 2019. doi: 10.1049/cje.2019.07.015
[27] J. N. Zhang, X. J. Shi, J. Y. Xie, et al., “GaAN: Gated attention networks for learning on large and spatiotemporal graphs,” in Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, Monterey, USA, pp.339–349, 2018.
[28] W. L. Hamilton, R. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, USA, pp.1025–1035, 2017.
[29] J. Chen, T. F. Ma, and C. Xiao, “FastGCN: Fast learning with graph convolutional networks via importance sampling,” International Conference on Learning Representations, Vancouver, Canada, arXiv:1801.10247, 2018.
[30] J. Gilmer, S. S. Schoenholz, P. F. Riley, et al., “Neural message passing for quantum chemistry,” in Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, vol.70, pp.1263–1272, 2017.
[31] F. Monti, D. Boscaini, J. Masci, et al., “Geometric deep learning on graphs and manifolds using mixture model CNNs,” 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, pp.5425–5434, 2017.
[32] Z. Z. Zhang, H. J. Lin, and Y. Gao, “Dynamic hypergraph structure learning,” in Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, pp.3162–3169, 2018.
[33] S. Bandyopadhyay, K. Das, and M. N. Murty, “Line hypergraph convolution network: Applying graph convolution for hypergraphs,” available at: https://arxiv.org/pdf/2002.03392.pdf, 2020-2-9.
[34] L. H. Tran and L. H. Tran, “Directed hypergraph neural network,” available at: https://arxiv.org/ftp/arxiv/papers/2008/2008.03626.pdf, 2020-8-9.
[35] K. Z. Ding, J. L. Wang, J. D. Li, et al., “Be more with less: Hypergraph attention networks for inductive text classification,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Online, pp.4927–4936, 2020.
[36] Q. M. Li, Z. C. Han, and X. M. Wu, “Deeper insights into graph convolutional networks for semi-supervised learning,” available at: https://arxiv.org/pdf/1801.07606.pdf, 2018-1-22.
[37] J. Klicpera, A. Bojchevski, and S. Günnemann, “Predict then propagate: Graph neural networks meet personalized PageRank,” International Conference on Learning Representations, New Orleans, USA, pp.1–15, 2019.
[38] K. Y. Xu, C. T. Li, Y. L. Tian, et al., “Representation learning on graphs with jumping knowledge network,” The Thirty-fifth International Conference on Machine Learning, Stockholm, Sweden, pp.1–14, 2018.
[39] G. H. Li, M. Müller, A. Thabet, et al., “DeepGCNs: Can GCNs go as deep as CNNs?” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, pp.9266–9275, 2019.
[40] Y. Rong, W. B. Huang, T. Y. Xu, et al., “DropEdge: Towards deep graph convolutional networks on node classification,” International Conference on Learning Representations, Addis Ababa, Ethiopia, pp.1–13, 2020.
[41] M. Chen, Z. W. Wei, Z. F. Huang, et al., “Simple and deep graph convolutional networks,” in Proceedings of the 37th International Conference on Machine Learning, vol.119, pp.1725–1735, 2020.
[42] K. M. He, X. Y. Zhang, S. Q. Ren, et al., “Deep residual learning for image recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, pp.770–778, 2016.
[43] D. Y. Chen, X. P. Tian, Y. T. Shen, et al., “On visual similarity based 3D model retrieval,” Computer Graphics Forum, vol.22, pp.223–232, 2003.
[44] Z. R. Wu, S. R. Song, A. Khosla, et al., “3D ShapeNets: A deep representation for volumetric shapes,” in Proceedings of the 28th IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, pp.1912–1920, 2015.
[45] H. Su, S. Maji, E. Kalogerakis, et al., “Multi-view convolutional neural networks for 3D shape recognition,” 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, pp.945–953, 2015.
[46] Y. F. Feng, Z. Z. Zhang, X. B. Zhao, et al., “GVCNN: Group-view convolutional neural networks for 3D shape recognition,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, pp.264–272, 2018.
[47] J. X. Li, B. M. Chen, and G. H. Lee, “SO-Net: Self-organizing network for point cloud analysis,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, pp.9397–9406, 2018.